Choosing Good

Well, Reid Hoffman beat me to it. I had been wrestling with how the pithy “cheap, fast, or good: pick two” equation had been fundamentally upended by AI when he said the following:

“There’s a classic framing at work: fast, cheap, good—pick two. AI is beginning to mess with that triangle by allowing organizations to quickly reallocate the tradeoffs. For example: You can take “fast” and reinvest it into “good.” Or take “cheap” and reinvest it into “fast.” But you don’t get to keep the AI amplification without consequences, because other players in your market also get the tool. Once everyone has the AI accelerant, it’ll be about how organizations intelligently deploy AI, and how they strategically reinvest the time savings that will separate the winners and the losers.”

So thanks, Reid. I’ll take it from here.

There are two strategic decisions in Reid’s point that I want to highlight:

  1. The choice to invest in good
  2. The choice to reallocate or reinvest any gained capacity

I am going to spend most of my words from here on the first choice, but it’s in service of the second. So keep both in mind.

The real-world evidence is mounting, demonstrating that across AI-impacted domains the default choice is fast and cheap.

Good be damned.

Consequently, when everyone is making the same calculation, while using the same tools, there remains only one differentiating strategic decision: whether to invest in good.

Yes, good has an impact on the cheap part of the equation, but it all depends on which horizon you’re looking at for impact. What the oft-cited “95% of GenAI pilots fail” tells me is that most applications of AI are choosing the shortest possible return cycle, prioritizing marginal bottom-line impact over long-term reinvestment as a strategic objective. The ROI of increased capacity is difficult to quantify if it is not redeployed in a value-creating way. It’s easier to just assume that any gained efficiencies can be banked as cost cutting. But this is a missed opportunity that demonstrates just how marginalized the “good” option is.

For a technology struggling with sustained adoption due to a combination of a lack of trust in the origins of the models (theft of intellectual property), lower trust in the quality of the outputs (hallucinations and mistakes), and its overall questionable utility (the world is more than lines of code), any vision for how AI will ultimately impact the world, any good it might engender, has to reconcile the perception of present-day destruction of value with its prophesied future.

Those with a vested interest in the revolutionary potential of AI often ignore these present realities while arguing that everyone who isn’t unflinchingly getting on board is obstructing progress. Refusing to treat them as real challenges to address will create even more harm than what has already been unleashed; “Trust us, AGI is coming” is not convincing given the stakes, and it is an abdication of responsibility in the face of already significant negative externalities.

On every level, and from every angle, the prioritization of speed is emerging as the biggest contributor to those negative externalities, and the biggest strategic problem.

Speed of infrastructure build out.

Speed of change.

Speed of output.

And the speed of the requisite judgement to process and respond to it all.

It’s no longer a given that an increase in the speed of a technology leads to better outcomes.

Speed as a stand-in for good was always debatable, but never more so than now. “Move fast and break things” was a warning, not a mantra.

What is needed are competing visions of good to overcome the default of fast and cheap, and to avoid the worst possible outcomes of AI’s rapid expansion. What follows is the scaffolding of how we might think about “good” and AI.

To do so, let’s talk about aviation. Specifically, the history of autopilot.

As use and adoption of autopilot increased over the course of multiple decades, a worry emerged, after a number of accidents, that autopilot was contributing to the atrophying of pilot skills, such that pilots struggled to fly when assistance failed. This became of sufficient concern that NASA researched the issue and published Examination of Automation-Induced Complacency and Individual Difference Variates:

“Automation-induced complacency has been documented as a cause or contributing factor in many airplane accidents throughout the last two decades. It is surmised that the condition results when a crew is working in highly reliable automated environments in which they serve as supervisory controllers monitoring system states for occasional automation failures.”

Simply put, pilots delegated much of the requisite judgement of flying to the automation tool, turning them into monitors who were no longer as able to influence or direct the outcome of the flight as they once were.

There have been a number of similar studies related to AI, including more recent work, but rather than get into the debate about whether one should or shouldn’t adopt AI, let’s look at how the aviation industry handled the risks associated with automation-induced complacency as an analogous path forward, and through, a lot of the noise around AI today.

The aviation industry handled automation-induced complacency by recognizing that the best outcomes required a hybrid approach. Pilots are now encouraged to fly manually to maintain their skills (take-off and landing in particular), while still taking advantage of the benefits of autopilot (reduced cognitive load over the course of long flights) to help with the overall quality of the flight: safety, fuel efficiency, smoothness, punctuality, etc.

This automation in service of quality is an important lesson for AI. How aviation responded to the issue sits in stark contrast to how AI leaders have handled safety concerns: scolding the public for not adopting their solutions, without demonstrating the requisite amount of reflection to understand why there might be hesitation. If planes were falling out of the sky, would anyone scold the public for not flying more? Yet here we are.

For myriad reasons—many cultural—we struggle to define the boundaries of “good” for AI because those who push the technology believe any kind of constraint on innovation will kill growth. And without growth, well, there is no other acceptable “good” that could emerge.

Yet the history of aviation paints a picture of growth that is strikingly different from that of technology over the last two or three decades. What is unique to aviation is that the speed of planes has not materially changed in 50 or 60 years. In fact, with the retirement of the Concorde, the average top speed has actually dropped. While it is apples and oranges to compare aviation to computers, given the present and future risks of AI I think the contrast is worth considering.

And, while I am no expert on aviation, from my vantage it looks like an industry that grew thanks to an investment in safety, not in ever-increasing speed.

And with safety the industry had a clear definition of “good” that resulted in increased demand.

Increased capacity became an expansive investment, not an efficiency to be filled with additional cycles, or flights.

For AI to have a meaningful impact outside of software engineering, it is going to have to reconcile with the fact that most domains do not operate on rapid Build-Measure-Learn cycles. There is no staging environment. Most domains exist in production. Yet large AI models come with the disclaimer that “this LLM can make mistakes”. So, unlike autopilot, AI-powered automation counterintuitively burdens the user with additional cognitive load, thanks to the cycles wasted verifying work done with these models.

The superpower of AI is not automation through delegation, but rather its ability to foreground moments of discernment and judgement that would previously have taken exponentially longer to synthesize.

But asking people to execute more of those moments as the realization of its value is an acceleration of burden, fatigue, and burnout, not necessarily the creation of capacity through efficiencies.

The semantic search capabilities of LLMs are truly remarkable — the ability to connect, stitch, condense, and make accessible disparate forms of information is an achievement — but only when executed against a known corpus of knowledge and data.

Otherwise, the burden of discernment is doubled: we are asking people to exercise judgement over the validity of the source (and its training material), and then to assess and potentially act on the output, which is by its nature emergent (non-deterministic, unlike traditional software) and thus variable in substance and quality.

Getting to the point where we can trust the source material is a much larger investment than anyone espousing the transformational potential of AI seems to understand, as evidenced by the paucity of investment in the middle layer between LLM and action. The focus to date has almost exclusively been on the models, and the belief that better models will solve their own failings given enough training.

Believing that is risky.

Investing in discrete deployments of LLMs against a known corpus is where quality emerges.

Removing the requirement to question whether to trust the outputs should be an imperative.

Prioritizing it is one way to choose “good”.
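
To make that concrete, here is a minimal sketch, in plain Python, of what a discrete deployment against a known corpus might look like. Everything in it is an illustrative assumption rather than anyone’s production system: the tiny corpus, the crude word-overlap scoring (a stand-in for real semantic search), and the refusal threshold. The shape is what matters: retrieval is restricted to vetted sources, every answer names its source, and when the corpus is silent the system refuses rather than improvises.

```python
# A minimal, illustrative sketch of an LLM-style answerer grounded against a
# known corpus. The corpus, scoring, and threshold below are assumptions for
# demonstration only; word overlap stands in for real semantic search.

KNOWN_CORPUS = {
    "flight-ops-manual": "Pilots should hand-fly takeoff and landing to maintain proficiency.",
    "autopilot-guide": "Autopilot reduces cognitive load on long-haul cruise segments.",
}

def retrieve(query, corpus, min_overlap=2):
    """Return (source_id, passage) pairs whose word overlap with the query
    clears a minimum threshold, best match first."""
    query_words = set(query.lower().split())
    hits = []
    for source_id, passage in corpus.items():
        overlap = len(query_words & set(passage.lower().split()))
        if overlap >= min_overlap:
            hits.append((overlap, source_id, passage))
    hits.sort(reverse=True)  # strongest overlap first
    return [(source_id, passage) for _, source_id, passage in hits]

def answer(query):
    """Answer only from the known corpus, citing the source; refuse otherwise."""
    hits = retrieve(query, KNOWN_CORPUS)
    if not hits:
        # Refusing is the point: no known source, no output to second-guess.
        return "No grounded answer available in the known corpus."
    source_id, passage = hits[0]
    return f"{passage} [source: {source_id}]"

print(answer("when should pilots hand-fly takeoff and landing"))
# -> Pilots should hand-fly takeoff and landing to maintain proficiency. [source: flight-ops-manual]
```

The refusal path is the design choice worth noticing: it is what removes the requirement to question whether to trust the output.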

When organizations report that they are not seeing ROI from their AI investments, it is likely because those investments are not designed to give their employees more opportunity to exercise good judgement and discernment.

But that may be by design, or the product of legacy management. Asking employees to exercise judgement and discernment is, after all, risky. Risky because the history of enterprise is one of repeatable and predictable procedures at scale. LLMs are not that. At least not as they have been deployed to date. Most deployments have been made in the belief that LLMs’ value is intrinsic to their horizontal training, rather than the product of deliberately integrated, vertical, end-to-end design; in other words, of deployment in contexts where “good” is known, and intentionally chosen as the desired outcome.

Good can mean a lot of different things. And unless you interrogate what you mean by good, you will inevitably end up choosing fast and cheap, like everyone else, and race towards undifferentiated commodification. No one wants that. I don’t. I prefer my revolutionary technologies to deliver more than marginal efficiencies. I want net new modes of value creation and exchange.

WHAT VALUE ARE WE GAINING?


One way of thinking about good is in the most traditional market sense: two parties exchanging value, selling or acquiring what the other party couldn’t do themselves. The trajectory we appear to be on is one where no service is worth its price when compared to spinning up your own home-brewed replacement. Setting aside the real cost of trading service for maintenance (and the difference between a bespoke suit and spinning your own yarn), I am not convinced that a future that turns everyone into an atomized unit of competition, deploying their uniquely vibe-coded bots and apps to fight over scraps of micro-transactional value, is a viable solution to downward economic pressures. We are falsely equating speed-of-build with time-to-value, valuing the building of anything over its actual use.

It is counter-productive to trade away the capacity afforded by delegating to existing systems in favour of the false promise that ownership is always worth the accompanying responsibility and accountability. These DIY replacements often also come without the capital benefits (and liabilities) of incorporation. It is just shifting labour out of the realm of compensation and into the realm of chore, equivalent in market value to everything done to maintain a home. But now with token costs to track.

So what is gained? What capacity, if any, is created? When benefits are framed in terms of efficiencies, they are never qualified by specifying whether it is efficiency in the sense of a reduction in waste, or efficiency in the sense of a reduction in friction that allows capital to cycle faster.

It is telling that a lot of what we’re trying to automate away with these DIY substitutes, and in the coming white-collar apocalypse, is the inefficiency born out of trying to cycle capital as fast as possible. Almost as though such a pursuit of efficiency is a fundamentally wasteful activity.

I believe in helping people achieve what they cannot do on their own. That fundamental asymmetry of needs and skills, supply and demand, is the foundation of commerce. It’s at the heart of the earliest markets. But it is in systemic decline. Substituting AI for people in the equation, so that exchange happens between individuals and their AI, is a contraction of exchange, not growth, in my view. Good is a future where AI multiplies the ways in which we support each other, not one where everyone and their bot is in competition with everyone else, exchanging nothing of value. When I only listen to my personalized GenAI playlist of songs, and you listen to yours, never shall we come together and sing in chorus.

SOLVING THE PROBLEMS OF SPEED WITH MORE SPEED


The history of computers has been one where the problems of yesterday are promised to be solved tomorrow by the sheer advancement of technological power. Efficiency through endlessly accelerating cycle times.

As I have been arguing, the speed with which AI operates creates new problems that chip away at the promise of automated efficiency. And as Moore’s Law slows down, I think this well-worn projection is losing credibility, and relevance.

Maybe the reason this generation of AI isn’t being adopted at the scale its creators hoped when LLMs were introduced to the world a couple of years ago is that what AI is good at doesn’t align with the struggles most people face; people aren’t going to reinvest any gained capacity into accelerating the pace of their already burdened lives.

The reason AI is not achieving adoption commensurate with the expectations of the vast sums invested is a lack of product-market fit. In other words, AI is trying to solve the problems born out of a world that already feels too fast, with more speed. And while we think we are automating away the low-value tasks, what we’re really doing is pushing the frontier of the expected frequency of transaction.

I believe AI is not being adopted widely as a consumer product because we have hit a limit for how frequently we can exercise discernment and judgement before burning out. We are approaching our transactional asymptote. Speed no longer equals good. We don’t see any value in going faster.

The metaphor I would use comes from ornithology. A number of birds have four types of cones in their eyes. The fourth cone allows them to see ultraviolet. Ultraviolet light has a higher frequency than our eyes can process. It contains information, but we would need help to see it; it would have to be translated, slowed down, for us to understand it. Otherwise, it might as well be black, the absence of colour. Which is what the command prompt terminal is to most people: flat black with the promise of ultraviolet. Solving speed with more speed is pushing our collective experience of technology into the ultraviolet, into a frequency beyond which we can derive meaningful information. Agent-to-agent interactions are not intrinsically valuable because they transact faster. Their value, if they are generating any, only emerges when the process is slowed down to a wavelength we can perceive.

If we are to see gains in capacity as a result of collaborating with AI, we need patterns of interaction that slow down the judgement process and give time for better decisions. What we don’t need is an acceleration of their volume such that we are compelled to delegate into the ultraviolet.

If we are approaching the asymptote of our collective ability to transact, then automating the space between those moments of judgement should give us more time to make good ones: less waste in the form of bad decisions, and thus better outcomes. Isn’t that the ultimate efficiency? Where AI in healthcare is the automation of what happens between the care, not of the care itself? Where gained operational capacity is redeployed to add more doctors seeing more patients, not to have the same number of doctors squeezing more people into their already tight judgement windows?

It’s safer flights. Not faster flights.

It’s making fewer, better decisions.

Higher value. Lower frequency.

I believe there is no shortage of worthy labour if you care about defining good. It just looks different. And AI can help us get to good. AI can be a release valve on the cognitive load of all the judgement and discernment required today. It doesn’t have to be an amplifier of burnout.

But that looks very different from the current applications, which are actively eroding what humans care most about. We are experiencing a loss of value because that is what the technology and its creators supply, not what we demand.

We should demand AI be deployed to create real human capacity.

Capacity that can be redeployed to areas AI can’t service.

Capacity to slow the world down.

Doing this well requires new definitions of good, which requires imagination.

AI is fundamentally constrained by its training data. It is bound to history, not the future. It might be able to code the next version of itself, but that will always be a sharpening of its rearview mirror.

I, for one, am not going to allow AI to define what good means. Nor will I let it determine its impact. And I will do my best to harness any capacity emergent from its properties to bring more good into my world. But I will never delegate the judgement of whether I have succeeded in doing so. Nor should you.

Slow down.

Choose good.

Whatever that is for you. Start there.

