I’ve been thinking a great deal about how AI1 is reshaping the way we craft software, the way we work, and human activities in general. It’s been on my mind for years. Here’s a picture that shows my thoughts, with dot * in the lower right quadrant being where we are today:

[Figure: ai-what-do]

The Axes

The x-axis represents where AI power sits, i.e. who owns the software and hardware assets running AI computation, where those assets are physically located, and how they are governed.

The y-axis is about AI capabilities and human choices about how to actually use AI. I originally framed this as “AI % share of labor”. But we humans do so much more than just work. And it turns out AI is capable of non-work things, too. So the y-axis instead represents AI capabilities, a.k.a. “from a purely technical standpoint, what percentage of human activity could AI stand in for, if we wanted to use it as a substitute?”. Those activities might be cognitive (happening now), or people-interactive, or physical (robot stuff).

Note: the y-axis is NOT actual substitution, i.e. “to what degree are we substituting AI for human efforts”. It represents what’s possible, the technical capability. And regardless of what’s possible, we have choices about how to apply the tech. (Think about nuclear tech for an analogy.)

You are here: *

Back to *: where we are today. I’ve drawn it far down on the y-axis to indicate today’s relatively low level of AI capability. Yes, there is a huge amount of chatter and hype about what AI can do. And in the software field, particularly, AI substitutability for humans is significant and growing fast. But in the full sphere of human activity there are many, many things AI cannot do. Yet.

* is drawn far to the right on the x-axis to indicate the current high degree of AI power centralization and concentration. The vast bulk of people using AI services today consume from a handful of AI labs: OpenAI, Google, Meta, DeepSeek, Microsoft. Almost exclusively atop Nvidia’s silicon.

Everything else on the graph is scenarios… speculations about the future.

Scenario No: No AI, thank you

Moving clockwise from *, the No scenario represents people who either cannot access AI (e.g. due to barriers like unreliable or costly electricity, limited compute access, etc.) or don’t want to.

Some people will undoubtedly remain in this quadrant, but I believe it will be a small minority in the long run. Compare to adoption of electricity. 10%, maybe?

Scenarios Ag (Agency) and Bg (Borg)

Dots Ag and Bg represent extreme future worlds.

In Ag, everyone has powerful AI running on a personal laptop or computing appliance. AI is ubiquitous, because it has become cheap to produce (R&D and train) and operate (inference, power consumption), and probably because the model algorithms and weights are open. Important: in this scenario, even though AI has immense capability, we may choose not to use it for certain applications, because those areas are reserved for human craft. We have agency. We can choose.

In Bg, a few or even just one mega-corp-government-borg-thingy wins out, and all the AI power is concentrated in one place. Individuals have effectively no agency over AI. Instead, we all rent AI services from the central power, with whatever assets we already had in the bank before AI became the main means of production. The rich get richer. Hopefully Borg-thingy is a nice landlord. Hope is a strategy, right?

Where to go from here? What to do?

The arrows radiating outwards from * represent possible next steps, and choices we need to make.

On current trajectory, AI technical capabilities are growing quickly, so * will shift up the y-axis as time passes. This might be slowed by headwinds such as technical barriers, regulatory changes, or societal backlash. But right now it looks like continued, fast technical capability growth, and both the major AI labs and smaller R&D entities are pushing hard for that. Tailwinds.

As for the x-axis, our aggregate heading seems far more uncertain to me. There are people trying to head in every direction. Impossible to predict.

I’m not saying we are going to end up at Ag, or at Bg. I actually believe both are unlikely, for various reasons, and that the more probable outcome is some unpredictable hybrid in-between scenario. (Unless AGI happens, in which case all knobs get turned straight to 11 and we tilt to one extreme or the other. Right, AI-labbers?)

What I am certain of is we are going to keep going “up” a long way in terms of AI capability and the opportunity/risk of substituting AIs for humans. In a very broad range of activities. As an individual I have no control over that. What I can do individually is choose where to head on the x-axis.

I’m going to start heading towards Ag. More agency. Less Borg risk.

As a software person, concrete things I can do: use many different AI models, not just what’s convenient; run AI locally on my own hardware, not just in the cloud; support open models, not just opaque black boxes. In general, avoid the monoculture. I’m open to ideas.
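One way to make “use many different AI models” concrete in code is to keep a thin seam between an application and any one provider, so swapping models is a one-line change rather than a rewrite. Here’s a minimal sketch in Python; the backends are toy stand-ins I made up for illustration, not real provider clients:

```python
# Sketch: a provider-agnostic chat seam. Each "backend" is just a
# function from prompt to reply, so local and hosted models are
# interchangeable behind the same interface.
from dataclasses import dataclass
from typing import Callable, Dict

Backend = Callable[[str], str]

@dataclass
class Chat:
    backends: Dict[str, Backend]

    def ask(self, model: str, prompt: str) -> str:
        # Route the prompt to whichever backend was requested.
        if model not in self.backends:
            raise KeyError(f"unknown model: {model}")
        return self.backends[model](prompt)

# Toy stand-ins for a locally-run open model and a hosted one.
def local_model(prompt: str) -> str:
    return f"[local] {prompt}"

def hosted_model(prompt: str) -> str:
    return f"[hosted] {prompt}"

chat = Chat(backends={"local": local_model, "hosted": hosted_model})
print(chat.ask("local", "hello"))   # → [local] hello
print(chat.ask("hosted", "hello"))  # → [hosted] hello
```

The point is the shape, not the code: if every call goes through one seam like this, trying a new model (or moving one onto your own hardware) costs almost nothing, which is exactly what avoiding the monoculture requires.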

Hope to see you en route.


  1. AI meaning large language models, their applications, machine learning, the whole ball of wax ↩︎