The Next Species of Computing

There is a diagram I drew on whiteboards for twenty years. Three boxes, stacked vertically. Application at the top. System Software in the middle. Hardware at the bottom. I called it the ASH model. Students thought they were learning about computer architecture. What they were actually learning was how the world absorbs a good idea.

The pattern works like this. Every transformative technology in computing begins as an application, something a developer builds to solve an immediate problem. When it proves its value, the operating system absorbs it; a kind of digital Darwinism mandates that the feature become infrastructure. Then, driven by the demands of speed and security, it descends further still, becoming permanent logic in the hardware itself.

We watched this happen with networking in the 1990s, with cryptography in the 2000s, with graphics acceleration in the 2010s. The path was never straight and never fast, but it was always the same path.

Artificial intelligence has been living happily at the Application layer for the past several years. ChatGPT, Copilot, Gemini, Claude: all of them arrived as cloud-hosted applications, accessible through a browser or an API call. The intelligence lived somewhere else, on a server farm you never saw, billed by the token.

This is exactly where transformative ideas are supposed to start, and the consumer benefits have been real. Professionals write emails faster, summarize documents in seconds, and skip research steps that used to take hours. The Application layer delivered on its promise for the people sitting at the front of the experience.

Microsoft recognized this consumer opportunity early and moved aggressively. Copilot was integrated into Windows and woven through the Office suite, carrying a price tag that reached thirty dollars per user per month for enterprise licenses. The results were, by most honest measures, disappointing. Adoption fell short of projections, enterprise pilots were frequently abandoned, and accuracy problems were persistent enough that some organizations disabled the product outright.

Microsoft has quietly repositioned the product multiple times since launch. The core problem was not execution. It was diagnosis. Microsoft identified a real phenomenon, that people wanted AI assistance in their daily work tools, and built toward it at the Application layer, adding capability on top of an infrastructure that was never redesigned to support it. The seams showed. The experience felt bolted on because, architecturally, it was.

But the consumer experience is only half the story. There is another layer of people entirely, and their time problem is an order of magnitude larger.

These are the engineers, researchers, and infrastructure professionals who set up AI systems, configure the environments, and keep everything running. Before a single consumer saves a minute using an AI tool, someone spent days, sometimes weeks, doing the unglamorous work underneath. Setting up NVIDIA’s drivers on a fresh machine has historically required adding third-party repositories, manually resolving software version conflicts, debugging kernel module failures, and hoping nothing breaks when the next system update arrives. AMD’s equivalent stack was even harder to configure reliably. A developer beginning a new AI project did not open a terminal and write code. They opened a terminal and spent two days fighting the environment before they were permitted to write code. The promise of AI saving time had a significant footnote: it first cost enormous amounts of time from the people doing the building.
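The workflow described above looked, on many machines, something like the following. The repository and driver version shown are illustrations only; the exact names varied from release to release:

```shell
# Illustrative only -- repository and driver versions varied by release.
# Step 1: add a third-party repository by hand.
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update

# Step 2: pin a specific driver version and hope it matches your kernel.
sudo apt install nvidia-driver-535

# Step 3: fetch the CUDA toolkit from NVIDIA's separate repository,
# then resolve any version conflicts between toolkit and driver.
# Step 4: if a later kernel update breaks the module rebuild (DKMS),
# return to step 2 and start debugging.
```

Each step was a separate point of failure, and none of it was maintained by the people who maintained the operating system itself.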

Ubuntu 26.04, scheduled for release this April, is the moment the ASH model’s second migration begins. Canonical, the company behind Ubuntu, is treating AI the same way the operating system has always treated mature technologies: by absorbing them. For the first time in the history of the platform, NVIDIA and AMD’s AI development toolkits will be available directly from the official Ubuntu software archive, maintained by Canonical’s own security team, installable in a single command.
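If the toolkits land in the official archive as described, installation collapses to something like the following. Since Ubuntu 26.04 has not yet shipped, the package names below are assumptions, not Canonical's published names; what matters is the shape of the workflow:

```shell
# Hypothetical package names -- the actual 26.04 archive names may differ.
# One command, official archive, Canonical's own security team behind it.
sudo apt install nvidia-cuda-toolkit   # NVIDIA AI development stack
sudo apt install rocm                  # AMD equivalent, same archive
```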

A hardware-optimized AI model, ready to run locally and answer queries from any application on the machine, can be installed just as simply. The operating system detects the hardware automatically, selects the right configuration, and exposes a standard interface that any developer can write code against. What previously took days of expert configuration now takes minutes of standard setup.
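The article does not name the standard interface, so the sketch below assumes one common convention: an OpenAI-compatible HTTP endpoint served on localhost. The endpoint URL, port, and model name are all assumptions for illustration, not Canonical's documented API. The sketch builds a request against that assumed interface without sending it:

```python
import json
import urllib.request

# Assumed local endpoint -- a placeholder for whatever standard interface
# the operating system actually exposes, not a documented Ubuntu API.
ENDPOINT = "http://localhost:11434/v1/chat/completions"

def build_query(prompt: str, model: str = "local-default") -> urllib.request.Request:
    """Construct (but do not send) a chat request against the assumed API."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = build_query("Summarize the system log for today.")
# Sending it would be urllib.request.urlopen(req), which requires the
# local model service to be running.
```

The point of a standard interface is exactly this: application code targets one stable contract, and the operating system worries about which hardware and which model sit behind it.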

This is the ASH migration made visible. AI is not a new application feature in Ubuntu 26.04. It is infrastructure, something the operating system manages as a first-class responsibility alongside networking, file systems, and security. And the Hardware layer signal is already appearing: Ubuntu 26.04 includes kernel-level support for dedicated AI processing chips from Intel, AMD, and Qualcomm, along with hardware-rooted security features that allow AI workloads to run in environments where even the host machine cannot read the data being processed. Canonical has committed to maintaining the entire AI stack, from silicon drivers through model runtimes, for up to fifteen years. You extend that kind of commitment to infrastructure. You do not extend it to features.

What the ASH model teaches is that technologies do not become important when they become popular. They become important when they become invisible, when they stop being something you configure and start being something you depend on without thinking about it. The consumer experience of AI has already reached that threshold for many people. What Ubuntu 26.04 represents is that threshold arriving for the builders.

The diagram still has three boxes. What has changed is the velocity at which a generation-defining technology is moving through all of them at once, and what that means for the people who make AI work is that their own time, expensive, skilled, and genuinely scarce, is finally being treated as something worth protecting.
