AI's Widening Digital Divide
Back in 2018, a research paper crossed my desk and led me, through a footnote, to a GitHub repository. For readers who don't spend their days in the code world, GitHub is where software developers share their work online, and a repository, or repo, is the folder where the code lives. Anyone with the right skills can pull it down and run it on their own machine.
The paper was about something called GPT-1. OpenAI had just posted it as a research preview, which is what you call a version released so researchers can kick the tires rather than a polished product. There was no chat interface, no clean website, no ChatGPT brand, none of that existed yet. It was raw model files sitting in a public repo, so I downloaded it, got it running on my laptop, and walked out to the quad on our campus.
It was a nice day, so I found a bench and gave the model a simple prompt: tell me a dad joke. What came back didn't land. The output rambled and the grammar drifted, and whatever punchline it was reaching for got lost somewhere on the way. Today we'd call that early learner intelligence, because it stumbled the same way a preschooler stumbles through a sentence. But I knew how fast technology moves, and I knew this was a research preview, so right there on the bench I looked up.
Students walking to class with backpacks. A professor hurrying across the grass, coffee in hand. Staff drifting back toward their offices. A parent on a tour with her teenager. I remember the thought hitting me: none of them knew what was coming, and none of them were ready for what this would become.
Every person on that quad was going to be touched by this technology, including the students, the faculty, the staff, the parent, and the delivery driver pulling up to the science building. It wasn't going to stop at the edge of campus either, because it was going to reach workplaces, homes, classrooms, hospitals, even the guy who'll change my oil next year.
I kept that little preschooler in my pocket like a good trick, a secret productivity multiplier. It was barely stringing together coherent sentences, yet it shaved hours off my day in ways my coworkers couldn't match, and that's how I ended up doing the work I do now. Once the tech went mainstream, I spent years teaching the tricks I'd picked up early, because once you touch Promethean fire, sooner or later everyone wants to hold it too. Whether people use it responsibly is another problem, worth its own article.
This piece is about the opposite problem. What happens when the fire gets pulled back, when AI, or at least the best of it, ends up walled off for the people who can afford the entrance fee?
Before ChatGPT, There Was Jasper
Rewind to before ChatGPT was a household name. OpenAI had the models, but they weren't selling to the public yet. What they were doing was selling access to GPT-3 through an API, which stands for Application Programming Interface, a fancy way of saying a pipe that lets one piece of software talk to another. If you had the money and the technical skill, you could build a product on top of OpenAI's models without ever being OpenAI yourself.
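For the curious, the "pipe" idea can be sketched in a few lines of Python. This is a generic, hypothetical text-completion endpoint, not OpenAI's actual URL or parameter names, just the shape of how a product like Jasper talks to a model it doesn't own.

```python
import json
from urllib import request

# A made-up completion endpoint; real providers differ in URL,
# auth scheme, and parameter names.
API_URL = "https://api.example.com/v1/complete"

def build_request(prompt: str, api_key: str) -> request.Request:
    """Package a prompt as an HTTP POST the model provider understands."""
    body = json.dumps({"prompt": prompt, "max_tokens": 200}).encode()
    return request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# A product built "on top" of a model is mostly this: take the user's
# text, ship it down the pipe, and render whatever comes back.
req = build_request("Write a headline for a running-shoe ad.", "sk-demo")
# response = request.urlopen(req)   # this line would actually hit the network
print(req.get_method(), req.full_url)
```

That's the whole trick: the builder never sees the model's insides, only the pipe.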
One of those products started life in January 2021 as Conversion.ai. The team renamed it Jarvis, and then, after Disney's Marvel lawyers came knocking about the trademark, it settled on Jasper. Jasper quickly became the go-to AI writing platform for marketers. The interface wasn't a chatbot, it looked more like a word processor. You typed a prompt or a few sentences into a document, clicked a button, and Jasper would keep writing for you, whether that meant a paragraph, an ad headline, or a long blog post if you gave it enough leash. It was predictive text wearing a bigger hat.
The catch was the billing model. Tokens. Tokens are how AI systems count text, where one token is roughly three quarters of a word, give or take. Every time the model reads your input, that's tokens, and every time it writes a response, that's more tokens, so the longer the conversation goes, the faster they burn. Jasper gave you a monthly allowance and then charged per token beyond it.
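The arithmetic behind that billing model is easy to sketch. In this back-of-the-envelope estimator, the three-quarters ratio comes from the rule of thumb above, while the monthly allowance and per-token rate are invented for illustration, not Jasper's actual pricing.

```python
# Rough token-billing math. The 0.75 words-per-token ratio is the rule
# of thumb above; the allowance and rate are hypothetical numbers.

def estimate_tokens(text: str) -> int:
    """Rough token count: about four tokens for every three words."""
    words = len(text.split())
    return round(words / 0.75)

def monthly_overage(tokens_used: int, allowance: int, rate_per_1k: float) -> float:
    """Dollar cost of tokens beyond the monthly allowance."""
    over = max(0, tokens_used - allowance)
    return over / 1000 * rate_per_1k

# Example: a heavy month of generation.
used = estimate_tokens("word " * 600_000)   # ~800,000 tokens
bill = monthly_overage(used, allowance=100_000, rate_per_1k=1.00)
print(f"overage: ${bill:,.2f}")             # prints "overage: $700.00"
```

The burn is invisible while you work, which is exactly why the invoice at the end of the month stings.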
I paid. And paid. And paid.
One month I opened the bill to find about seven hundred dollars in overage on top of the regular subscription, and my stomach dropped, because that was a conversation with my wife I didn't want to have. But I was hooked. Productivity is addictive when you find a clean version of it, and I was cutting corners on robot busywork, the kind of stuff we now use AI for without blinking. It felt like I'd hired a very polite middle school intern who worked at the speed of electricity.
The Leveling Stick
Late November 2022, OpenAI launched ChatGPT. The model underneath was GPT-3.5, and anyone could sign up with an email address and start chatting, for free, with no tokens to count and no invoice at the end of the month.
My secret was out, because every week more people I knew were talking about this new ChatGPT thing. For a month or so I held on to Jasper out of habit and pride, but eventually reality won, because ChatGPT was free and quickly became smarter than the tool I was paying for. I cancelled Jasper.
Claude showed up in March 2023, also free, and before long Gemini from Google followed. The models kept improving, and the pricing shifted from tokens at a premium to free at the front door with optional paid tiers above. The digital divide for AI access, which had felt like a canyon just a few years earlier, had narrowed down to a curb.
The divide that was left wasn't really about money anymore, but about effort and curiosity and whether you were willing to try. You could be broke and still have access to the best model in the world, so the only real barrier was whether you cared enough to learn how to use it.
AI Years and the Grow-Up Sprint
When people ask me how fast AI has changed, I don't use months or quarters, because those don't really capture it. I think of it as an AI year. The old joke is that one human year equals seven dog years, but with AI we don't measure age at all, we measure how many grade levels the models skip between birthdays.
In 2018, GPT-1 was an early learner. By late 2022, GPT-3.5 landed somewhere around middle school in terms of reasoning and coherence. Then the models stopped walking and started sprinting through high school, past undergraduate writing, and on into graduate student territory. In September 2024, OpenAI released o1 as the first reasoning model, meaning that instead of simply predicting the next word, it chained steps of thought before answering, which was the jump into doctoral candidate territory.
By now, in 2026, the best models are graded at doctoral level in every subject. That preschooler I met on the bench in 2018 grew up in about eight human years flat, and no child has ever grown up that fast. We've never had to sit on a bench and watch intelligence scale at that rate in front of us.
Yes, there's AI slop, plenty of it, and that's the bathwater. But you don't toss the baby with it, because that baby is learning to read, write, and argue its way through complicated problems faster than most of us can keep up.
The Rising Tide Era
For about two years, which I think of as the rising tide phase, the market actually worked in favor of the user. The best models kept landing on free tiers across most of the major providers, and twenty dollars a month bought you access to premium features, roughly what people were already paying for Netflix. Free users rode the coattails of paid users, because yesterday's premium model rolled down to the free plan once the next model took its place.
I remember reading about programs where kids in regions with no reliable teachers got AI tutors instead, and their scores came back matching students who had expensive private schooling. A phone and a data plan became a portal to something close to a personal instructor, available at three in the morning when the real teacher was asleep. For a field that has always fought the access problem, that's a big deal.
I liked all three of the big labs back then for different reasons. OpenAI had the head start and the brand, Anthropic had the best writing quality and, for my money, the cleanest code output, and Google's Gemini kept pulling surprises out of the hat. OpenAI was on the scene first and became to AI what Kleenex is to tissue, which is why people will tell you they checked ChatGPT when they actually used Gemini, because the brand is synonymous with the category now. Anthropic was always the scrappier player, and I rooted for them.
Anthropic Changes the Business Model
Then the tone started to shift. Anthropic pivoted hard toward business customers, and part of it was capacity, because they hadn't invested in raw compute infrastructure the way OpenAI did early on, so they couldn't serve the same scale of free traffic. Part of it was strategy, because their revenue from enterprise contracts and from developers paying per token through the API was always going to dwarf what they could make at twenty dollars a pop from consumer subscriptions.
At the same time they shipped two products that deserve a place in the Promethean fire hall of fame. Claude Code came first, a command line tool that lets developers delegate real coding work to Claude from inside their terminal. Then Claude Cowork, a desktop assistant that can move through files, apps, and tasks for people who don't code. Both were, in the right hands, the closest thing to hiring an unflappable junior colleague who works at four in the morning and never complains.
If you used either tool well, you got addicted. People built businesses on top of them, and friends of mine started running their entire day through Cowork. I watched one of them turn Claude Code into the spine of his consulting workflow and finish a dissertation draft in weeks instead of months.
Then Opus 4.6 arrived, a better model and an expensive one. When a model is called expensive, it means running it eats more compute, the electricity and GPU time that make the whole thing go, and it burns more tokens per task, which in plain English means more text chewed through for every job. Opus 4.6 was good enough that I had to have it, and the problem was the usage limits.
The Limits Start to Pinch
On the twenty dollar Pro plan I could blow my weekly cap in a few serious hours of work, and there were also five-hour rolling windows that could empty out inside of sixty minutes. I moved up to the hundred dollar Max plan just to keep doing what I was already doing, because I wasn't getting new features so much as buying back the productivity the tightening caps had taken away.
That shift is where the leveling stops. You can still drive a reliable truck with the lower tier models, but if you want the Ferrari, you pay Ferrari money. And the digital divide, the one that had almost closed a year or two earlier, started to open again.
The OpenClaw Episode
In late 2025, an Austrian developer named Peter Steinberger released a side project originally called Clawdbot. After some naming drama that included trademark concerns from Anthropic, it landed as OpenClaw. The idea was clever, because if you had a Claude subscription, you could route that subscription through OpenClaw and use Claude models inside an open-source agent framework. OpenClaw could run on your own hardware, save your data to local Markdown files, and keep agents churning on real work while you slept.
People loved it. The GitHub repo hit roughly two hundred forty-seven thousand stars and almost fifty thousand forks, and in February 2026, Steinberger joined OpenAI.
A few weeks later, on April 4, 2026, Anthropic cut off third-party agent frameworks like OpenClaw from Claude subscriptions. If you wanted to keep using Claude that way, you had to pay per token through the API, and for someone who had been running two OpenClaw agents at a calm pace on a hundred dollar subscription, the equivalent API bill could easily run into the thousands. Anthropic's reasoning made sense on its face, because agentic use chews through compute in ways a flat subscription was never designed to support. Still, the subsidy party was over.
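To see why a flat subscription and metered API billing live in different universes, here's a rough comparison. The per-token rates and usage figures are hypothetical, chosen only to show the order of magnitude, not Anthropic's actual prices.

```python
# Why agentic use breaks flat-rate pricing: a $100/month subscription
# versus metered billing. All rates and volumes here are hypothetical.

def api_cost(input_tokens: int, output_tokens: int,
             in_rate_per_m: float, out_rate_per_m: float) -> float:
    """Metered cost in dollars for a month of token traffic,
    given per-million-token rates for input and output."""
    return (input_tokens / 1e6 * in_rate_per_m
            + output_tokens / 1e6 * out_rate_per_m)

# Agents running around the clock re-read their context on every step,
# so input tokens dominate by a wide margin.
monthly_input = 400_000_000    # 400M tokens read (hypothetical)
monthly_output = 20_000_000    # 20M tokens written (hypothetical)

metered = api_cost(monthly_input, monthly_output,
                   in_rate_per_m=5.00, out_rate_per_m=25.00)
print(f"subscription: $100.00  vs  metered: ${metered:,.2f}")
# prints "subscription: $100.00  vs  metered: $2,500.00"
```

Under any plausible rates, the same habits that cost a flat hundred dollars a month suddenly price out in the thousands.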
The Pro Plan Scare
On April 21, 2026, the pricing page on Claude's website quietly updated. The twenty dollar Pro plan no longer listed Claude Code as an included feature, and support documents changed their title from "Using Claude Code with your Pro or Max plan" to "Using Claude Code with your Max plan." Reddit and X caught fire within a few hours.
Anthropic's Head of Growth, Amol Avasare, posted that it was a small test affecting about two percent of new prosumer signups, and that existing subscribers weren't being moved. Within hours the site reverted and Claude Code came back to the Pro plan's feature list, but the damage was done. A lot of us spent an uneasy afternoon staring at a future where the tool we had built our work around sat behind a hundred dollar gate.
The part I keep coming back to is what Avasare also said in that thread, which is that their current plans weren't built for the way people use Claude now. That's the quiet part said out loud, and it signals more changes coming, though we just don't know the shape of them yet.
OpenAI Plays a Different Hand
Over at OpenAI, the messaging has swung the other direction. Tibo Sottiaux, who leads Codex product at OpenAI, publicly committed that Codex, their answer to Claude Code, will keep running on the free and twenty dollar Plus plans, because OpenAI has the compute and efficient models to support it. A week before the Claude Code Pro test, OpenAI also launched a new hundred dollar tier that sits directly opposite Claude Max, with five times the Codex usage of the Plus plan.
This is a real departure from how things felt a year ago, when OpenAI was catching heat for testing ads on the free tier of ChatGPT. I wasn't a fan of that move at first, but life taught me a lesson in context, because servers cost money and GPUs cost money, and at some point someone has to pay for all of it.
At Super Bowl LX in February 2026, Anthropic spent millions of dollars on a series of satirical ads. The headlines read like horror movie one-word titles. Deception. Betrayal. Violation. All of them closed with the tagline that ads were coming to AI but not to Claude. Sam Altman responded in public with a pointed line of his own, framing Anthropic as a company that serves a pricey product to well-off customers, and arguing that OpenAI feels a duty to bring AI to the billions of people who can't pay for subscriptions.
Honestly, when I reread both sides, I'd rather have ads on my free AI tier than see the best models locked behind a hundred dollar monthly minimum, because ads are annoying but a closing door is worse.
The Kingmaker Question
Here's what keeps turning in my head late at night when I close the laptop.
Access to the best AI models isn't a nice-to-have anymore. It's closer to a necessity expense, sitting in the same column as your cell phone bill, your car insurance, the gas in your tank, and the electricity keeping your fridge running. Businesses are starting to run on agents and people are starting to run personal workflows on them, so if you're trying to compete for a job, a grant, a client, or even your own calendar, the gap between someone with a Ferrari model and someone with a midsize truck keeps widening.
Anthropic didn't set out to be a kingmaker. But when the subscription model tightens, and the best tools climb steadily toward the two hundred dollar plan, and third-party loopholes close, and doctoral-level intelligence only sits in reach for those who can pay steep monthly fees, there's a real risk that we drift into a world where a company like Anthropic gets to decide who holds the amplified superpowers.
Their pattern of releasing very capable models first to select partners for safety testing reinforces that feeling. I've praised that approach in the past for closing security gaps before a model goes wide, and I still think it's smart for safety, but I also notice the pattern of the most capable intelligence going first to the select few.
I don't want to conspiracy-brain this, because there's no smoky room where Anthropic picks winners and losers. Even without a smoky room, the math of access writes its own story, because when capability correlates perfectly with ability to pay, power concentrates. That's how most technologies have always moved, except AI moves faster.
What to Do About It
A few practical takeaways, in the voice I'd use with someone at the back of one of my bootcamps.
Don't assume the free ride continues, because the era where the best model in the world was free by default is probably ending, so budget for AI the way you budget for your cell phone plan.
Still, don't let price define ambition, because free and low-cost models are shockingly good. Last year's flagship is a better assistant than most of us had three years ago, and Google's NotebookLM, the free Gemini tier, and the free Claude tier are plenty for most learners.
Learn the tools while the window is open, because productivity gains compound. The friend who built a strong AI workflow back in 2023 is coasting now, and the person starting one today is still ahead of whoever waits until 2028.
And pay attention to the politics of access, because this is a story about power and economics as much as technology, and it will shape the next decade in ways we can't fully see yet.
The digital divide for AI was almost closed for a brief, bright stretch, and now it's widening again. The canyon metaphor from earlier is meant as a forecast rather than a flourish, so prepare yourself for the fallout, prepare the people you love for the fallout, and keep your hand on the fire long enough to know how it behaves.