Commentary

Are We Building AI's Future on Rented Ground?

Raffi Krikorian doesn't pull any punches: "We are renting our intelligence from a handful of companies."

That was Krikorian last week in Munich at DLD (Digital-Life-Design), Europe's leading innovation conference where tech executives, policymakers, and media leaders gather to debate digital power. And he wasn't easing into the conversation.

Krikorian is not offering hot takes from the sidelines. He is the chief technology officer at Mozilla, where he now leads work on trustworthy, open AI. Before joining Mozilla, he served as CTO at Emerson Collective, and earlier ran some of the most critical pipes in modern digital life: vice president of platform engineering at Twitter, head of Uber's Advanced Technologies Center, and chief technology officer of the Democratic National Committee. He has spent his career inside both platform power and democratic vulnerability.


Interviewing him was Nicholas Thompson, CEO of The Atlantic. Thompson has spent the better part of two decades chronicling technology's impact on culture and power, first as editor of NewYorker.com, then as editor in chief of Wired. He understands how new platforms reorder the media business, which made him the right foil for a conversation about who controls the intelligence layer we are all starting to lean on.

The problem with renting, Krikorian explained, is that landlords change the rules. And in AI, the landlords are a handful of companies that sit between us and every answer, every search, every piece of information we consume. They decide what we see, how it is ranked, and what incentives are baked into the response. We have no way to inspect their choices. We just get the output and move on.

Thompson pushed him on whether this was really different from Google. After all, we have been trusting search engines for decades.

Krikorian's answer was sharp. Google's 10 blue links, for all their flaws, gave users visibility. You could see which sources ranked first, which results were ads, and how the system was thinking. It was imperfect, but it was legible.

"We traded transparency for convenience," Krikorian said. "Ten blue links gave us a window. AI hides the room entirely."

That is not an aesthetic complaint. It is a structural shift in power. When a chatbot gives you a single polished answer, the ranking, the sources, the business deals, and the editorial choices all disappear. If it sends you to Expedia instead of a direct airline, or quotes one news outlet over another, the logic is invisible. Is it relevance? Revenue? A partnership? You will never know.

For media companies, this should be setting off alarms. We have already lived through the era when Facebook and Google inserted themselves between publishers and readers. At least those platforms still used feeds, links, and pages that left some forensic trail. Generative AI is different. It ingests the work, extracts the value, and returns a summary. The original source becomes a footnote, if it appears at all. That is not distribution. That is extraction.

And it is happening at scale. Training datasets already include years of journalism, photography, video, and research. The economic model is simple: take everything, pay for nothing, and let the courts sort it out later. If that sounds familiar, it is because we watched the same playbook with music, then film, then news. The difference now is speed. AI companies are not waiting for permission. They are building the future on other people's work and daring someone to stop them.

Krikorian did not show up in Munich to complain. He came with a plan. Not a utopian one, but a pragmatic one: build credible alternatives at every layer of the stack.

"We need credible alternatives," he said. "Not because they win the market, but because they change it."

His analogy was Signal. Most people still use iMessage or WhatsApp, but Signal exists, and its existence has forced the rest of the industry to take encryption seriously. It is leverage. It does not need to dominate to matter.

Krikorian laid out four pressure points where that leverage could be built: developer tools, data, models, and hardware.

- Developer tools that let builders switch providers instead of hardwiring everything to one API. Right now, using OpenAI or Anthropic is so easy that it becomes the default. That convenience turns into dependency. Mozilla is working on abstraction layers that let developers swap models the same way they swap cloud providers. If you can move, the landlord has less power.

- Data that is licensed, traceable, and fair. If AI companies want to train on journalism, music, or film, they should pay for it. Mozilla just launched the Mozilla Data Collective, a two-sided marketplace for licensable datasets. It is an attempt to create an actual economy around training data instead of a free-for-all. The New York Times lawsuit and others like it are forcing the question. Krikorian thinks the law will eventually catch up. The marketplace is one way to get ahead of it.

- Models that are smaller, cheaper, and purpose-built. Not everything needs a giant general-purpose model. Specialized models can run locally, cost less to train, and give users control over their data. Pinterest saved more than $10 million in inference costs by switching to open models. That is not ideology. That is economics.

- Hardware that is not locked to a single vendor. Right now, Nvidia owns the infrastructure layer. Krikorian thinks abstraction layers can create room for other chip makers and cloud providers. It does not require overthrowing Nvidia. It just requires making alternatives viable.
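The abstraction-layer idea behind the first of those pressure points can be sketched in a few lines: application code talks to a minimal interface rather than a vendor SDK, so a hosted model and a local one become interchangeable. Everything below is illustrative, not Mozilla's actual tooling; the class and method names are invented for the sketch.

```python
from dataclasses import dataclass
from typing import Protocol


class ChatProvider(Protocol):
    """Minimal provider interface: one prompt in, one answer out."""
    def complete(self, prompt: str) -> str: ...


@dataclass
class HostedProvider:
    """Stand-in for a commercial model behind a hosted API."""
    name: str

    def complete(self, prompt: str) -> str:
        return f"[{self.name}] response to: {prompt}"


@dataclass
class LocalProvider:
    """Stand-in for a small model running on the user's own hardware."""
    model_path: str

    def complete(self, prompt: str) -> str:
        return f"[local:{self.model_path}] response to: {prompt}"


def answer(provider: ChatProvider, prompt: str) -> str:
    # Application code depends only on the interface, never on a
    # vendor SDK, so swapping providers is a one-line change here.
    return provider.complete(prompt)
```

The point of the pattern is the single seam: if the landlord raises prices or changes terms, you pass a different `provider` object and nothing else in the application moves.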

None of this is certain. But the alternative is worse. If AI solidifies as a rented, closed system controlled by five companies, the rest of us will spend the next decade negotiating terms we did not set. Media companies will watch their archives get turned into training data with no economic return. Advertisers will lose visibility into how targeting and attribution actually work. Governments will find themselves dependent on private companies for the infrastructure that shapes public knowledge.

Europe understands this better than Silicon Valley wants to admit. DLD is one of the few places where regulators, executives, and technologists actually sit in the same room and talk about power instead of features. Europe still has the political will to write rules around privacy, portability, and competition. As Krikorian put it, political will becomes policy, and policy becomes engineering requirements. That is not theory. It is how we ended up with web standards instead of a Microsoft monoculture 20 years ago.

Krikorian's argument in Munich was not about rejecting AI. It was about insisting that we build it in a way that does not make us tenants in our own digital future. Renting is convenient until the landlord raises the rent, changes the locks, or decides your lease is up.

The defaults are being set right now. Once they harden, they will be very difficult to change. If we want to own the intelligence layer instead of renting it, the work starts today.

You can watch the full DLD conversation here: https://www.youtube.com/watch?v=xCckoDNxc8c
