In my last column, I explored the idea that software as a product may be giving way to software as something generated on demand. As generative systems become capable of building tools tailored to individual businesses and workflows, the economic foundations of enterprise software begin to shift.
But beneath that shift lies another question, one that may prove more consequential than the fate of any individual software vendor. If intelligence itself is becoming infrastructure, how will it be paid for?
Today, large language models feel accessible and even abundant. Most platforms offer free tiers, modestly priced subscriptions and enterprise upgrades. To the casual observer, this resembles the familiar pricing ladders of cloud software and telecommunications. Yet these systems are extraordinarily expensive to build and operate. Data centers require billions in capital investment. Compute and energy costs are ongoing and significant. Investors backing these firms expect returns commensurate with that scale.
This week, we saw early signals of how those returns may begin to materialize. OpenAI launched a pilot program introducing advertisements for lower-tier users while maintaining ad-free experiences for higher-paying subscribers. Around the same time, a prominent researcher left the company, warning publicly that advertising inside conversational AI risks repeating the mistakes of social media, where user trust gradually became a monetized asset.
These developments suggest that the funding model of AI is no longer an abstract debate. It is actively being shaped.
The pattern we recognize
The trajectory feels familiar. Over the past two decades, we have watched digital platforms follow a predictable arc. They begin by offering free access to accelerate adoption and build network effects. As market share consolidates and switching costs rise, monetization becomes imperative. Advertising, often framed initially as limited or carefully contained, expands over time.
Search engines began as tools to organize information. Social networks promised connection. Streaming services once assured viewers that subscription revenue would free them from commercial interruption. In each case, economic gravity eventually pulled these platforms toward advertising as a dominant revenue stream.
Even the cultural signals are revealing. During the recent Super Bowl, Anthropic purchased multimillion-dollar advertising slots to promote its AI products and to emphasize that it would not use advertising within its systems. The irony of buying ads to promise an ad-free future was difficult to miss, and it suggests that advertising is already on Anthropic's mind.
I’m doubtful Anthropic’s commitment will prove permanent. It is good PR for a news cycle or two, but I’d be amazed if the promise lasts over the long haul. Consumers have been duped by such promises time and time again.
What makes this moment different is not the reappearance of advertising. It is the nature of the information now at stake. Here’s what I mean.
From behavior to intention
Search engines learned what we clicked. Social platforms observed what we liked and shared. Those signals were powerful, but they were still behavioral traces, or external manifestations of interest.
Large language models operate at a different layer. They do not merely observe our browsing patterns; they participate in our thinking. Users confide in them about health concerns, career anxieties, financial decisions, personal relationships and half-formed ideas. In “voice mode” (which I use all the time), the interaction becomes even more intimate. The system receives not only typed queries but tone, cadence and spontaneous reflection.
In economic terms, this is not incrementally better data. The shift is orders of magnitude more significant. We have moved from insight into behavior to insight into intention.
That distinction matters because intent carries far more predictive and persuasive power than clicks or page views. A conversational AI that understands not only what you bought last month but what you are uncertain about today occupies a profoundly asymmetrical position. If that asymmetry is leveraged for advertising, the ethical landscape shifts.
We prohibit insider trading because it grants certain market participants privileged knowledge unavailable to others, distorting fairness and undermining trust. A trader with inside information can consistently outmaneuver those operating without it. The system ceases to function as a level playing field.
When advertisers gain access, directly or indirectly, to the “inside” of our cognitive lives, a similar imbalance emerges. They are no longer inferring preferences from surface behavior. They are targeting based on articulated fears, vulnerabilities, and aspirations. The consumer, unaware of the depth of insight shaping the message, is placed at a structural disadvantage.
This imbalance does not affect only individuals. It also reshapes competition. Companies with the largest marketing budgets will be best positioned to purchase access to these sophisticated targeting capabilities. Smaller retailers, local businesses, and emerging brands will struggle to compete in an environment where influence is calibrated by AI systems trained on intimate user data. The result could be not only consumer manipulation but further market concentration, reinforcing the dominance of already powerful firms.
In that sense, advertising within conversational AI is not merely another monetization tactic. It crosses an ethical boundary. It transforms a tool designed to assist human reasoning into a channel through which asymmetrical power can be exercised at scale.
The risk of a two-tier cognitive system
The pilot rollout of ads to lower-tier users while preserving ad-free environments for premium subscribers hints at a broader structural risk. If privacy and neutrality become features reserved for those who can afford higher subscription tiers, we may find ourselves constructing a two-tier system of intelligence.
Wealthier individuals and enterprises would operate within protected, minimally monetized environments. Lower-income users would access systems funded by advertising, where conversational outputs could be subtly shaped by commercial incentives.
When AI tools increasingly mediate access to education, healthcare information, legal guidance and employment opportunities, such stratification takes on societal significance. The question is no longer whether an advertisement interrupts entertainment. It is whether economic status determines the neutrality of the cognitive tools available to you.
Funding intelligence without selling intention
None of this dismisses the underlying economic challenge. Advanced AI systems are expensive. Subscription revenue alone may not cover the full costs of continuous model improvement and infrastructure expansion, especially in a competitive market.
The question, however, is not whether we will pay. It is how we choose to distribute the burden and align incentives.
One possibility is deliberate cross-subsidization. Enterprise clients and high-volume users could fund broad public access, allowing individuals to use core AI capabilities without exposure to targeted advertising. Such models already exist in other industries, where higher-margin segments support universal service. Utilities, for example, often rely on commercial and industrial customers to stabilize and offset residential rates, ensuring that essential services remain broadly accessible. Airlines operate on a similar principle: premium cabin revenue makes lower economy fares viable for millions of travelers.
We have long accepted that essential infrastructure should not depend on extracting disproportionate value from those least able to afford it. When a service becomes foundational to economic participation, education, or civic life, fairness demands that its costs be distributed according to capacity to pay, not according to who is most vulnerable to monetization.
Another approach would treat user data as a governed asset rather than a byproduct. Data cooperatives or data trust structures could grant individuals ownership and control over how their conversational data is used. If data is economically valuable, then participation should be explicit and compensated, not implicit and opaque.
More fundamentally, we may need to consider whether publicly funded AI infrastructure has a role to play. When technologies become as central to participation in modern life as roads, electricity or public libraries, we should think of access to them as a basic human right. The rates of private utilities are still set by public commissions for this very reason. Basic access to the foundations of society must be paramount.
Consider telecommunications. Access to communication technology should be seen as a basic human right. We got this right with landline phones and with the early internet, but we failed with broadband. By law, phone companies had to run copper wire to every new home or business without “penalty” pricing for location, no matter how rural or hard to reach. Those costs were subsidized within the broader system, for the greater societal good.
In the early days of the World Wide Web, policymakers chose to maintain open standards rather than allowing proprietary protocols to dominate. The internet’s core architecture remained neutral and interoperable, enabling broad participation and innovation. Later, debates around net neutrality sought to prevent network providers from discriminating among types of traffic based on commercial interest, and those efforts were largely successful. Nearly everyone must use the internet today, as it becomes ever more central to education, healthcare, banking and entertainment.
But we failed regarding access to the internet in the age of broadband. As fiber optics and high-speed cellular networks replaced copper-wire telecommunications, private industry won the day, securing exclusive licenses to wireless spectrum under loose population-coverage requirements. As a result, less profitable areas, primarily rural communities and low-income urban neighborhoods, still lack adequate access to broadband.
We may now face a comparable decision regarding AI. If these systems become mediators of knowledge, opportunity and decision-making, protecting their neutrality may be as important as protecting the neutrality of the networks that carry our data. An “AI neutrality” principle, one that limits direct monetization of intimate user intent and guarantees baseline protections regardless of subscription tier, could serve as the modern extension of those earlier commitments.
A new paradigm requires a new mindset
The software world is changing, and business models are changing with it. As generative systems blur the line between tool and collaborator, the economic structures surrounding them will shape not only corporate earnings but human agency.
We have, in past technological transitions, allowed monetization logic to outpace ethical reflection. Only later did we confront the unintended consequences of surveillance advertising and algorithmic influence. With conversational AI, the stakes are higher because the layer being monetized lies closer to human cognition itself.
We are still early enough to decide differently. If access to intelligence is becoming as essential as access to the internet once was, then it deserves a framework that protects users from asymmetric manipulation and preserves fair competition in the marketplace.
The question before us is not simply how AI companies will generate revenue. It is whether the most intimate digital systems we have ever created will operate as neutral infrastructure or as instruments of commercial persuasion. In answering that question, we are not merely financing technology. We are defining the ethical boundaries of the next economic era.