The Prophets of Silicon Valley

The chief executives of the world’s most powerful artificial intelligence companies have been saying remarkable things about the future of work.

Dario Amodei of Anthropic has warned, in multiple interviews, that AI could eliminate half of all entry-level white-collar jobs within the next few years.

Sam Altman of OpenAI has spoken of AI agents “joining the workforce” and of intelligence eventually becoming “too cheap to meter”, a utility as abundant as electricity, capable of rewriting the rules of the economy.

Jensen Huang of Nvidia, perhaps the most ebullient of the three, has described a future workforce of “humans and digital humans,” predicted that companies will hire and onboard AI agents just as they do people today, and suggested that the current infrastructure buildout is the largest in human history.

These are not idle remarks. They come from people who run companies at the very centre of the AI industry, who speak at Davos and on 60 Minutes and to US congressional committees, and whose words move markets. They deserve to be taken seriously, and that is precisely why they deserve to be examined seriously.

I want to offer three arguments against the prevailing Silicon Valley consensus on AI and jobs. The first is empirical: the evidence does not support the predictions. The second is technical: the predictions rest on a misunderstanding of what large language models actually are. The third is economic: even setting aside the first two objections, the predictions ignore the most fundamental question of all: at what price?

What the Evidence Shows

The most rigorous attempt to date to measure AI’s actual effects on firms and workers was published in early 2026 by a team including Nicholas Bloom and Steven J. Davis, drawing on surveys of thousands of senior executives across four major economies. The overwhelming majority reported no measurable effect of AI on either employment or productivity over the preceding three years. The effects, where reported at all, were vanishingly small.

This is not an isolated finding. A Yale Budget Lab study, reviewing Bureau of Labor Statistics data through late 2025, found no significant differences in employment outcomes between occupations with high and low AI exposure.

Sam Altman himself acknowledged at a recent conference that companies are blaming AI for layoffs “whether or not it really is about AI”, an admission that ought to give pause to anyone constructing a narrative of AI-driven displacement.

Robert Solow observed in 1987 that you could see the computer age everywhere except in the productivity statistics. The paradox named after him has not gone away. It has simply acquired new occupants.

This should not be surprising to anyone who studies the history of general-purpose technologies. The personal computer arrived in the early 1980s, but the productivity gains it enabled only became measurable in the 1990s. The pattern is consistent across technological revolutions: the gap between a technology’s demonstrated capability and its measurable economic impact is large, and it is measured in decades rather than years.

Altman predicted in early 2025 that AI agents would “join the workforce” and materially change company output within the year. They did not. The prediction has now been quietly extended to 2026, then perhaps 2027.

What Language Models Actually Are

There is a deeper problem with the displacement narrative, which concerns the nature of the technology itself. Large language models are genuinely impressive. But the source of their impressiveness is also the source of their limitation, and that limitation is structural, not a matter of scale.

At their core, these systems are prediction machines. They are built to estimate, given a sequence of words, what word is likely to come next, a process trained on an enormous corpus of human-generated text. The outputs can be fluent, coherent, and occasionally brilliant.

But the mechanism is statistical pattern completion, not reasoning. When a language model produces an analysis of a legal question or a financial situation, it does so not because it understands the question, but because it has encountered vast quantities of text in which similar questions were discussed in similar ways.
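The point can be made concrete with a deliberately crude sketch. The following toy bigram model is purely illustrative: production LLMs use neural networks over vast corpora, not lookup tables, and the miniature corpus here is invented for the example. But the underlying logic, predicting the next word from observed frequencies, is the same family of idea.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": predict the next word by counting which
# word most often followed the current one in a (tiny, invented) corpus.
# This is an illustration of statistical pattern completion, not a
# description of how production LLMs are actually implemented.
corpus = (
    "the court ruled in favour of the plaintiff . "
    "the court ruled against the defendant . "
    "the analyst ruled out a recession ."
).split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    """Return the word most frequently observed after `word`."""
    return counts[word].most_common(1)[0][0]

print(predict("court"))  # -> "ruled": pure pattern completion
```

The model "knows" that courts rule only in the sense that those words co-occurred in its training data. Nothing in the mechanism understands law, and scaling up the table does not change that in kind.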

The financial industry has been doing something broadly analogous for decades, using quantitative models to find patterns in data and generate predictions. Nobody called those systems intelligent, and nobody suggested they would replace lawyers and analysts wholesale. The AI revolution, in important part, is the democratisation and broadening of such methods, not a qualitative leap into something categorically different.

This matters enormously for the displacement question. The tasks at which these systems genuinely excel are those resembling sophisticated pattern completion: drafting standard documents, summarising lengthy texts, generating code from specifications, producing first drafts from structured inputs.

The tasks at which they remain genuinely poor are those requiring abstract reasoning, causal inference, judgement under genuine uncertainty, and the kind of theoretical model-building that underlies the higher-value components of professional work.

Apple’s research division published a paper in 2025 testing frontier models on logical puzzles requiring genuine reasoning, and found that performance collapsed at high complexity even when the correct method was provided explicitly. The METR research group found, in a randomised controlled trial, that experienced software developers were measurably slower when using AI assistance than without it, and this in the very domain where AI is supposed to perform best.

Jensen Huang is fond of arguing that AI will enhance rather than replace professionals: the radiologist, he says, will use AI to handle routine work and focus on judgement and care, making hospitals more productive and creating more jobs. This is a reasonable description of what AI does well. But it is precisely not the scenario of mass white-collar displacement. Enhancement and elimination are different economic mechanisms, and they have different implications.

The Price Nobody Mentions

The third objection is the one that I find most decisive as an economist, and the one that receives almost no attention in the public debate.

The current price of AI services does not reflect the true cost of producing them. The leading AI companies are, by their own internal projections, running significant losses and do not expect to reach positive cash flow until the late 2020s at the earliest, and those timelines have already been revised once.

They are sustained by a continuous flow of investor capital at valuations that require extraordinary future growth to justify. The largest technology companies are collectively committing hundreds of billions of dollars annually to AI infrastructure, a scale of capital deployment without precedent in the history of the technology industry.

What this means, economically, is that the price businesses are paying today for AI services is heavily subsidised, not by governments, but by investors who are betting on a future in which these services become vastly more valuable.

The analogy I find useful is this: imagine that every morning a helicopter with a pilot arrived at your door to take you to work, entirely free of charge. Your productivity would rise. You would reorganise your working life around it. You might even let go of some arrangements that no longer seemed necessary. But if you had to pay the actual market cost of a private helicopter and pilot, the calculation would look entirely different. Many of the apparent gains would evaporate.

This is the position businesses are in today with AI. They are restructuring around a technology priced far below its true cost.

When prices normalise, as they must if the companies providing these services are ever to become profitable, many applications that currently appear economically attractive will prove not to be. The entry-level professional who seemed redundant next to a free AI agent may look considerably less redundant next to a properly priced AI agent.
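A back-of-the-envelope calculation shows how the sign of the decision can flip. Every number below is invented purely for illustration; none comes from actual salary data or AI vendor pricing.

```python
# Illustrative arithmetic only: all figures are hypothetical assumptions,
# not real salaries, real AI prices, or real cost-recovery estimates.
junior_analyst_cost = 70_000   # assumed annual cost of a junior hire
ai_price_subsidised = 5_000    # assumed investor-subsidised annual AI price
cost_recovery_multiple = 20    # assumed markup needed for the vendor to break even

saving_today = junior_analyst_cost - ai_price_subsidised
ai_price_normalised = ai_price_subsidised * cost_recovery_multiple
saving_later = junior_analyst_cost - ai_price_normalised

print(f"apparent saving at subsidised price:   {saving_today:+}")
print(f"saving at a cost-covering price:       {saving_later:+}")
```

At the subsidised price the substitution looks obviously profitable; at a price that actually covers the vendor's costs, the same substitution destroys value. Firms restructuring today are making the first calculation while the second is the one that will eventually bind.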

There is also a resource constraint that the displacement narrative tends to ignore. Running AI at the scale Amodei and Altman envision requires enormous quantities of electricity, specialised chips, water for cooling, and capital for infrastructure.

The International Energy Agency projects that global data centre electricity consumption will roughly double by 2030, reaching the equivalent of Japan’s entire annual consumption, and that projection does not assume anywhere near the scale of white-collar displacement being predicted.

If one took seriously the claim that half of all entry-level white-collar work would move to AI within five years, the implied demand on physical infrastructure would be orders of magnitude larger than anything currently being planned. AI does not abolish scarcity. It relocates it.

The Incentive Structure

It would be unfair not to note that Amodei, Altman, and Huang are not disinterested parties in this debate. They are the chief executives of companies whose valuations, fundraising capacity, and competitive positioning all depend on a compelling story about AI’s transformative economic impact.

Sustaining investor confidence in the capital deployed requires a narrative of imminent disruption. Amodei himself has acknowledged that he is deeply uncomfortable with a small group of people making decisions about technology that will affect everyone, yet he continues to make exactly those kinds of claims about economic transformation, from exactly that position.

Altman was at least admirably honest at a recent conference, saying of the current moment: “If there was an easy consensus answer, we’d have done it by now, so I don’t think anyone knows what to do.” That is a notably different register from predicting that AI agents will join the workforce within the year, or that intelligence will become too cheap to meter. The gap between private uncertainty and public prophecy deserves attention.

What Will Actually Happen

None of this is an argument that AI will leave the economy unchanged. It will not. These are genuinely useful tools, and their usefulness will grow as the technology develops and as institutions learn to integrate it into their workflows in ways that are reliable, safe, and cost-effective.

The appropriate historical frame is not the industrial revolution but something more modest and more instructive: the spreadsheet. The spreadsheet did not eliminate finance departments. It changed what finance departments did, making certain kinds of analysis cheaper and faster while freeing human attention for the work that actually required judgement. Demand for financial analysis expanded to fill the additional capacity. Employment in finance did not collapse.

The Jevons paradox, named for the nineteenth-century economist who observed that more efficient steam engines led to more coal consumption rather than less, is worth keeping in mind here.

If AI genuinely makes junior professionals more productive, the likely consequence in many sectors is not that firms need fewer of them, but that demand for their services expands. Lower effective cost stimulates demand. The structure of employment changes; the aggregate volume does not necessarily decline.
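The Jevons logic can be stated as a one-line demand curve. With constant-elasticity demand, a fall in the effective price of professional analysis raises the quantity demanded more than proportionally whenever the elasticity exceeds one, so total spending on the service rises rather than falls. The elasticity value below is assumed purely for illustration, not estimated from any data.

```python
# Stylised constant-elasticity demand: Q = scale * P^(-e).
# elasticity=1.5 is an assumption chosen only to illustrate the mechanism.
def quantity(price, elasticity=1.5, scale=100.0):
    return scale * price ** (-elasticity)

# AI progressively cuts the effective price of analysis.
for p in (1.0, 0.5, 0.25):
    q = quantity(p)
    print(f"price {p:.2f}: quantity {q:7.1f}, total spend {p * q:7.1f}")
```

Under these assumed parameters, halving the price more than doubles the quantity demanded, so total spending on analysis, and with it the derived demand for the people who produce it, goes up as the price goes down. That is the spreadsheet story, restated in one equation.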

I should be transparent about my own position. I use these tools every day, and they have made me more productive in concrete and specific ways. Writing this piece itself involved Claude, which is of course Anthropic’s product.

What I fear most is not mass unemployment. It is a cycle of inflated expectations followed by disillusionment.

After the dot-com crash, many businesses retreated from internet investment at precisely the moment when the genuine long-run benefits were beginning to materialise. The internet did ultimately transform banking, retail, and media, but it did so over fifteen to twenty years, not in the two-to-five-year windows being promised in 1999.

AI will likely follow the same arc. The worst outcome would be a premature rush driven by subsidised pricing and exaggerated predictions, followed by retrenchment, delaying the genuine benefits by a decade.

The technology is real. The potential is genuine. But Solow’s paradox did not disappear because the predictions got louder. And the entry-level lawyer, the junior consultant, and the graduate analyst may prove rather more resilient than the prophets of Silicon Valley believe, not because AI is unimpressive, but because impressive technology and economically viable technology are not the same thing, and because scarcity, as economics has always insisted, cannot be wished away. It can only be moved.

But Anthropic’s Claude certainly is an amazing product, because it helped me write a lot of this post. Then again, Claude was trained on, among other things, this very blog. Maybe I should ask for a discount on my Claude subscription. I should also confess that writing this article required a fair amount of time correcting Claude’s hallucinated figures and citations that simply did not exist in reality. Perhaps Amodei hallucinates too.


Lars Christensen is an economist, Head of Analysis and co-founder of PAICE, and external lecturer at Copenhagen Business School’s Department of Digitalization. He is the originator of Market Monetarism and writes The Market Monetarist blog.

Contact: LC@paice.io
