The Prophets of Silicon Valley

The chief executives of the world’s most powerful artificial intelligence companies have been saying remarkable things about the future of work.

Dario Amodei of Anthropic has warned, in multiple interviews, that AI could eliminate half of all entry-level white-collar jobs within the next few years.

Sam Altman of OpenAI has spoken of AI agents “joining the workforce” and of intelligence eventually becoming “too cheap to meter”, a utility as abundant as electricity, capable of rewriting the rules of the economy.

Jensen Huang of Nvidia, perhaps the most ebullient of the three, has described a future workforce of “humans and digital humans,” predicted that companies will hire and onboard AI agents just as they do people today, and suggested that the current infrastructure buildout is the largest in human history.

These are not idle remarks. They come from people who run companies at the very centre of the AI industry, who speak at Davos and on 60 Minutes and to US congressional committees, and whose words move markets. They deserve to be taken seriously, and that is precisely why they deserve to be examined seriously.

I want to offer three arguments against the prevailing Silicon Valley consensus on AI and jobs. The first is empirical: the evidence does not support the predictions. The second is technical: the predictions rest on a misunderstanding of what large language models actually are. The third is economic: even setting aside the first two objections, the predictions ignore the most fundamental question of all: at what price?

What the Evidence Shows

The most rigorous attempt to date to measure AI’s actual effects on firms and workers was published in early 2026 by a team including Nicholas Bloom and Steven J. Davis, drawing on surveys of thousands of senior executives across four major economies. The overwhelming majority reported no measurable effect of AI on either employment or productivity over the preceding three years. The effects, where reported at all, were vanishingly small.

This is not an isolated finding. A Yale Budget Lab study, reviewing Bureau of Labor Statistics data through late 2025, found no significant differences in employment outcomes between occupations with high and low AI exposure.

Sam Altman himself acknowledged at a recent conference that companies are blaming AI for layoffs “whether or not it really is about AI”, an admission that ought to give pause to anyone constructing a narrative of AI-driven displacement.

Robert Solow observed in 1987 that you could see the computer age everywhere except in the productivity statistics. The paradox named after him has not gone away. It has simply acquired new occupants.

This should not be surprising to anyone who studies the history of general-purpose technologies. The personal computer arrived in the early 1980s, but the productivity gains it enabled only became measurable in the 1990s. The pattern is consistent across technological revolutions: the gap between a technology’s demonstrated capability and its measurable economic impact is large, and it is measured in decades rather than years.

Altman predicted in early 2025 that AI agents would “join the workforce” and materially change company output within the year. They did not. The prediction has now been quietly extended to 2026, then perhaps 2027.

What Language Models Actually Are

There is a deeper problem with the displacement narrative, which concerns the nature of the technology itself. Large language models are genuinely impressive. But the source of their impressiveness is also the source of their limitation, and that limitation is structural, not a matter of scale.

At their core, these systems are prediction machines. They are built to estimate, given a sequence of words, which word is likely to come next, a skill acquired from an enormous corpus of human-generated text. The outputs can be fluent, coherent, and occasionally brilliant.

But the mechanism is statistical pattern completion, not reasoning. When a language model produces an analysis of a legal question or a financial situation, it does so not because it understands the question, but because it has encountered vast quantities of text in which similar questions were discussed in similar ways.
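
To make the mechanism concrete, here is a deliberately crude sketch in Python: a bigram counter that is many orders of magnitude simpler than a transformer, but the same in kind. It completes word sequences from co-occurrence statistics, with no model of what the words mean. The toy corpus and the word choices are invented for illustration.

```python
from collections import Counter, defaultdict

# A toy bigram "language model": prediction from co-occurrence counts,
# nothing more. The corpus is invented for illustration.
corpus = (
    "the court ruled in favour of the plaintiff "
    "the court ruled against the defendant "
    "the analyst ruled out a rate cut"
).split()

# Count which word follows which in the training text.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in training."""
    followers = transitions[word]
    return followers.most_common(1)[0][0] if followers else "<unknown>"

print(predict_next("court"))  # 'ruled': pattern completion, not legal reasoning
print(predict_next("ruled"))  # the most frequent continuation, whatever it was
```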

The financial industry has been doing something broadly analogous for decades, using quantitative models to find patterns in data and generate predictions. Nobody called those systems intelligent, and nobody suggested they would replace lawyers and analysts wholesale. The AI revolution, in important part, is the democratisation and broadening of such methods, not a qualitative leap into something categorically different.

This matters enormously for the displacement question. The tasks at which these systems genuinely excel are those resembling sophisticated pattern completion: drafting standard documents, summarising lengthy texts, generating code from specifications, producing first drafts from structured inputs.

The tasks at which they remain genuinely poor are those requiring abstract reasoning, causal inference, judgement under genuine uncertainty, and the kind of theoretical model-building that underlies the higher-value components of professional work.

Apple’s research division published a paper in 2025 testing frontier models on logical puzzles requiring genuine reasoning, and found that performance collapsed at high complexity even when the correct method was provided explicitly. The METR research group found, in a randomised controlled trial, that experienced software developers were measurably slower when using AI assistance than without it, and this in the domain where AI is supposed to perform best.

Jensen Huang is fond of arguing that AI will enhance rather than replace professionals: the radiologist, he says, will use AI to handle routine work and focus on judgement and care, making hospitals more productive and creating more jobs. This is a reasonable description of what AI does well. But it is precisely not the scenario of mass white-collar displacement. Enhancement and elimination are different economic mechanisms, and they have different implications.

The Price Nobody Mentions

The third objection is the one that I find most decisive as an economist, and the one that receives almost no attention in the public debate.

The current price of AI services does not reflect the true cost of producing them. The leading AI companies are, by their own internal projections, running significant losses and do not expect to reach positive cash flow until the late 2020s at the earliest, and those timelines have already been revised once.

They are sustained by a continuous flow of investor capital at valuations that require extraordinary future growth to justify. The largest technology companies are collectively committing hundreds of billions of dollars annually to AI infrastructure, a scale of capital deployment without precedent in the history of the technology industry.

What this means, economically, is that the price businesses are paying today for AI services is heavily subsidised, not by governments, but by investors who are betting on a future in which these services become vastly more valuable.

The analogy I find useful is this: imagine that every morning a helicopter with a pilot arrived at your door to take you to work, entirely free of charge. Your productivity would rise. You would reorganise your working life around it. You might even let go of some arrangements that no longer seemed necessary. But if you had to pay the actual market cost of a private helicopter and pilot, the calculation would look entirely different. Many of the apparent gains would evaporate.

This is the position businesses are in today with AI. They are restructuring around a technology priced far below its true cost.

When prices normalise, as they must if the companies providing these services are ever to become profitable, many applications that currently appear economically attractive will prove not to be. The entry-level professional who seemed redundant next to a free AI agent may look considerably less redundant next to a properly priced AI agent.
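
The arithmetic behind that sentence is worth making explicit. A stylised sketch, in which every figure is a hypothetical chosen for illustration rather than an estimate of actual salaries or AI pricing:

```python
# A stylised break-even calculation; every number is a hypothetical.
junior_salary = 60_000   # assumed all-in annual cost of an entry-level hire
ai_price_today = 3_000   # assumed annual cost of an AI agent at subsidised prices

breakeven = junior_salary / ai_price_today
print(f"Replacement pays off until AI prices rise roughly {breakeven:.0f}x")

for multiple in (1, 5, 10, 20, 30):
    price = ai_price_today * multiple
    verdict = "replace" if price < junior_salary else "keep the junior"
    print(f"price x{multiple:>2}: ${price:>7,}/yr -> {verdict}")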

There is also a resource constraint that the displacement narrative tends to ignore. Running AI at the scale Amodei and Altman envision requires enormous quantities of electricity, specialised chips, water for cooling, and capital for infrastructure.

The International Energy Agency projects that global data centre electricity consumption will roughly double by 2030, reaching the equivalent of Japan’s entire annual consumption, and that projection does not assume anywhere near the scale of white-collar displacement being predicted.

If one took seriously the claim that half of all entry-level white-collar work would move to AI within five years, the implied demand on physical infrastructure would be orders of magnitude larger than anything currently being planned. AI does not abolish scarcity. It relocates it.

The Incentive Structure

It would be unfair not to note that Amodei, Altman, and Huang are not disinterested parties in this debate. They are the chief executives of companies whose valuations, fundraising capacity, and competitive positioning all depend on a compelling story about AI’s transformative economic impact.

Sustaining investor confidence in the capital deployed requires a narrative of imminent disruption. Amodei himself has acknowledged that he is deeply uncomfortable with a small group of people making decisions about technology that will affect everyone, yet he continues to make exactly those kinds of claims about economic transformation, from exactly that position.

Altman was at least admirably honest at a recent conference, saying of the current moment: “If there was an easy consensus answer, we’d have done it by now, so I don’t think anyone knows what to do.” That is a notably different register from predicting that AI agents will join the workforce within the year, or that intelligence will become too cheap to meter. The gap between private uncertainty and public prophecy deserves attention.

What Will Actually Happen

None of this is an argument that AI will leave the economy unchanged. It will not. These are genuinely useful tools, and their usefulness will grow as the technology develops and as institutions learn to integrate it into their workflows in ways that are reliable, safe, and cost-effective.

The appropriate historical frame is not the industrial revolution but something more modest and more instructive: the spreadsheet. The spreadsheet did not eliminate finance departments. It changed what finance departments did, making certain kinds of analysis cheaper and faster while freeing human attention for the work that actually required judgement. Demand for financial analysis expanded to fill the additional capacity. Employment in finance did not collapse.

The Jevons paradox, named for the nineteenth-century economist who observed that more efficient steam engines led to more coal consumption rather than less, is worth keeping in mind here.

If AI genuinely makes junior professionals more productive, the likely consequence in many sectors is not that firms need fewer of them, but that demand for their services expands. Lower effective cost stimulates demand. The structure of employment changes; the aggregate volume does not necessarily decline.
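
The logic can be reduced to a single elasticity condition: if AI multiplies a junior professional's output by some gain, employment in that role falls only when the price elasticity of demand for the output is below one. A toy calculation, with illustrative numbers only:

```python
# Toy Jevons arithmetic; the numbers are illustrative assumptions.
# AI multiplies a junior's output by `gain`, cutting the effective price
# of that output. Demand follows Q = A * P**(-eps).

def relative_employment(gain: float, eps: float) -> float:
    """Employment after the productivity shock, relative to before."""
    price_ratio = 1 / gain                   # effective price of output falls
    quantity_ratio = price_ratio ** (-eps)   # demand responds
    return quantity_ratio / gain             # heads needed to produce that quantity

for eps in (0.5, 1.0, 1.5, 2.0):
    print(f"demand elasticity {eps}: employment x{relative_employment(2.0, eps):.2f}")
# eps < 1: fewer juniors; eps = 1: unchanged; eps > 1: more juniors than before.
```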

I should be transparent about my own position. I use these tools every day, and they have made me more productive in concrete and specific ways. Writing this piece itself involved Claude, which is of course Anthropic’s product.

What I fear most is not mass unemployment. It is a cycle of inflated expectations followed by disillusionment.

After the dot-com crash, many businesses retreated from internet investment at precisely the moment when the genuine long-run benefits were beginning to materialise. The internet did ultimately transform banking, retail, and media, but it did so over fifteen to twenty years, not in the two-to-five-year windows being promised in 1999.

AI will likely follow the same arc. The worst outcome would be a premature rush driven by subsidised pricing and exaggerated predictions, followed by retrenchment, delaying the genuine benefits by a decade.

The technology is real. The potential is genuine. But Solow’s paradox did not disappear because the predictions got louder. And the entry-level lawyer, the junior consultant, and the graduate analyst may prove rather more resilient than the prophets of Silicon Valley believe, not because AI is unimpressive, but because impressive technology and economically viable technology are not the same thing, and because scarcity, as economics has always insisted, cannot be wished away. It can only be moved.

But Anthropic’s Claude certainly is an amazing product, because it helped me write a lot of this post. Then again, Claude was trained on, among other things, this very blog. Maybe I should ask for a discount on my Claude subscription. I should also confess that writing this article required a fair amount of time correcting hallucinated figures and citations from Claude, references that simply did not exist. Perhaps Amodei hallucinates too.


Lars Christensen is an economist, Head of Analysis and co-founder of PAICE, and external lecturer at Copenhagen Business School’s Department of Digitalization. He is the originator of Market Monetarism and writes The Market Monetarist blog.

Contact: LC@paice.io

A new service: Expert Briefings with Lars Christensen

Regular readers of this blog will know that most of what I do here is explain, analyse, and argue about macroeconomics, monetary policy, and increasingly artificial intelligence. That work is public and free – and I intend to keep it that way.

But over the past year, we have increasingly been asked – through PAICE, the consultancy I co-founded – to bring that kind of thinking directly into organisations. Investment committees, boards, executive teams, and strategy sessions.

So we have formalised that offering under the name Expert Briefings.

What is an Expert Briefing?

It is not a standard presentation. It is a tailored, interactive session where your organisation gets focused access to specialist knowledge on the topics that matter most to you right now. We start from your industry, your risk exposure, and your questions – and work from there.

At PAICE, we cover three interconnected areas:

Macroeconomics, monetary policy, and financial markets – global growth, inflation, central bank policy and communication, interest rates, currencies, commodities, and financial imbalances. The question we always come back to: what does any of this actually mean for your business, your investments, and your risk picture?

Geopolitics and business risk – trade conflicts, energy markets, security policy, and geopolitical shifts. And critically: the concrete implications for companies and investors who have to make decisions in an uncertain world.

Artificial intelligence and technology – the latest developments in AI, what they mean for your specific sector, and how you position yourself strategically when the pace of change is this fast.

These three areas can be addressed individually or in combination, depending on what your organisation needs.

Who is this for?

The organisations we work with range from pension funds and asset managers who need regular macro and market input for investment committees, to CEOs, CFOs, and boards who want an independent external perspective on economics, monetary policy, technology, and geopolitics. Exporters and multinationals dealing with currency risk and trade policy. Banks building internal analytical capacity. Technology companies navigating AI regulation and competitive dynamics.

The common thread is that they want direct access to expertise – not a consultant’s slide deck, but a genuine conversation with someone who has spent decades thinking about these questions.

Formats

We try to be flexible about how this works in practice. Some organisations want a regular monthly or quarterly cadence to stay continuously updated. Others prefer an on-demand retainer – a standing arrangement that allows them to call on a briefing when the need arises, without going through a full procurement process each time. And sometimes the need is simply a one-off session for a strategy day or board meeting.

For topics that genuinely benefit from multiple perspectives, we can also convene an expert panel – two or more specialists combining, for instance, macroeconomics with AI and technology, or geopolitics with energy markets.

Who delivers the briefings?

The briefings are anchored by me. I spent fifteen years as Head of Emerging Markets Research at Danske Bank, including co-authoring the 2006 “Geyser Crisis” report that identified the risks building in the Icelandic banking system ahead of the 2008 collapse. Today I serve as co-founder, co-owner, and Head of Analysis at PAICE, where our work sits at the intersection of macroeconomic analysis, monetary policy, AI, and data.

For broader panels and cross-disciplinary sessions, I draw on PAICE’s network of specialists across macroeconomics, monetary policy, financial markets, artificial intelligence, technology, and geopolitics.

Getting in touch

If any of this sounds relevant for your organisation, we are happy to have an initial conversation about what might make sense.

Contact: hello@paice.io

Speaker platform: http://www.globaletanker.dk (in Danish)

The Blue Owl in The Coal Mine – Private Credit: The New Subprime?

Blue Owl is one of Wall Street’s big names in private credit – a manager of nearly $300 billion in assets. The company’s logo is an owl: the animal that, according to legend, can see everything, even in the dark.

On February 19, 2026, Blue Owl restricted withdrawals from one of its retail-focused funds and quickly sold $1.4 billion in loans to raise liquidity. Investors who wanted out couldn’t get out. The stock has fallen nearly 60% over 13 months.

The blue owl in the coal mine had not seen it coming.

It reminds us of something we have seen before.

In the mid-2000s, many were warning about the American housing market. Lending standards were too loose. Too much capital was chasing too few good loans. And those bearing the risk often didn’t know they were doing so. We know how that story ended.

There is now a new part of the financial system that deserves the same attention. It is called private credit. Most people have never heard of it – and that is itself part of the problem.

Regulatory arbitrage and monetary policy created this market

Private credit is fundamentally a child of two policy choices.

The first was regulation. Banks were subjected to far stricter capital requirements via Basel III after 2008. The intention was understandable enough – but the consequence was predictable: capital and credit demand do not disappear because banks withdraw. They move precisely to where regulation does not follow.

Private credit funds operate without equivalent capital requirements, without the same transparency requirements, and without meaningful macroprudential oversight.

This is the definition of regulatory arbitrage – and it is a foreseeable consequence of asymmetric regulation, not an accidental side effect. The IMF noted in its Global Financial Stability Report in April 2024 that insurance companies were also incentivized to move into private credit precisely because the capital charges are lower and less risk-sensitive than those applicable to commercial banks. Regulation did not reduce risk. It relocated it.

The second was monetary policy – but let us be precise here. This is not the story of a decade of near-zero rates after 2008. That is the wrong diagnosis.

This is the story of what happened from 2020. The COVID response triggered the largest expansion of the American money supply in peacetime history. M2 grew by nearly 27% year-over-year in early 2021 – the highest peacetime rate since the Federal Reserve was founded in 1913. Some of this expansion was justified given the lockdowns. But it continued far too long.

That sent a tsunami of capital into private credit, because institutional investors desperately sought returns in a world where traditional fixed income products yielded nothing.

The market grew from $2 trillion to $3 trillion in precisely that period. When the money supply and rates finally turned from 2022, enormous sums were already locked into illiquid structures held by borrowers priced for a world of extraordinarily cheap money.

Regulation that does not eliminate risk but merely displaces it, combined with a tsunami of liquidity: that is precisely the cocktail that produced the subprime crisis.

The Austrian school element: the AI boom as malinvestment

It is worth drawing on an older analytical tradition here – but with an important caveat.

Friedrich Hayek and Ludwig von Mises were the two central figures of the Austrian school – a tradition in economic thinking that flourished in interwar Vienna before spreading to London and Chicago. Hayek received the Nobel Prize in Economics in 1974.

Their theory of the business cycle, known as Austrian Business Cycle Theory (ABCT), provides an explanation of what happens when central banks hold interest rates artificially low for too long.

The argument is simple. When the rate is lower than the market would have set on its own, a false signal is sent to investors: capital is cheaper than it really is.

This attracts investment into projects that only look profitable at artificial financing costs – not at the natural rate. Hayek and Mises called this malinvestment – misallocations that look sensible during the boom, but are exposed brutally when monetary policy normalises.
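
The mechanism can be stated in discounted-cash-flow terms. A minimal sketch, with every figure invented for illustration: the same project is value-creating at a suppressed interest rate and value-destroying at the natural one.

```python
# Illustrative NPV arithmetic; all figures are invented.
def npv(cashflows, rate):
    """Net present value; cashflows[0] is the upfront outlay at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical project: a large outlay, thin cash flows, repayment at the end.
project = [-1000, 30, 30, 30, 30, 1030]

print(f"NPV at 2% (suppressed rate): {npv(project, 0.02):+.0f}")  # positive
print(f"NPV at 6% (natural rate):    {npv(project, 0.06):+.0f}")  # negative
# Viable at the suppressed rate, value-destroying at the natural one: malinvestment.
```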

But here is the caveat – and it is important. Austrian Business Cycle Theory is a theory of the unsustainable boom. It explains how the misallocations arise. It does not explain what happens next – and in particular, it does not tell us whether the bust will become a catastrophe. That depends on something else entirely: monetary policy.

The AI boom is a textbook example of the first part of that story.

What we might call ‘the Tesla boom’ of 2020-21 was to a large extent what financed the training of ChatGPT. The COVID liquidity injection sent capital into tech equities and pushed financing costs for AI companies below the natural rate – precisely the Hayekian mechanism, just with a modern transmission. Credit channels kept the expansion going far beyond what the underlying productivity numbers could justify.

Now we are in the middle of the massive buildout of data centres needed to make the business models profitable – the four largest tech companies spent $360 billion on AI infrastructure in 2025 and are planning $650 billion in 2026. And the financing story is becoming increasingly speculative: AI companies are seeking capital in the Middle East, and the Trump administration is talking about using Fannie Mae and Freddie Mac to finance the sector. It resembles the Icelandic banks in 2006-07, scrambling for new liquidity sources while the warning lights were flashing.

The underlying problem is that the business model that must repay the debt is far weaker than assumed. Microsoft’s AI chief Mustafa Suleyman recently promised that AI would automate most office jobs within 12-18 months. Yet a large National Bureau of Economic Research study of 6,000 senior executives shows that nearly 90% report AI has had no measurable effect on employment or productivity, and Penn Wharton estimates AI’s contribution to productivity growth at 0.01 percentage points in 2025.

That is not an argument against AI as a technology in the long run – I am, after all, a huge fan of large language models and an admittedly addicted user of them – but it is an argument against pricing $650 billion in annual infrastructure investments on promises that the data consistently contradict.

Private credit and the AI boom are not two separate stories. They are two symptoms of the same misallocation of capital – both created by the same COVID fiscal and monetary expansion, both now under pressure as the bill is presented.

Moral hazard under the Trump boom

On top of regulatory arbitrage and the COVID monetary expansion came a third element: moral hazard on a grand scale.

I have for some time been warning about the dangerous fusion of the Trump administration and parts of American business – particularly the tech sector. When Trump administration AI czar David Sacks says “we can’t afford to go backwards”, I read it as an implicit promise: the government will underwrite the AI boom.

And when large tech companies are increasingly defined as strategically important – as “too big to fail” – there arises precisely the incentive problem that economist Robert Hetzel identified in his analysis of the financial crisis: financial institutions take greater risks when they know others bear the consequences. Gains are privatised. Losses are socialised.

This is not a new phenomenon. It is the same dynamic we saw with Fannie Mae and Freddie Mac before 2008. And it almost always ends the same way.

The cockroaches

The warning lights began flashing in earnest in the autumn of 2025. Let us go through the events in chronological order – because the pattern is more troubling than any individual episode in isolation.

Tricolor Holdings collapsed first. The company operated as both a used car dealer and a subprime lender – packaging high-yield car loans to credit-impaired borrowers into AAA-rated securities and selling them on to investors. When the repayments failed, the whole construction collapsed.

Fifth Third Bank is now accusing Tricolor of fraud, claiming the company pledged the same assets as collateral for multiple loans simultaneously. Investigators are reviewing what may prove to be a manipulated loan database.

First Brands Group followed shortly after. Just weeks before the bankruptcy, the company was marketed by Jefferies Investment Bank as an opportunity for $6 billion in lending – and the company was said to have nearly $1 billion in cash.

It collapsed anyway. It had borrowed massively for acquisitions, then borrowed again against invoices and inventory – and when tariff pressure hit imported components, there was no margin left. Jefferies, Millennium Management, JPMorgan, Barclays, and Fifth Third Bank are all exposed.

Jamie Dimon put it precisely: “When you see one cockroach, there are probably more.”

Then came a series of events that together sketch a pattern. BlackRock wrote down a private loan from full value to zero in three months. This is not as surprising as it sounds: private credit loans are not traded on markets but valued using internal models – what is known as mark-to-model. The IMF documented in its Global Financial Stability Report in 2024 that adjustments to private credit valuations are systematically smaller and slower than in public markets, and that it takes at least four quarters for prices to converge after a shock.

Losses are there before they are visible.

A related warning sign is the sharp rise in payment-in-kind interest – where borrowers pay their interest by adding it to the loan principal rather than in cash. The IMF found that the payment-in-kind share in business development company portfolios doubled between 2019 and 2023. Borrowers are not defaulting. They are deferring.

Blue Owl restricted withdrawals and sold assets to raise liquidity. Blackstone’s large private credit fund BCRED experienced redemption requests in early 2026 that exceeded the fund’s quarterly cap. And Morgan Stanley and Cliffwater have both been forced to limit withdrawals from their retail-focused funds after investors tried to redeem more than the structures allow.

The official default rate has risen steadily: from 1.76% in Q2 2025 to 1.84% in Q3 and 2.46% in Q4 2025 according to Proskauer’s Private Credit Default Index. That still sounds low – but when debt restructurings and creative loan extensions are included, the real rate approaches 5%.

Fitch puts the actual default rate in private credit at 5.8% through January 2026 – the highest since the index began. In February 2026 alone, 11 default events were recorded, nearly double the monthly average for all of 2025.

Mohamed El-Erian describes it as the “ATM scenario”: investors who cannot exit illiquid positions begin selling what they can sell – regardless of asset class. “If you can’t sell what you want, you sell what you can.”

That is the classic contagion mechanism – not losses, but liquidity pressure that propagates. It resembles a traditional bank run.

And lurking behind the share prices is a mechanism that has not yet fully played out.

The large private credit managers are today rated at the lower end of investment grade. One or two notches down, and they fall below the investment grade threshold – and this is not just a question of prestige. Pension funds, insurance companies, and large bond funds operate under mandates that forbid them from holding securities below investment grade.

A downgrade to junk triggers automatic forced selling from all these institutional investors – not because they want to, but because they are regulatorily obligated to. The IMF estimated in 2024 that pension funds and insurance companies globally had more than $600 billion invested in private credit funds – a figure that has grown rapidly since.

Some of the world’s largest pension funds, with combined assets exceeding $7 trillion, have significantly increased their allocation to private credit while simultaneously raising their financial leverage. The IMF identified this combination – illiquid assets and leveraged balance sheets – as a specific systemic risk. When collateral calls come, these institutions sell what is liquid. That is El-Erian’s ATM scenario in institutional form.

UBS estimates that in a severe AI disruption scenario, the US private credit default rate could hit 13% – twice the stress level for leveraged loans.

That creates precisely the cascade we know from 2008: forced selling pushes prices down, which pushes other funds toward the same threshold, and suddenly it is no longer just private credit under pressure, but all liquid assets that investors must sell to create room on the balance sheet. El-Erian’s ATM scenario, but in a rules-based and automated version.
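
The mechanics of that cascade are simple enough to sketch. The following is a deliberately minimal toy model, not a calibration; the buffers, the price impact, and the trigger rule are all invented for illustration.

```python
# A minimal fire-sale loop; buffers, price impact, and the trigger rule
# are all invented for illustration.
funds = [0.30, 0.20, 0.12, 0.08, 0.05]  # hypothetical rating "buffers" per fund
price = 1.0                             # common asset price, normalised
impact = 0.06                           # assumed price impact per forced seller

price -= 0.10                           # initial shock: one markdown or downgrade

forced: set[int] = set()
while True:
    # A fund is forced to sell once the cumulative price decline eats its buffer.
    newly = [i for i, buf in enumerate(funds)
             if i not in forced and buf < (1.0 - price)]
    if not newly:
        break
    forced.update(newly)
    price -= impact * len(newly)        # each forced sale pushes the price lower

print(f"price after cascade: {price:.2f}; forced sellers: {len(forced)} of {len(funds)}")
```

The instructive feature is the loop itself: no fund in the sketch makes a bad decision, yet each forced sale tightens the constraint on the next.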

The blue owl in the coal mine was not the only bird in the mine.

The near-perfect copy of 2008

One must be precise here. $3 trillion sounds like a lot, but in global financial context, the private credit market is relatively modest. The American equity market alone is about 15 times larger. The global bond market is more than 30 times larger.

But that is the wrong way to frame the question.

The subprime market was not the world economy’s largest market in 2006 either.

It still triggered the worst financial crisis since the 1930s. Then-Federal Reserve Chairman Ben Bernanke said himself in 2007 that the problems were “contained”.

They were not – because it was not about subprime’s absolute size, but about its connections to the rest of the financial system and what it revealed about the broader misallocation of capital. It is worth noting that the IMF’s own Global Financial Stability Report from April 2024 concluded that the financial stability risks from private credit “appear contained at present”. That is a precise echo of Bernanke’s formulation. It may prove equally accurate.

That is precisely the same point here. Private credit may not be large enough in itself to trigger a global crisis. But it can be the symptom of a far larger misallocation of capital – driven by the same COVID expansion, the same artificially low rates, and the same moral hazard – that will manifest as losses elsewhere in the financial system as reality catches up with the valuations made in a world that no longer exists.

This is where the data matters. U.S. nominal GDP growth remained relatively stable at around 5-6% through 2006 and into early 2007 – even as the first subprime warning signs appeared and global imbalances were becoming visible. The subprime market was wobbling. But the economy was not yet in crisis. It was only when those financial tensions translated into a de facto monetary contraction – and NGDP growth began to fall sharply through 2007-08 – that a financial correction became a macroeconomic catastrophe.

Critically, central banks made it worse. Spooked by what they perceived as bubble re-ignition – and facing rising headline inflation from an oil price surge – they turned hawkish at precisely the wrong moment (a number of central banks even hiked interest rates during the summer of 2008). The result was a collapse in nominal spending that turned a necessary market correction into the Great Recession.

Secondary deflation, as Hayek called it, is not a natural consequence of a bust. It is a consequence of monetary policy failure.

The same risk exists today. Private credit may well be mis-priced. AI investment may well include substantial malinvestment. These corrections can be painful. But they do not have to become systemic crises. The key variable is not the size of the bust. It is whether the Fed ensures nominal stability through it.

The oil shock

On February 28, 2026, the US and Israel launched “Operation Epic Fury” – a coordinated strike on Iran that killed Ayatollah Ali Khamenei and threw the country into chaos.

Iran responded with drone and missile attacks on the Gulf states and effectively closed the Strait of Hormuz – the narrow passage through which 20% of the world’s oil consumption passes daily. Brent crude, which was trading below $70 per barrel at the start of February, hit above $100 earlier this week: a price increase of more than 40% in under four weeks.

Private credit is already cracking, as described above. The AI boom is under pressure from rising financing costs – and higher energy prices are not exactly good news for the energy-intensive AI sector either. And now the oil price shock is a reality – the classic stagflation scenario that economists have feared repeating since 1973.

In that situation, the Fed faces a near-impossible choice. The mandates point in opposite directions: inflation is too high to justify easing, while the credit tightening calls for exactly that. This is the classic monetary policy dilemma that institutional rules and mandate structures make nearly impossible to resolve correctly – and it is precisely the dilemma that in 2008 turned a credit crisis into an economic catastrophe.

There is a way out of this dilemma – but it requires a framework the Fed does not have. A 4% NGDP level target would cut through the confusion between supply-side inflation and demand collapse.

Under such a framework, a central bank does not respond to rising oil prices by tightening – because oil price inflation does not represent excess nominal demand. It responds to falling nominal spending – which is the actual threat.
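
A minimal sketch of how such a rule separates the two cases, with stylised numbers invented for illustration: the target is a path for the level of nominal GDP, and the policy signal is the gap to that path.

```python
# Stylised NGDP level targeting; all numbers invented for illustration.
# The target is a 4%-growth PATH for nominal GDP. Policy responds to the
# gap to that path, not to the split between inflation and real growth.
base = 100.0
target_path = [base * 1.04**t for t in range(4)]

scenarios = {
    # Oil shock: inflation 6%, real growth -2%, nominal spending on track.
    "oil shock (supply)": [100.0, 104.0, 108.2, 112.5],
    # Demand collapse: inflation 1%, real growth -3%, nominal spending falls short.
    "demand collapse":    [100.0, 101.0, 99.0, 97.0],
}

for name, actual in scenarios.items():
    gap = 100 * (actual[-1] / target_path[-1] - 1)
    stance = "ease" if gap < -0.5 else ("tighten" if gap > 0.5 else "hold")
    print(f"{name:>20}: NGDP gap {gap:+.1f}% -> {stance}")
```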

In 2007, the Fed had no such framework. It does not have one today either. As the German-American economist Rudi Dornbusch put it: “No postwar recovery has died in bed of old age – the Federal Reserve has murdered every one of them.”

And if Kevin Warsh – Trump’s candidate to be the next Fed chairman – is at the helm, it becomes even harder.

Warsh is ideologically sceptical of quantitative easing – the central bank’s purchases of bonds to pump liquidity into the economy when rates hit zero – and has historically viewed the Fed’s balance sheet expansion as a problem rather than a solution.

In the scenario where rates hit the zero lower bound and conventional monetary policy runs out of road, the political and ideological resistance to reaching for precisely this instrument will be maximal – at precisely the moment it is most needed.

Add to this a chaotic White House that is actively undermining the central bank’s credibility and institutional independence, and the picture is complete.

This is not a prediction. Most scenarios probably end with a gradual correction – losses at the weakest actors, tightening of standards, consolidation. That is the normal credit cycle.

But the pieces are placed in a way that is uncomfortably reminiscent of 2008. Only with a markedly worse political situation, a Fed chairman who is not Ben Bernanke, and a supply shock that Bernanke never had to contend with.