In 1978, Charles Kindleberger published “Manias, Panics, and Crashes”, an instant classic history of investment booms and subsequent busts. Such booms can be divided into those that end up building something useful (such as a railway system in mid-19th-century Britain, the United States, and elsewhere) and those that do not (such as the Netherlands’ infamous 17th-century tulip mania and the subprime-mortgage madness of the early 2000s).
By any metric, the US and, by implication, the world, is now in an intense artificial intelligence (AI) speculative boom. But will all the investment pouring into the industry build something useful? For whom, and for what purpose? And if there is a downside, what will it look like?
Kindleberger’s work – and everything that has happened since 1978 – suggests that three salient questions should be used to assess investment booms.
First, does the boom involve more than just a run-up in asset prices (such as happened with US housing prior to the 2008 global financial crisis)? On this front, today there is definitely a big wave of investment in plants and equipment (such as data centres) in the US and elsewhere. Moreover, investment in information technology infrastructure – an important input for firms and government – could boost productivity and therefore help underpin economic growth. (An unfortunate corollary is a potentially significant environmental impact, owing especially to increased demand for electricity and water.)
Second, is the investment boom financed primarily by issuing debt (a major factor in the 2008 crisis)? For AI, the answer is decidedly mixed. While the biggest companies involved do have sufficient positive cash flow to cover what has already been spent, much supplier finance is apparently already being provided by some tech companies (to enable other companies to buy computer chips, for example). The credit risks involved in these relationships are murky, to say the least. Some of the collateral involved may become obsolete before the loans are paid off.
And as capital expenditure grows, so, too, does the exposure of credit markets, the banking system, and potentially even the government (if it can be persuasively argued that tech companies are “too big to fail” and thus need debt guarantees). Last month, Meta closed the largest-ever private capital deal with Blue Owl to finance its Hyperion data centre, with US$27 billion channelled into an off-balance-sheet special-purpose vehicle.
And that is just a drop in the bucket: It is estimated that US$3 trillion to US$7 trillion will be invested in AI infrastructure in the next five years. Tech companies have indicated that they will tap the debt markets, including with novel and aggressive financing arrangements. Private credit in particular is expected to provide roughly US$800 billion in the next two to three years, and is estimated to have totalled US$450 billion as of early 2025. It remains to be seen whether and how these bets will pay off.
The third question may be the most important for this moment: How exactly will this technology be used? Conversations with senior executives of large-cap corporations across traditional sectors – companies commonly presumed to provide high demand for AI solutions – confirm that while all expect to achieve significant savings and efficiencies from AI, almost none can highlight with confidence additional sources of revenue (such as new lines of business).
For example, banks would most likely capture efficiencies in document processing, fraud detection, risk management, regulatory compliance, algorithmic investment and trading, and marketing and customer insights. Industrial companies will probably realise efficiencies by reducing employment in clerical roles, inventory and resource management, marketing, and field engineering.
If the people who are displaced by AI can quickly find new, productive, and (ideally) high-paying jobs, then we are on our way to an acceleration of productivity growth – with beneficial effects for living standards and public finances. This was the effect of the 19th-century railway boom, at least in countries where institutions were inclusive enough to allow ordinary people to create companies, acquire new skills, and participate in trade unions. But in the face of other big waves of automation, economies that could not quickly generate enough new work faced serious labour-market problems, and the economy-wide effects on productivity were sometimes also disappointing.
The AI boom is similar. Yes, there is excess. Yes, investors and executives will make mistakes. And yes, most stock gains (and also losses) are likely to accrue to people who are already wealthy, because share ownership is unevenly distributed.
Despite all this, no country, company, or citizen anywhere will benefit from sitting on the sidelines. It might feel safer to do nothing now and wait for better versions of the technology to emerge, but that is no way to build skills for the future and create more good jobs. Moreover, it is the inventors and owners of new technology who influence standards – both technical rules and ethical principles – and drive the relevant policy agenda.
The US political elite loves innovation, for competitive advantage and as a source of political donations. Fearing China, the US tech sector is going full speed ahead on scaling up AI with minimal guardrails. Everyone else in the US and around the world must think hard about how to play the game.
How can your community adopt AI in more responsible ways, for example to improve the delivery of government services? How can your private sector use AI to create more good jobs? How can you ensure sufficient privacy protections? How can you protect children and other vulnerable groups against serious harm?
The path of technology can be shaped, and the path of the AI revolution is being shaped now. From canals and railways through to the internet age, a hard but simple lesson stands out: If you, your company, or your country sits it out and waits for the dust to settle, you may not get what you want and need from the technology.
Simon Johnson, a 2024 Nobel laureate in economics, is a professor at the MIT Sloan School of Management, the faculty director of MIT’s Shaping the Future of Work initiative, a co-chair of the CFA Institute Systemic Risk Council, an AI ambassador for the UK, and a former chief economist at the International Monetary Fund. Piero Novelli, a senior lecturer at MIT Sloan School of Management and Imperial College London, is Chair of the Supervisory Board of Euronext.
Corey Klemmer contributed to the drafting of this commentary.
Copyright: Project Syndicate