Is A.I. Actually a Bubble?

Source: The New Yorker
The narrative of boom and bust is familiar—but also out of step with the possibilities of a new technology, Joshua Rothman argues.

In the following weeks, with further help from me and A.I., Peter made a game based on the light-cycle duels in the movie “Tron,” complete with music and a score-keeping system. He sketched the beginnings of a “library simulator,” and finished his own arcade game, Dot in Space, about a tiny spaceship travelling at warp speed.

Whenever he hit a potentially momentum-killing bump in the road, A.I. enabled us to roll through it. At my request, the systems began pointing us toward more sophisticated coding environments—Construct, GDevelop, Godot Engine, GameMaker—and suggesting more ambitious projects. Last weekend, he stayed up late, programming a polished version of Asteroids while wolfing down Cheerios and gulping from his water bottle as though it were an energy drink.

Since Peter is a kid, and I’m a dad, all this can seem cute and quaint. Isn’t it nice that A.I. can help a young person learn to code, and an older one become a coding tutor? But consider what’s happening from a different perspective. In “The Wealth of Nations,” Adam Smith described the “acquired and useful abilities” of a worker as a kind of “fixed capital”—something akin to a hunk of real estate or piece of equipment. It wasn’t until the nineteen-sixties that an economist named Theodore Schultz coined the term “human capital” to describe the ongoing, dynamic process through which people invest in improving themselves. Schultz realized that individuals spend a lot of time, money, and effort becoming more capable. They go to night school, network, read self-help books, and tend to use their free time “to improve skills and knowledge.” The work of improving human capital often happens out of sight. But the “simple truth,” he argued, was “that people invest in themselves and that these investments are very large.” Schultz suggested that these investments, which improve “the quality of human effort,” might account “for most of the impressive rise in the real earnings per worker” that economists had observed in the preceding decades.

Today, it’s obvious that companies and organizations benefit greatly from people with lots of human capital. Meetings are more useful when they involve knowledgeable participants; a product improves when the team building it possesses a wide range of skills.
What’s less obvious is that companies and organizations simultaneously struggle to recognize and take advantage of changes in human capital. Suppose someone is hired to do one job, and then acquires skills that qualify her for another. Ideally, the organizational chart would shift around her as she becomes more capable; in practice, the job is often a prison. And when a worker breaks out of that prison, by getting a job elsewhere, she takes her human capital with her. For this reason, from the perspective of the company, it’s almost as though the ideal hire is someone who works feverishly to build up their human capital until their first day of work, and then suddenly slows down, becoming a highly skilled cog in the machine. Organizations want their workers to continue improving themselves—but not too fast, lest they outgrow the systems in which they’re enmeshed.

Luckily for managers, building human capital takes a long time. Or, at least, it used to: artificial intelligence is, among other things, a technology that speeds up learning and increases capability. Millions of people now use large language models. They’re not all flirting with their chatbots; instead, they’ve discovered that, with the help of A.I., they can perform tasks they’ve never done before, and learn quickly about subjects they’ve previously found inaccessible. What happens when you suddenly increase the speed with which human capital can accrue?

This is one of the challenges posed by A.I. to the business world, which is struggling to figure out what the technology is worth. Will A.I. “spend” lead to a corresponding return? The simplest way for a company to answer that question is to think in terms of new products or staffing cuts, which could generate revenue or lower costs, respectively. In its new report on “enterprise” A.I., released this week, OpenAI offers a number of case studies focussed on products that replace human labor. A typical example is an A.I. voice agent, useful for customer-service calls; the company says one such agent is currently saving companies “hundreds of millions of dollars annually.” All this makes it seem as though worker replacement is the logical endpoint of corporate A.I.

But it’s important to note that, both conceptually and as a matter of internal accounting, big companies often have difficulty figuring out how to integrate new technologies. In the nineteen-eighties and nineties, when I.T. departments were new, it was sometimes unclear how they could be internally justified. An I.T. department might spend millions each year on new computers, networking hardware, or productivity software. Did all that spending produce a return? How could its value be judged? If a large corporation installed a mainframe, it might replace some accountants. If an I.T. manager wanted to explain to her boss why computers mattered, the simplest thing she could say might have been that they could replace the typing pool.

As time went on, however, it became clear that the costs and benefits of information technology far exceeded what could be accounted for in this way. Modern companies reorganized themselves around computers; in this new world, the point of I.T. departments wasn’t to replace computer-dependent workers but to enhance their effectiveness. Workers began demanding more from their I.T. departments. In a development known as “consumerization,” the tools used by tech-savvy employees at home—such as smartphones—became more advanced than the ones provided at work; employees, who wanted to do more, began demanding upgrades. The upshot is that, today, when I.T. “spend” is proposed, no one insists that those investments do anything so crude as replace workers. The important question is whether new investments help existing employees accomplish their agendas, and keep up with their competitors at other firms.
The idea that the best use of A.I.—perhaps the only profitable use—is the direct replacement of workers combines two strains of thought: one stemming from speculations about A.I.’s future, and the other from the short-term, balance-sheet thinking that’s probably unavoidable when companies explore new technology. It is, meanwhile, profoundly at odds with the experiences many of us have while actually using A.I. Vast numbers of individuals pay for accounts with OpenAI, Anthropic, and other companies because they find that A.I. makes them more capable and productive. It is, from their perspective, a multiplier of human capital. If you have a fine-grained sense of what you want to accomplish—write software, analyze research, diagnose an illness, repair something in your house—A.I. can help you do it faster and better.

Companies today spend a lot of money to train their employees; even highly qualified white-collar workers are exposed to online seminars and sent to expensive retreats, in the hopes that they will return improved. Suppose that A.I. makes some employees five or ten per cent more knowledgeable and capable. How much should a company pay for that cognitive boost?

According to one narrative about A.I., the boost it provides will eventually be big enough to allow individual workers to replace teams. Some particularly optimistic observers suggest that, someday soon, we’ll see the first billion-dollar companies run by one or two A.I.-assisted individuals. Maybe there are some kinds of work for which this could be possible. But, if you’ve tried to use the technology to do your actual job, you’ve likely discovered its intrinsic limitations. A.I. systems aren’t smart or well-informed enough to make many important decisions; they lack critical context; they are disembodied, forgetful, unnatural, and sometimes glaringly stupid. Perhaps most significantly, they cannot be held accountable, and cannot learn on the job.
They can aid you in the execution of your informed ambitions—but they cannot replace you. And so the situation, broadly speaking, is that, at many companies, trying to replace workers with A.I. will be a grave mistake—not only because A.I. cannot replace those workers, but because it actually makes them more valuable. The businesses that figure this out first will be the ones to thrive.

When hype is at its height, anti-hype is both inevitable and valuable. The risk, however, is that it will become as extreme as the hype it hopes to puncture. I was in college from 1998 to 2002, at the apex of the first dot-com boom; I paid much of my college tuition by running a small startup with my roommates, mainly making websites and applications for other startups. Then, as now, countless companies offered products that didn’t add up. It was easy to predict that many of these businesses would fail, and that investors at all scales would lose a lot of money. Still, the underlying technology—the internet—was unquestionably powerful. It’s hard not to say the same about A.I. today.

And yet, compared to the dot-com boom, the story of artificial intelligence is weirder. When the internet arrived, people weren’t sure how to make money with it. Even so, there was a sense in which the technology itself was somewhat complete. It seemed clear that connectivity would get faster and more pervasive; beyond that, the uses to which the internet might be put—streaming media, e-commerce, collaboration, cloud storage, and so on—were already broadly apparent. Over the following decades, the engineering efforts required to create the modern internet would be prodigious; it would take extraordinary ingenuity to build the cloud, for example. But, from the beginning, the fundamental nature of the internet itself was more or less settled.

With A.I., it’s different. From a scientific perspective, the work of building and understanding A.I. is far from complete.
Experts in the field differ on important issues, such as whether increases in the scale of today’s A.I. systems will deliver substantial increases in intelligence. They disagree on conceptual issues, too, such as what “intelligence” means. On the all-important question of whether today’s A.I. research will lead to the invention of systems capable of human-level thinking, they hold strong, divergent views. People who work in A.I. tend to articulate their opinions clearly and forcefully, and yet there is no consensus. Anyone who weaves a scenario is disagreeing with a large cohort of her colleagues. Researchers will be answering many questions about A.I. empirically, by trying to build better A.I. and seeing what works.

The A.I. bubble, in short, is more than just a bubble—it’s a collision between scientific uncertainty and evolving business thinking. There are, at this moment, two big unknowns about artificial intelligence. First, we don’t know whether and how companies will succeed in getting value out of A.I.; they’re trying to figure that out, and they could get it wrong. Second, we don’t know how much smarter A.I. will become.

About the first unknown, though, we have some clues. We can say, from firsthand experience, that having an A.I. available to you can be really useful; that it can help you learn; that it can make you more capable; that it can assist you in better utilizing your human capital, and even in expanding it. We can also say, with some confidence, that A.I. cannot do many of the important things people do—that, except in certain narrow circumstances, it is better at enabling human beings than at taking their place. Meanwhile, about the second question—whether A.I. will get a lot smarter, so smart that it transforms the world—we know very little. We are waiting to find out, and even the experts can’t agree. Our challenge is to act on what we know, and not to let our guesses about the future overrule it. ♦
