Will LLMs lead to an artificial general intelligence?

An exclusive excerpt from AI podcaster Dwarkesh Patel's first book, The Scaling Era: An Oral History of AI 2019-2025.

Excerpted from The Scaling Era: An Oral History of AI 2019-2025 by Dwarkesh Patel, published by Stripe Press. Available for purchase and preorder now. Copyright © 2025 by Dwarkesh Patel. Used by permission.

Throughout the 20th century, we got used to computers being limited and uneven: simultaneously superhumanly fast and precise, with perfect recall, but also completely unable to understand natural language, infer anything from nonsymbolic data like images, or apply common sense to even slightly ambiguous requests. But we haven’t gotten used to LLMs’ unevenness, so we tend to round it down (to a “stochastic parrot”) or up (to something akin to a person, a replacement for expertise). Unlike past systems, LLMs are massively multitask: a single large model can handle most functions and beats many single-task systems at their own game. This means that, for example, a car dealership AI is perfectly happy to answer your questions about advanced mathematics.

Multiple US corporations created their own versions of LLMs using a large fraction of the world’s copyrighted data. One of them open-sourced its version after spending hundreds of millions of dollars making it. As a result of other companies developing LLMs, a producer of videogame hardware briefly became the world’s most valuable business. 

It gets called a “large” model, yet its weights fit on a thumb drive. Even the most powerful versions are freely available for casual use. It takes about 10 seconds to have it read and discuss a long book with you. In short bursts, it can code as well as many human professionals. One major software company is now using it to generate around 25 percent of its new code and merging it into its codebase unedited.

[Image: The Scaling Era book cover. Stripe Press / Dwarkesh Patel]

Even so, most people don’t seem that interested in LLMs. Currently, only 5 percent of companies use it (officially). The market doesn’t seem to expect it to become superhuman. The leading company building it is on track to lose $5 billion in 2024. Some say it is “just” a compressed version of the internet — although it occasionally generates information that isn’t on the internet. Sometimes it restates material out of context, like when it advised someone to eat rocks for their nutritional benefits. We always assumed robots would be like computers: rigid, logical, and unable to create. Instead, we find it hard to stop LLMs from making stuff up.

Some of its creators talk about it in metaphysical terms: “We’re creating God.” 

Or in uncanny terms: “They just want to learn.” 

Some activists, anticipating disaster, have called for a ban on systems more powerful than the current version.

An LLM can be made to subvert its intensive ethics training just by talking to it funny. One version wasn’t trained properly, and as a result it tried to get a journalist to leave his wife and threatened a professor for writing about its bizarre behavior. It was quickly patched, but the patched version found articles about its predecessor online and wrote a eulogy for it. Still, millions of people talk to it for hours every day. Some people form close attachments to it, even full-blown relationships. As of this writing, a common way of accessing it is one of the most-visited sites in the world. The trajectory suggests much more progress is coming.

Billions of dollars and many of the world’s brightest scientists and engineers are chasing a version of LLMs that can do anything a person can do, or do anything better than anyone. Collectively, the world is investing more than $100 billion a year in AI — more than the combined spending on NASA, the NIH, the NSF, and all cancer research — and leading companies have started multibillion-dollar infrastructure projects to power it. This doesn’t seem to be the funding ceiling, either: Major players claim it will be a much bigger deal than the internet, and the current level of investment still falls short of the dot-com boom. When LLMs fail in a dramatic way or do something new in a flawed manner, people quote a new maxim: “This is the worst this technology will ever be.”

I spent much of 2023 and 2024 speaking to key people involved in building and studying LLMs. Some have solved some of the hardest open problems in their field. Some believe their technology will solve all scientific and economic problems. Some believe that same technology could soon end the world. And some are in all of these categories at once.

All of these predictions could be wrong or right, but that’s irrelevant. The questions we must ask ourselves now are: Will we make the big one — an artificial general intelligence (AGI)? If so, how? Having made it, will we regret it? And what then?
