Blackboard equations on a blue background. Credit: andresr via Getty Images

On Monday, OpenAI CEO Sam Altman outlined his vision for an AI-driven future of tech progress and global prosperity in a new personal blog post titled “The Intelligence Age.” The essay paints a picture of human advancement accelerated by AI, with Altman suggesting that superintelligent AI could emerge within the next decade.

“It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there,” he wrote.

OpenAI’s current goal is to create AGI (artificial general intelligence), a term for hypothetical technology that could match human intelligence across many tasks without the need for task-specific training. In contrast, superintelligence surpasses AGI: a hypothetical level of machine intelligence that could dramatically outperform humans at any intellectual task, perhaps even to an unfathomable degree.

Superintelligence (sometimes called “ASI” for “artificial superintelligence”) has been a popular but sometimes fringe topic in the machine learning community for years, especially since controversial philosopher Nick Bostrom published Superintelligence: Paths, Dangers, Strategies in 2014. OpenAI co-founder and former chief scientist Ilya Sutskever left the company in June to found a firm with the term in its name: Safe Superintelligence. Meanwhile, Altman himself has been talking about developing superintelligence since at least last year.

So, just how long is “a few thousand days”? There’s no telling exactly. The likely reason Altman picked a vague number is that he doesn’t know exactly when ASI will arrive, but it sounds like he thinks it could happen within a decade. For comparison, 2,000 days is about 5.5 years, 3,000 days is around 8.2 years, and 4,000 days is almost 11 years.
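For anyone who wants to check that math themselves, here’s a minimal Python sketch (the only assumption is an average year length of 365.25 days):

DAYS_PER_YEAR = 365.25  # average Gregorian year length, including leap years

for days in (2_000, 3_000, 4_000):
    # Convert each "few thousand days" figure into years
    print(f"{days:,} days ≈ {days / DAYS_PER_YEAR:.1f} years")

# Output:
# 2,000 days ≈ 5.5 years
# 3,000 days ≈ 8.2 years
# 4,000 days ≈ 11.0 years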

It’s easy to criticize Altman’s vagueness here; no one can truly predict the future. But as CEO of OpenAI, Altman is likely privy to AI research techniques coming down the pipeline that aren’t broadly known to the public. So even when couched in a broad time frame, the claim comes from a noteworthy source in the AI field, albeit one who is heavily invested in making sure that AI progress does not stall.

Not everyone shares Altman’s optimism and enthusiasm. Computer scientist and frequent AI critic Grady Booch quoted Altman’s “few thousand days” prediction and wrote on X, “I am so freaking tired of all the AI hype: it has no basis in reality and serves only to inflate valuations, inflame the public, garnet [sic] headlines, and distract from the real work going on in computing.”

Despite the criticism, it’s notable when the CEO of what is probably the defining AI company of the moment makes a broad prediction about future capabilities, even if he’s also perpetually trying to raise money. Building infrastructure to power AI services is foremost on many tech CEOs’ minds these days.

“If we want to put AI into the hands of as many people as possible,” Altman writes in his essay, “we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips). If we don’t build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people.”

Altman’s vision for “The Intelligence Age”

OpenAI CEO Sam Altman walks on the House side of the US Capitol on January 11, 2024, in Washington, DC. Credit: Kent Nishimura/Getty Images

Elsewhere in the essay, Altman frames our present era as the dawn of “The Intelligence Age,” the next transformative technology era in human history, following the Stone Age, Agricultural Age, and Industrial Age. He credits the success of deep learning algorithms as the catalyst for this new era, stating simply: “How did we get to the doorstep of the next leap in prosperity? In three words: deep learning worked.”

The OpenAI chief envisions AI assistants becoming increasingly capable, eventually forming “personal AI teams” that can help individuals accomplish almost anything they can imagine. He predicts AI will enable breakthroughs in education, health care, software development, and other fields.

While acknowledging potential downsides and labor market disruptions, Altman remains optimistic about AI’s overall impact on society. He writes, “Prosperity alone doesn’t necessarily make people happy—there are plenty of miserable rich people—but it would meaningfully improve the lives of people around the world.”

Even with AI regulation like California’s SB-1047 being the hot topic of the day, Altman didn’t mention sci-fi dangers from AI in particular. On X, Bloomberg columnist Matthew Yglesias wrote, “Notable that @sama is no longer even paying lip service to existential risk concerns, the only downsides he’s contemplating are labor market adjustment issues.”

While enthusiastic about AI’s potential, Altman also urges caution, though only vaguely. He writes, “We need to act wisely but with conviction. The dawn of the Intelligence Age is a momentous development with very complex and extremely high-stakes challenges. It will not be an entirely positive story, but the upside is so tremendous that we owe it to ourselves, and the future, to figure out how to navigate the risks in front of us.”

Aside from the labor market disruptions, Altman does not say how the Intelligence Age will fall short of being “an entirely positive story,” but he closes with an analogy about an occupation rendered obsolete by technological change.

“Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamplighter,” he wrote. “If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable. And if we could fast-forward a hundred years from today, the prosperity all around us would feel just as unimaginable.”
