Leopold Aschenbrenner, a 2021 Columbia University graduate, revealed in a recent interview with podcaster Dwarkesh Patel that he was terminated by OpenAI in April for allegedly leaking confidential information.
Listen, I respect Dwarkesh’s work, and I can even respect some of the very young researchers claiming AGI will arrive by 2027, who undoubtedly want to profit from the movement, but Leopold is quite the promoter. Who knows, maybe he’s even the next Sam Altman 😬?
Look, situational awareness or situation awareness (SA) is the understanding of an environment, its elements, and how it changes with respect to time or other factors. Situational awareness is important for effective decision-making in many environments. But if the AGI or superintelligence narrative is even true, it feels like civilization, governments, and humanity don’t have a handle on it.
But how do you make sense of “situational awareness” in terms of AGI’s impact on humanity? The kid from Germany now living in San Francisco appears to have the answers: none other than Leopold Aschenbrenner.
So Many Roadblocks to AGI in Reality
We know that training data is running out, so why are all of these Venture Capitalists hyping AGI, which may not even be a fundamentally sentient, general-purpose learning system but more of a business tool? The watered-down definition of AGI that OpenAI, ML researchers, and scientists are using in the 2020s is not lost on us.
Time recently wrote about Epoch AI, a research institute investigating the trajectory of AI for the benefit of society. Their recent paper is fairly interesting.
June 4th, 2024: Will we run out of data? Limits of LLM scaling based on human-generated data. How will we achieve AGI in 2027 if we functionally run out of high-quality training data by 2026, or run into overtraining limits by 2028?
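For intuition on why the data wall matters, here is a minimal back-of-the-envelope sketch in Python. The stock size, current token count, and growth rate below are illustrative assumptions of mine, not Epoch AI’s estimates; the only point is that multiplicative growth in training tokens crosses any roughly fixed stock of human-generated text within a few years.

```python
# Back-of-the-envelope sketch (illustrative numbers, not Epoch AI's figures):
# a roughly fixed stock of high-quality human text vs. training-token
# demand that keeps growing multiplicatively every year.

STOCK_TOKENS = 3e14        # assumed usable human-generated tokens (illustrative)
demand_tokens = 1.5e13     # assumed tokens for a 2024 frontier run (illustrative)
growth_per_year = 2.5      # assumed yearly growth in training tokens (illustrative)

year = 2024
while demand_tokens < STOCK_TOKENS:
    year += 1
    demand_tokens *= growth_per_year

print(f"Under these assumptions, demand exceeds the stock around {year}")
# -> around 2028, in the same ballpark as the 2026-2028 window discussed above
```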
It’s doubtful LLMs are even the right architecture to reach general-purpose learning AGI, though they may deliver the sort of situational task agency the technological optimists hope to profit from. A lot of Venture Capital funding and Copilot-era hype is riding on it.
The Manifesto and claims of Leopold Aschenbrenner are still highly problematic; you can read his Situational Awareness text here. Leo is, of course, doing sales for his recently founded AGI Investment Firm.
I don’t yet know the name of his AGI Investment Firm.
His Manifesto isn’t much more than the usual techno-optimism lobbying, though, since he’s now actually starting an investment firm focused on AGI. Important people in San Francisco seem to think this is a good idea. A good investment decision? Time will tell.
Hi, I’m Leopold Aschenbrenner.
I recently founded an investment firm focused on AGI, with anchor investments from Patrick Collison, John Collison, Nat Friedman, and Daniel Gross. Before that, I worked on the Superalignment team at OpenAI.
In a previous life, I did research on long-run economic growth at Oxford’s Global Priorities Institute. I originally hail from Germany and now live in the great city of San Francisco, California.
The full podcast with Dwarkesh is fairly exuberant and fascinating, in an AI-bro sort of way. It’s also 4 hours and 30 minutes long. Here is a shorter excerpt.
How a US/China Superintelligence Arms Race Will Play Out – Leopold Aschenbrenner
Leopold’s colorful assessment of the geopolitical concerns around AGI.
Instead of claiming that the U.S. is a full two years ahead of China like Eric Schmidt does, Leopold claims we are perhaps just three months ahead.
Situational Awareness tries to frame how AGI might impact the decade ahead.
Increasingly, technological optimists are using academic (and pseudo-academic) sources to justify their business, marketing, and Venture Capital narratives.
They aren’t really concerned about the potential impact of AGI on society; their primary goal is to profit from it directly. Even as some of these “superalignment” researchers leave OpenAI, they are hyping it, and a weird brand of 2022 ChatGPT-variety AGI that seems to have more to do with replacing median workers than creating a sentient, alien, general-purpose learner with any semblance of self-determination.
AGI is the central marketing pillar of OpenAI’s mission.
Infographics from the Manifesto
Hyping the “AGI by 2027” Narrative
AI Systems Surpassing Human Capability?
Training Compute is Rising Quickly, 2024
Compute Efficiency Doubling Every ~8 Months
Is Artificial General Intelligence going to arrive in the 2020s?
I asked an AI researcher, lawyer and writer to analyze some of the contents of the Situational Awareness AGI manifesto, or whatever you want to call it.
This is what he had to say:
By , June 2024.
A response to “Situational Awareness”
What did you do when you were in your early twenties?
For me, life was all about getting drunk on the weekends and going to university.
At this age, Leopold Aschenbrenner was working with Ilya Sutskever on OpenAI’s Superalignment team. Before that, he graduated with honors from Columbia University at 19 and did research on economic growth at Oxford University; now, at the ripe old age of 24, he has founded an AGI-focused venture capital fund with other Silicon Valley insiders.
Very recently, and for reasons that are not entirely clear, Aschenbrenner was fired from OpenAI, allegedly for revealing confidential information. Now unrestrained by OpenAI’s burdensome non-disparagement agreements, he has opened up about his views on the dangers of AI superintelligence in a 165-page manifesto titled “SITUATIONAL AWARENESS: The Decade Ahead”.
“Situational awareness” is a disturbing read. I see it as a testament to just how out of touch the techno-elite is becoming from the mortal peasants they are surveilling. The manifesto is not guided by facts, data, or rationality but by fantasy math, megalomania, and a religious faith in tech.
Let’s look at a few of Aschenbrenner’s central and most strange claims, which are presumably echoed among OpenAI employees and certain technology investors in Silicon Valley.
AI will continue to grow exponentially
“I make the following claim: it is strikingly plausible that by 2027, models will be able to do the work of an AI researcher/engineer. That doesn’t require believing in sci-fi; it just requires believing in straight lines on a graph. “
Then Aschenbrenner refers to the graph below.
Here are a few questions to consider:
Does more compute equal stronger AI capabilities?
Can AI become profitable?
Can we really say that GPT-4 has the same level of intelligence as a “Smart High Schooler”?
A study shows that scaling is reaching its limits (training AI with more data and more compute leads to diminishing increases in AI’s capabilities), seed funding in AI startups has sharply declined in the latest quarter, and investments in AI are humongous while the returns so far are comparatively negligible.
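To make the diminishing-returns point concrete, here is a small, hedged sketch of a Chinchilla-style scaling curve. The functional form L(N, D) = E + A/N^α + B/D^β and the constants below are approximately those reported by Hoffmann et al. (2022); treat them as illustrative rather than as the figures from the study cited above.

```python
# Hedged illustration of diminishing returns from scaling, using a
# Chinchilla-style loss curve L(N, D) = E + A/N**alpha + B/D**beta.
# Constants are approximately those reported by Hoffmann et al. (2022).

E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(params, tokens):
    """Predicted pretraining loss for `params` parameters and `tokens` tokens."""
    return E + A / params**alpha + B / tokens**beta

# Each row scales parameters and data by 10x; the loss improvement
# shrinks toward the irreducible term E.
prev = None
for scale in range(5):
    n, d = 1e9 * 10**scale, 2e10 * 10**scale
    current = loss(n, d)
    delta = "" if prev is None else f" (improvement {prev - current:.3f})"
    print(f"params={n:.0e} tokens={d:.0e} loss={current:.3f}{delta}")
    prev = current
```

Each 10x increase in parameters and tokens buys a smaller absolute drop in loss, which is the mathematical sense in which scaling “reaches its limits” even before the training data runs out.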
It’s a provocative statement to say that GPT-4 has the same level of intelligence as a smart high schooler. This is true only if we look at narrow benchmark tests, but these kinds of tests are not accurate measures of intelligence. As Melanie Mitchell says:
“AI performance on benchmarks is not necessarily a good predictor for AI performance in the real world.”
Most obviously, the questions and answers in the benchmark tests could be “contaminated”, meaning they are included in the AI’s training data. As OpenAI refuses to share the sources of its models’ training data, it’s impossible to confirm or deny.
AI will replace workers
“By the end of this, I expect us to get something that looks a lot like a drop-in remote worker. An agent that joins your company, is onboarded like a new human hire, messages you and colleagues on Slack and uses your softwares, makes pull requests, and that, given big projects, can do the model-equivalent of a human going away for weeks to independently complete the project. You’ll probably need somewhat better base models than GPT-4 to unlock this, but possibly not even that much better—a lot of juice is in fixing the clear and basic ways models are still hobbled.”
Considering recent rates of progress, it’s not inconceivable that we could have something like an autonomous AI worker as imagined by Aschenbrenner within the next 1-2 years.
But is it desirable?
Aschenbrenner has likely never worked a normal job for a day in his life. Statements like these, though perhaps not intended to, clearly show how highly the technological upper class values the normal jobs that allow regular people to make a living: not highly at all.
How would you feel about working next to an AI agent? A University of Pittsburgh study shows that stock workers working alongside robots were more prone to substance abuse and mental health disorders, although the number of work injuries decreased.
That being said, nothing indicates that an AI agent can replace an average office worker in 1-2 years. Any normal person who has worked with GPT-4 or Google Gemini knows that these models are not remotely understanding or intuitive enough. Honestly, AI tools are not very useful for many tasks, and frequent users of ChatGPT, Google Gemini, and Microsoft Copilot constitute a very small minority of users across countries and continents (stats from a report by the Reuters Institute and Oxford University, May 2024).
OpenAI’s Revenue
“Reports suggest OpenAI was at a $1B revenue run rate in August 2023, and a $2B revenue run rate in February 2024. That’s roughly a doubling every 6 months. If that trend holds, we should see a ~$10B annual run rate by late 2024/early 2025”
How can a grown-up person make a statement like this? These projections are literally pulled out of thin air. It’s like saying: “I can jump 1 meter today. If I train, I should be able to jump 1.5 meters in 6 months. If that progression holds, I should be able to jump 10 meters in 10 years.”
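For what it’s worth, the arithmetic itself is trivial. Here is a minimal sketch of the extrapolation, using only the two run-rate figures quoted above and the assumption that the 6-month doubling simply never stops; every figure after February 2024 is pure extrapolation, which is exactly the objection.

```python
# Naive extrapolation of the quoted run rates: $1B (Aug 2023) to $2B
# (Feb 2024) implies a doubling every 6 months. Letting that assumption
# run unchecked produces absurdly large numbers very fast.

run_rate_billion = 1.0              # annualized run rate in $B, Aug 2023

for half_years in range(1, 9):      # eight 6-month doubling periods
    run_rate_billion *= 2
    years_out = half_years / 2
    print(f"Aug 2023 + {years_out:.1f} yr: ~${run_rate_billion:.0f}B run rate")

# If the trend "held" until mid-2027, it would imply a ~$256B run rate,
# larger than almost any software business in existence.
```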
AGI as a super-secret, powerful weapon
“We’re developing the most powerful weapon mankind has ever created. The algorithmic secrets we are developing, right now, are literally the nation’s most important national defense secrets—the secrets that will be at the foundation of the US and her allies’ economic and military predominance by the end of the decade, the secrets that will determine whether we have the requisite lead to get AI safety right, the secrets that will determine the outcome of WWIII, the secrets that will determine the future of the free world.”
What?
Aschenbrenner is deeply concerned about the Chinese Communist Party developing AGI before the US or stealing the model weights or algorithmic secrets to this very important military weapon. To Aschenbrenner, the chatbots that OpenAI is currently building are more important to national security than nuclear weapons. He offers no explanation to a curious reader who may be wondering how AGI is “the most powerful weapon mankind has ever created”. It seems like worrying about overpopulation on Mars.
What I learned from reading “Situational Awareness” is that the next generation of privilege-blind Silicon Valley investors is even more out of touch than expected. It’s really sad, and it feels like a cosmic joke, that the world’s most affluent class of people, who legitimately have the resources to change the world for the better, are living and thinking in a way that is completely divorced from reality.
Editor’s Last Points
What do you think: are Leopold Aschenbrenner’s claims helpful or enlightening? Is this even a good time to form yet another AI-related investment fund, as more startups choose to use the ‘AGI fallacy’ as part of their marketing narrative?
Listen, I probably read TechCrunch’s Venture Capital coverage more than the next guy, and outside of OpenAI, a lot of the well-funded AI research and Generative AI startups are struggling even to generate revenue, because there’s no existing demand for many of their products. There’s also a lot of competition for enterprise opt-in, which doesn’t necessarily result in great ROI yet.
A full 18 months after ChatGPT, we aren’t materially further along in the supposed story of AI changing everything. Techno-optimists would have you believe that Generative AI is as big an invention as the internet itself.
Layoffs Continue at Big Tech
Companies like Tesla, Amazon, Google, TikTok, Snap, and Microsoft have conducted sizable layoffs in the first months of 2024. Zoom wants to make digital twins of us so they can attend meetings on our behalf, and many companies like Figma are using AI to improve cost management so they will need fewer employees.
Software and Cloud Players Now Have to Compete with Demand for Gen AI
Furthermore, there’s evidence that Software and Cloud companies will have to compete directly with Generative AI companies, as AI may continue to erode capitalism and shrink the number of companies in the public market. This means deal sizes are going down for many software and cloud names.
Artificial intelligence represents “a competing priority,” Veeva CEO Peter Gassner recently said.
Salesforce’s disappointing revenue and weak guidance sent the stock to its biggest drop since 2004.
A variety of Software and Cloud players in Q1 Earnings are showing disappointing projections and increasing competition for their products.
Customers were also becoming more hesitant to commit to multi-year deals in Software, CRM, Cloud, and even RPA, even as investment and ROI in Generative AI remain for the most part speculative, and it sometimes creates more work than the productivity it supposedly boosts or the time it’s supposed to save.
When you look at the AI bubble, it would be fairly easy to make a data-based argument against anything like AGI arriving anytime soon.
But be my guest and read the entertaining literature (165 pages) that Leo has founded his AGI Investment Fund (name TBA) around:
AGI’s Circle of Life Meme
Maybe Leopold Aschenbrenner will become a technological titan one day, or simply make the Midas List of VCs prophesying AGI for profit. It’s officially a trend.
Read More in AI Supremacy