It's Actually Just Going to Happen

A report on the situation re. AGI, as best I can figure it by standing in San Francisco Bay, holding my thumb out, and squinting.

18.08.24 12.17pm

N.B. I interned at OpenAI this summer, along with the rest of my team at Driftwood. However, I wrote this entire blog post before I started, when I was not privy to any information about their research or deployment plans, so I could record my opinions and measure how they changed. If you're interested in how those opinions held up after a month working there, check the appendix. None of this blog post reveals any of that information (move along, twitter hypemen) - this is just my personal vibe check.

In the first line of Situational Awareness, Leopold Aschenbrenner says "you can see the future first in San Francisco". Well, shit, I thought when I read that - I guess I might as well find out for myself. The truth is, as always, a bit more nuanced: while it's true that this city is the only place in the world where you can find everyone who's right about AGI, it's unhappily also the place with the most noise. In fact, there are many people (particularly capital-F startup Founders) who are fanatically, convincingly wrong about the future, leading each other to pumped-up valuations and inevitably disappointing realities. One of the "points" of Driftwood, in my mind, was to cut through that noise - I'm extremely happy with the crowd at our events, particularly how many genuine ML practitioners there have been, and how few "what are you building" type conversations. I don't feel the same dread there that I sometimes do when I'm deep in the city.

Aschenbrenner outlines the ongoing ideological battle in AI futurism between “doomers” and “e/accs”, and correctly, in my opinion, identifies the former as impractical and unempirical, and the latter as stagnationists in disguise. He instead pitches “AGI Realism” as the opinion that the “smartest in the space … have converged on”. However, he defines AGI realism almost entirely in terms of national security, teeing up an argument in the wider essay to nationalize the US AGI effort.

Instead, I want to explore more deeply what “AGI Realism” means as a future. An important caveat here is that I'm not talking about superintelligence, as Aschenbrenner does. If we hit fast recursively self-improving intelligence soon, all bets are off, as everyone knows. In my opinion, it's unlikely that kind of superintelligence will ever be intentionally deployed as a product anyway. Whether the current AI summer will lead to that scenario is anyone's guess, and I certainly don't discount the possibility. For the next part to make sense, however, we have to assume that the current paradigm of research and deployment will continue up to AGI, producing at most the kind of definitionally-true, softcore superintelligence that hides within LLMs in their best moments - one that exceeds human ability, but not by orders of magnitude.

The part of Aschenbrenner's blog post that stuck with me the most was:

[quoted excerpt from Situational Awareness]

This, to me, is the essential realisation everyone is going to have to make. In a sentence:

Yes, AGI is coming soon. And when it does, it will just be a regular product.

When OpenAI, DeepMind et al. started, most assumed that the eventual AGI they would make would be some unitary agent-deity, an oracle that could answer anything about e.g. the frontiers of physics, a god locked in the basement. That remained the picture, at least in public, well past 2019. This obviously sounds like incredible technology, and ChatGPT rightly gave a lot of people future shock when it arrived (as a side note, it's miraculous that OpenAI even survived that transition, which speaks to its uniqueness as a company). But how often does a new model release shock anyone today? The capability holes will be patched, long-term reliability will improve, characters will be tuned, etc., and language models will gradually approach and surpass human skill on every task, until they can do all economically valuable labour. Crucially, that AGI system will be available via an API endpoint like everything else.

Imagine for a moment there was a "smartphone futurist" community. Imagine that they found out, 20 years before anyone else, that computers were going to become small and touchscreen, and fit in your pocket. They started work on the attendant super hard problems - fast cellular networking, social media, phone addiction. Some thought these could never be solved, and that phones would irrevocably damage society. Then, one day, the thing actually just comes out, and it's sold by Apple and costs $299. It actually uses a lot of the pioneers' work. Everyone is future shocked - the phone sells out, it's a status symbol to carry one, etc. - and one day, you get one, and you figure it's all right. The pioneers are rightly praised for their foresight. Five years later, the entire world's turned upside down: everyone has one, some of the problems the phone safetyists warned about have happened, along with a lot more they couldn't have predicted, but nothing is really the end of the world, and nobody really notices the change. It's not a utopia, and it's not a dystopia, but it's probably a little better on average than before.

Obviously, AGI is a bigger deal than smartphones. But then again, how big a deal? Even if super-smart digital remote workers supplemented every team in the US, that's not much of a change for a team that already lets AI manage its emails to shit in Outlook or runs its office communication through Slack. Arguably some of these jobs are already bullshit, and automation will blow right past them, the same way the four-day workweek did. I'm not saying the advent of AGI won't be transformative - I'm saying it won't look transformative.

So then, in this world, what's left for all these people in San Francisco? Like AWS is to the Internet, AGI serving becomes a low-margin commodity business delivered at scale by big tech companies. It covers every use case well. Are marketing and interface design really cool enough to spend your life working on? Are you still “part of the revolution” if you're doing free sales work for OpenAI?

Because of all this, the vibe in San Francisco might become a little more dour than it was. Employees at AI labs realise the future awaiting them - not that they helped build a cataclysmic machine devil, but that they are going to have a job at a big tech company. NVIDIA remains the best investment anyone could've made, both as a shovel-seller in the hype cycle and as the provider of the actual infrastructure for the long-term commodity business. Startup founders twiddle their thumbs over years-old GitHub repos; kids who just arrived find they might have missed the bus.

I hope we have an insane future; I hope we reach superintelligence soon after general intelligence, and safely, because that would be much more fun. However, I suspect that this feeling I've had throughout my involvement with the community - that AGI might not be the end of the world or the start of a brand new one, but just another useful tool - might soon come to the fore for everyone. It seems to be the fate of all world-changing technology to eventually become boring.

Anyway, that's my read, and we'll see if I'm right.

Appendix: I wrote this entire blog post before starting at OpenAI, as a way to write down my expectations and measure how they updated. In general, I don't disagree much with past Spruce; however, I think he in parts underestimated just how far the current research paradigms will go in the next two years, and perhaps didn't quite figure out all the second-order effects of what he's talking about. It is, in fact, going to be an insane future, and boring is the last thing I'd call it.