“Artificial General Intelligence”: an extremely brief FAQ
(Crossposted from twitter for easier linking.) (Intended for a broad audience—experts already know all this.)
When I talk about future “Artificial General Intelligence” (AGI), what am I talking about? Here’s a handy diagram and FAQ:
“Are you saying that ChatGPT is a right-column thing?” No. Definitely not. I think the right-column thing does not currently exist. That’s why I said “future”! I am also not making any claims here about how soon it will happen, although see discussion in Section A here.
“Do you really expect researchers to try to build right-column AIs? Is there demand for it? Wouldn’t consumers / end-users strongly prefer to have left-column AIs?” For one thing, imagine an AI where you can give it seed capital and ask it to go found a new company, and it does so, just as skillfully as Earth’s most competent and experienced remote-only human CEO. And you can repeat this millions of times in parallel with millions of copies of this AI, and each copy costs $0.10/hour to run. You think nobody wants to have an AI that can do that? Really?? And also, just look around. Plenty of AI researchers and companies are trying to make this vision happen as we speak—and have been for decades. So maybe you-in-particular don’t want this vision to happen, but evidently many other people do, and they sure aren’t asking you for permission.
“If the right-column AIs don’t exist, why are we even talking about them? Won’t there be plenty of warning before they exist and are widespread and potentially powerful? Why can’t we deal with that situation when it actually arises?” First of all, exactly what will this alleged warning look like, and exactly how many years will we have following that warning, and how on earth are you so confident about any of this? Second of all … “we”? Who exactly is “we”, and what do you think “we” will do, and how do you know? By analogy, it’s very easy to say that “we” will simply stop emitting CO₂ when climate change becomes a sufficiently obvious and immediate problem. And yet, here we are. Anyway, if you want the transition to a world of right-column AIs to go well (or to not happen in the first place), there’s already plenty of work that we can and should be doing right now, even before those AIs exist. Twiddling our thumbs and kicking the can down the road is crazy.
“The right column sounds like weird sci-fi stuff. Am I really supposed to take it seriously?” Yes it sounds like weird sci-fi stuff. And so did heavier-than-air flight in 1800. Sometimes things sound like sci-fi and happen anyway. In this case, the idea that future algorithms running on silicon chips will be able to do all the things that human brains can do—including inventing new science & tech from scratch, collaborating at civilization-scale, piloting teleoperated robots with great skill after very little practice, etc.—is not only a plausible idea but (I claim) almost certainly true. Human brains do not work by some magic forever beyond the reach of science.
“So what?” Well, I want everyone to be on the same page that this is a big friggin’ deal—an upcoming transition whose consequences for the world are much much bigger than the invention of the internet, or even the industrial revolution. A separate question is what (if anything) we ought to do with that information. Are there laws we should pass? Is there technical research we should do? I don’t think the answers are obvious, although I sure have plenty of opinions. That’s all outside the scope of this little post though.
The question-asker here reads too much like a caricature. That may be representative of how people react in the real world, but it still gives off a bad vibe here.
I recommend making the question-asker’s personality look more like the question-asker in Scott Alexander’s Superintelligence FAQ. Should be a quick fix.
Great image, BTW! I don’t think it’s the final form but it’s a great idea worthy of refinement.
Even so, one of the most common objections I hear is simply “it sounds like weird sci-fi stuff”, followed by dismissing the idea as totally impossible. Honestly, this really does seem to be how people react to it!
My thinking is that most people implicitly ask “how weird does something have to be before it stops being true (or at least becomes less likely to be true)?”, without realizing that particle physics demonstrated long ago that there just isn’t any such limit.
I was like this for an embarrassingly long time: lightcones and Grabby Aliens? Of course those were real, just look at them. But philosophy? Consciousness ethics? Nah, that’s a bunch of bunk, or at least someone else’s problem.
Thanks. I made some edits to the questions. I’m open to more suggestions.
This is already version 3 of that image (see v1, v2), but I’m very open to suggestions on that too.
For what it’s worth, I’m skeptical that the dichotomy you set up is meaningful, or coherent. For example, I tend to think future AI will be both “like today’s AI but better” and “like the arrival of a new intelligent species on our planet”. I don’t see any contradiction in those statements.
To the extent the two columns evoke different images of future AI, I think it mostly reflects a smooth, quantitative difference: how many iterations of improvement are we talking? After you make the context windows sufficiently long, add a few more modalities, give them a robot body, and improve their reasoning skills, LLMs will just look a lot like “a new intelligent species on our planet”. Likewise, agency exists on a spectrum, and will likely be increased incrementally. The point at which you start to call an LLM an “agent” rather than a “tool” is subjective. This just seems natural to me, and I feel I see a clear path forward from current AI to the right-column AI.
I think you’re not the target audience for this post.
Pick a random person on the street who has used chatGPT, and ask them to describe a world in which we have AI that is “like chatGPT but better”. I think they’ll describe a world very much like today’s, but where chatGPT hallucinates less and writes better essays. I really don’t think they’ll describe the right-column world. If you’re imagining the right-column world, then great! Again, you’re not the target audience.