On The New York Times’s Ezra Klein Show podcast, Klein interviews OpenAI CEO Sam Altman on the future of AI (archived with transcript). (Note the nod to Slate Star Codex in the reading recommendations at the end.)
My favorite (“favorite”) part of the transcript was this:
EZRA KLEIN: Do you believe in 30 years we’re going to have self-intelligent systems going off and colonizing the universe?
SAM ALTMAN: Look, timelines are really hard. I believe that will happen someday. I think it doesn’t really matter if it’s 10 or 30 or 100 years. The fact that this is going to happen, that we’re going to help engineer, or merge with or something, our own descendants that are going to be capable of things that we literally cannot imagine. That somehow seems way more important than the tax rate or most other things.
“Help engineer, or merge with, or something.” Indeed! I just wish there were some way to get Altman to more seriously consider that … the specific details of the “help engineer [...] or something” actually matter? Details that Altman himself may be unusually well-positioned to affect?
Maybe part of what’s up, if not the deepest generator, is that Altman is embedded in software-engineer-land and, more generally, maker-land. In maker-land, things don’t do impressive stuff without you specifically trying to get them to do that stuff; the hard work is always getting it to do that stuff, even though figuring out what stuff you want is also hard. Sure, software can do weird, counterintuitive stuff; but if the stuff is impressive, it’s because a human was trying to get it to do that. (We can find some real-world counterexamples, like stuff that “AI” has “invented”; but the general milieu still looks like this, I think.)
Altman may also just not be as plugged into the on-the-ground AI details as you’d expect. He’s a startup CEO/growth-hacker guy, not a DL researcher.
I remember vividly reading one of his tweets last year, enthusiastically talking about how he’d started chatting with GPT-3 and it was impressing him with its intelligence. It was an entirely unremarkable ‘My First GPT-3’ tweet, about the sort of thing everyone who gets API access quickly discovers about how it defies one’s expectations… unremarkable aside from the fact that it was sent somewhere around June 20th 2020, ie something like a week or two after I posted most of my initial parodies and demos, a month after the paper was uploaded, and who-knows-how-long after GPT-3 was trained.
I remember thinking, “has the CEO of OpenAI spent all this time overseeing and starting up the OA API business… without actually using GPT-3?” I can’t prove it, but it does explain why his early Twitter mentions were so perfunctory if, say, he had received the reports about how the new model was pretty useful and could be used for practical tasks, but, while he was busy running around overseeing productization & high-level initiatives like licensing the GPT-3 source to MS, no one took the time to emphasize the other parts like how freaky this GPT-3 thing was or what those meta-learning charts meant or that maybe he ought to try it out firsthand. (The default response to GPT-3 is, after all, to treat it as trivial and boring. Think of how many DL NLP researchers read the GPT-3 paper and dismissed it snidely before the API came out.)
Are you thinking of this tweet? I believe that was meant to be a joke. His actual position at the time appeared to be that GPT-3 is impressive but overhyped.
I don’t believe that was it. That was very obviously sarcastic, and it was in July, a month after the period I am thinking of (plus no chatbot connection), which is an eternity. By late July, as people got into the API and saw it for themselves, more people than just me had been banging the drum about GPT-3 being important, and there was even some genuine GPT-3 overhyping going on; that is what Sam-sama was pushing back against with those late July 2020 tweets. If you want to try to dig it up, you’ll need to go further back than that.
Searching his Twitter, he barely seems to have mentioned GPT at all in 2020. Maybe he deleted some of his tweets?
He definitely didn’t mention it much (which is part of what gave me that impression—in general, Sam’s public output is always very light about the details of OA research). I dunno about deleting. Twitter search is terrible; I long ago switched to searching my exported profile dump when I need to refind an old tweet of mine.
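(If it helps, the search itself is only a few lines once you have the archive. The sketch below assumes an export layout with a data/tweets.js file prefixed by a window.YTD assignment and tweets nested under “tweet”/“full_text” keys; those are assumptions about the archive format and may vary by export version.)

```python
# Sketch: search an exported Twitter archive for old tweets.
# Assumes the archive contains data/tweets.js, a JS assignment like
# "window.YTD.tweets.part0 = [...]"; the filename, prefix, and key names
# are assumptions about the export format and may differ between versions.
import json
import re
import sys

def load_tweets(path="data/tweets.js"):
    raw = open(path, encoding="utf-8").read()
    # Strip the leading "window.YTD.tweets.part0 = " so the rest parses as JSON.
    raw = re.sub(r"^window\.YTD\.[\w.]+\s*=\s*", "", raw)
    return json.loads(raw)

def search(tweets, pattern):
    rx = re.compile(pattern, re.IGNORECASE)
    for entry in tweets:
        tweet = entry.get("tweet", entry)                 # newer exports nest under "tweet"
        text = tweet.get("full_text", tweet.get("text", ""))
        if rx.search(text):
            yield tweet.get("created_at", "?"), text

if __name__ == "__main__":
    query = sys.argv[1] if len(sys.argv) > 1 else "GPT-3"
    for created, text in search(load_tweets(), query):
        print(created, "\t", text)
```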
I think it’s more that in maker-land, the sign of the impact does not appear to matter much for gaining wealth/influence/status: usually, if your project has a huge impact on the world (and you’re not going to jail), you win.
What are some examples of makers who gained wealth/influence/status by having a huge negative impact on the world?
The marketing company Salesforce was founded in Silicon Valley in ’99, and has been hugely successful. It’s often ranked as one of the best companies in the U.S. to work for. I went to one of their conferences recently, and the whole thing was a massive status display: they’d built an arcade with Salesforce-themed video games just for that one conference, and had a live performance by Gwen Stefani, among other things.
...But the marketing industry is one massive collective action problem. It consumes a vast amount of labor and resources, distorts the market in a way that harms healthy competition, creates incentives for social media to optimize for engagement rather than quality, and develops dangerous tools for propagandists, all while producing nothing of value in aggregate. Without our massive marketing industry, we’d have to pay a subscription fee or a tax for services like Google and Facebook, but everything else would be cheaper in a way that would necessarily dwarf that cost (since the vast majority of the cost of marketing doesn’t go to useful services), and we’d probably have a much less sensationalist media on top of that.
People in Silicon Valley are absolutely willing to grant status to people who gained wealth purely through collective action problems.
(Not saying Altman hasn’t thought about arguments that AGI might do stuff we don’t try to get it to do; I’m just hypothesizing about what would carry weight given his priors.)
The instruction-following model Altman mentions is documented here. I hadn’t noticed it had been released!
Is there an explanation somewhere of how it works?
I haven’t seen a writeup anywhere of how it was trained.
Neither have I. I vaguely recall a call for volunteers on the Slack very early on for crowdsourcing tasks/instruction-following prompts and completions, and I speculate this might be the origin: the instruction series may be simply a model finetuned on a small corpus of handcorrected or handwritten demonstrations of ‘following instructions’. If there’s any use of the fancy RL or preference learning work, they haven’t mentioned it that I’ve seen. (In the most recent finetuning paper, none of the examples look like generic ‘instructions’.)
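To make that guess concrete: if it really is just supervised finetuning, the whole dataset could be nothing more exotic than a file of prompt/completion pairs like the (invented) examples below. This is a sketch of the generic JSONL finetuning format, not anything OA has documented about how the instruct series was actually produced.

```python
# Speculative sketch: an instruction-following finetuning corpus could be just
# a JSONL file of handwritten prompt/completion demonstrations. The example
# pairs here are invented for illustration; nothing in this recipe requires
# RL or preference learning.
import json

demonstrations = [
    {"prompt": "Explain photosynthesis to a six-year-old.\n\n",
     "completion": " Plants eat sunlight: they turn light, air, and water into food, and breathe out the air we breathe in.\n"},
    {"prompt": "Translate to French: 'Where is the library?'\n\n",
     "completion": " Où est la bibliothèque ?\n"},
    {"prompt": "Summarize in one sentence: GPT-3 is a 175-billion-parameter language model trained on internet text.\n\n",
     "completion": " GPT-3 is a very large language model trained to predict internet text.\n"},
]

with open("instruct_demos.jsonl", "w", encoding="utf-8") as f:
    for demo in demonstrations:
        f.write(json.dumps(demo, ensure_ascii=False) + "\n")

# A file like this could then be fed to an ordinary supervised finetuning run
# over the base model, which would already push completions toward
# "do what the instruction says" rather than "continue the text".
```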
(Altman retweeting Richard Ngo’s alignment curriculum seems like a positive development …)
I agree that that section of the quote is really disconcerting, but I’d bet that this episode is still net good for AI safety: if it convinces a significant number of people that AI will plausibly be extremely powerful and disruptive within the next decade, that seems like a major step toward convincing them that similar AI could be gravely dangerous.
Created some PredictionBook predictions based on this:
In ten years, if I were to learn a new undergraduate level math subject, I’d choose to employ an AI system over a paid tutor.
In ten years, I will prefer speaking to an AI system over a human for legal advice.
In ten years, I will prefer speaking to an AI system over a human for medical advice.