Maybe part of what’s up, if not the deepest generator, is that Altman is embedded in software-engineer-land, and more generally maker-land. In maker-land, things don’t do impressive stuff without you specifically trying to get them to do that stuff; and the hard work is always getting it to do that stuff, even though figuring out what stuff you want is also hard. Sure, software can do weird, counterintuitive stuff; but if the stuff is impressive, it’s because a human was trying to get it to do that. (We can find some real-world counterexamples, like stuff that “AI” has “invented”; but the general milieu still looks like this, I think.)
Altman may also just not be as plugged into the on-the-ground AI details as you expect. He’s a startup CEO/growth hacker guy, not a DL researcher.
I remember vividly reading one of his tweets last year, enthusiastically talking about how he’d started chatting with GPT-3 and it was impressing him with its intelligence. It was an entirely unremarkable ‘My First GPT-3’ tweet, the sort of thing everyone who gets API access quickly discovers about how it defies one’s expectations… unremarkable aside from the fact that it was sent somewhere around June 20, 2020: something like a week or two after I posted most of my initial parodies and demos, a month after the paper was uploaded, and who knows how long after GPT-3 was trained.
I remember thinking, “has the CEO of OpenAI spent all this time overseeing and starting up the OA API business… without actually using GPT-3?” I can’t prove it, but it would explain why his early Twitter mentions were so perfunctory if, say, he had received reports that the new model was pretty useful and could be used for practical tasks, but, while he was busy running around overseeing productization & high-level initiatives like licensing the GPT-3 source to MS, no one took the time to emphasize the other parts: how freaky this GPT-3 thing was, what those meta-learning charts meant, or that maybe he ought to try it out firsthand. (The default response to GPT-3 is, after all, to treat it as trivial and boring. Think of how many DL NLP researchers read the GPT-3 paper and dismissed it snidely before the API came out.)
I remember vividly reading one of his tweets last year, enthusiastically talking about how he’d started chatting with GPT-3 and it was impressing him with its intelligence.
Are you thinking of this tweet? I believe that was meant to be a joke. His actual position at the time appeared to be that GPT-3 is impressive but overhyped.
I don’t believe that was it. That was very obviously sarcastic, and it was in July, a month after the period I am thinking of (plus it has no chatbot connection), which is an eternity in this context: by late July, as people got into the API and saw it for themselves, more people than just me had been banging the drum about GPT-3 being important, and there was even some genuine GPT-3 overhyping going on, and that is what Sam-sama was pushing back against with those late July 2020 tweets. If you want to try to dig it up, you’ll need to go further back than that.
Searching his Twitter, he barely seems to have mentioned GPT at all in 2020. Maybe he deleted some of his tweets?

He definitely didn’t mention it much (which is part of what gave me that impression—in general, Sam’s public output is always very light on the details of OA research). I dunno about deleting. Twitter search is terrible; I long ago switched to searching my exported profile dump when I need to refind an old tweet of mine.
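(For concreteness, a minimal sketch of what searching an exported Twitter archive can look like. The data/tweets.js path, the window.YTD prefix-stripping, and the field names are assumptions about the standard archive export layout, not a description of anyone’s actual setup.)

```python
#!/usr/bin/env python3
"""Search a downloaded Twitter/X archive export for tweets matching a pattern.

Assumes the archive contains data/tweets.js, a JS file of the form
`window.YTD.tweets.part0 = [ ... ]` wrapping a JSON array; older exports
may call the file tweet.js instead.
"""
import json
import re
import sys


def load_tweets(path):
    raw = open(path, encoding="utf-8").read()
    # Strip the "window.YTD....part0 = " prefix to leave plain JSON.
    return json.loads(raw[raw.index("["):])


def search(tweets, pattern):
    rx = re.compile(pattern, re.IGNORECASE)
    for entry in tweets:
        tweet = entry.get("tweet", entry)  # newer exports nest under "tweet"
        text = tweet.get("full_text") or tweet.get("text", "")
        if rx.search(text):
            yield tweet.get("created_at", "?"), text


if __name__ == "__main__":
    archive_path, query = sys.argv[1], sys.argv[2]
    for created_at, text in search(load_tweets(archive_path), query):
        print(created_at, "|", text.replace("\n", " "))
```

Run as e.g. `python search_archive.py data/tweets.js "GPT-3"`; a plain grep over the same file works nearly as well if you only need a quick keyword match.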
In maker-land, things don’t do impressive stuff without you specifically trying to get them to do that stuff; and the hard work is always getting it to do that stuff, even though figuring out what stuff you want is also hard.
I think it’s more that in maker-land, the sign of the impact usually doesn’t seem to matter much for gaining wealth/influence/status. Usually, if your project has a huge impact on the world—and you’re not going to jail—you win.
What are some examples of makers who gained wealth/influence/status by having a huge negative impact on the world?

The marketing-software company Salesforce was founded in Silicon Valley in ’99 and has been hugely successful. It’s often ranked as one of the best companies in the U.S. to work for. I went to one of their conferences recently, and the whole thing was a massive status display: they’d built an arcade with Salesforce-themed video games just for that one conference, and had a live performance by Gwen Stefani, among other things.
...But the marketing industry is one massive collective action problem. It consumes a vast amount of labor and resources, distorts the market in a way that harms healthy competition, creates incentives for social media to optimize for engagement rather than quality, and develops dangerous tools for propagandists, all while producing nothing of value in aggregate. Without our massive marketing industry, we’d have to pay a subscription fee or a tax for services like Google and Facebook, but everything else would be cheaper in a way that would necessarily dwarf that cost (since the vast majority of the cost of marketing doesn’t go to useful services), and we’d probably have a much less sensationalist media on top of that.
People in Silicon Valley are absolutely willing to grant status to people who gained wealth purely by exploiting collective action problems.
(Not saying Altman hasn’t thought about arguments that AGI might do stuff we don’t try to get it to do; but I am making a hypothesis about what would hold weight in his priors.)