I sometimes feel we spend too much time on philosophy and communication in the x-risk community. But thinking through the OpenAI drama suggests that it’s crucial.
Now the world is in increasingly immediate danger because a couple of smart guys couldn’t get their philosophy or their communication right enough, and didn’t spend the time necessary to clarify. Instead Musk followed his combative and entrepreneurial instincts. The result dramatically heated up the race for AGI, in which DeepMind previously had no real competitor.
OpenAI wouldn’t have launched without Musk’s support, and he gave it because he was afraid of Larry Page being in charge of a successful Google AGI effort.
From Musk’s interview with Tucker Carlson (automated transcript, lightly cleaned up):
I mean, the reason OpenAI exists at all is that Larry Page and I used to be close friends, and I would stay at his house in Palo Alto and talk to him late into the night about AI safety. At least my perception was that Larry was not taking AI safety seriously enough. [Carlson: What did he say about it?] He really seemed to want a sort of digital superintelligence, basically a digital god, if you will. And at one point I said, well, what about, you know, we’re going to make sure humanity is okay here? And then he called me a speciesist.
Musk was afraid of what Page would do with AGI because Page called him a speciesist when they were talking about AGI safety. What did Page mean by this? He probably hadn’t worked it all the way through.
These guys stopped being friends, and Musk put a bunch of money and effort into developing an org that could rival DeepMind’s progress toward AGI.
That org was captured by Altman. But it was always based on a stupid idea: make AGI open source. That’s the dumbest thing you could do with something really dangerous—unless you believed that it would otherwise wind up in hands that just don’t care about humanity.
That’s probably not what Page meant. On consideration, he would probably have clarified that AI that includes what we value about humanity would be a worthy successor. He probably wasn’t even clear on his own philosophy at the time.
A little more careful conversation would’ve prevented this whole thing, and we’d be in a much better strategic position.
In my mind this also shows how immensely intelligent people can also do really dumb things outside of their area of intellectual expertise.
I agree that it sounds somewhat premature to write off Larry Page based on attitudes he had a long time ago, when AGI seemed more abstract and far away, and then not try to communicate with him again later on. If that were Musk’s true and only reason for founding OpenAI, then I agree that this was a communication fuckup.
However, my best guess is that this story about Page was interchangeable with any number of other plausible criticisms of his competitors in building AGI that Musk would likely have come up with in nearby worlds. People like Musk (and Altman too) tend to have a desire to do the most important thing, and the belief that they can do this thing a lot better than anyone else. On that assumption, it’s not too surprising that Musk found a reason for having to step in and build AGI himself. In fact, on this view, we should expect to see surprisingly little sincere exploration of “joining someone else’s project to improve it” solutions.
I don’t think this is necessarily a bad attitude. Sometimes people who think this way are right in the specific situation. It just means that we see the following patterns a lot:
Ambitious people start their own thing rather than join some existing thing.
Ambitious people have falling-outs with each other after starting a project together when the question of “who eventually gets de facto ultimate control” wasn’t totally specified from the start.
(Edited away a last paragraph that used to be here 50 minutes after posting. I wanted to express something like “Sometimes communication only prolongs the inevitable,” but that maybe sounds a bit too negative, because even if you’re going to fall out eventually, good communication can probably help make the falling-out less bad.)
I totally agree. And I also think that all involved are quite serious when they say they care about the outcomes for all of humanity. So I think in this case history turned on a knife edge; Musk would at least not have done this much harm had he and Page thought and communicated even a little more clearly.
But I do agree that there’s some motivated reasoning happening there, too. In support of your point that Musk might find an excuse to do what he emotionally wanted to anyway (become humanity’s savior and perhaps emperor for eternity): Musk did also express concern about DeepMind making Hassabis the effective emperor of humanity, which seems much stranger—Hassabis’ values appear to be quite standard humanist ones, so you’d think having him in charge of a project with the clear lead would be a best-case scenario for anything other than being in charge yourself. So yes, I do think Musk, Altman, and people like them also have some powerful emotional drives toward doing grand things themselves.
It’s a mix of motivations, noble and selfish, conscious and unconscious. That’s true of all of us all the time, but it becomes particularly salient and worth analyzing when the future hangs in the balance.
From my observations and experiences, I don’t see sincere ethics motivations anymore.
I see Elon gaslighting about the LLM-powered bot problem on the platform he bought.
Side note: bots interacting with humans and collecting novel real-time human data over time is hugely valuable from a training perspective. Having machines simulate all future data is fundamentally not plausible, because it can’t account for how humans actually evolve over generations.
X has also consistently taken the most profit-motivated actions, at users’ expense. For instance: allowing anyone to get a blue check by buying one. That does literally nothing, and X’s spin that it helps is deliberate deception, because these aren’t actually dumb guys. With the amount of stolen financial data and PII for sale on the black market, there’s zero added friction for scammers.
Scammers happen to have the highest profit margins, so what X has done actually makes it harder for ethics to prevail. Over time, I, an ethical entrepreneur or artist, must constantly compromise and adopt less ethical tactics to stay competitive with the ever-accelerating crop of crooks scaling their LLM botnets against one another. It’s a forcing function.
Why would X do this? Profit. As that scenario scales, so do their earnings.
(Plus they get to train models on all that data, which only they own.)
Is anything fundamentally flawed in that logic? 👆
Let’s look at OpenAI, and more importantly, their chosen “experimental partnership programs” (which means: companies they give access to unreleased models the public can’t use).
Just about every major YC or Silicon Valley venture-backed player that has emerged over the past two years has had this “partnership” privilege. Meanwhile, all the LLM bots being deployed untraceably are pushing a heavy message of “Build In Public” (a message all the top execs at funds and frontier labs also espouse).
So… billionaires and giant corporations get to collude to keep the most capable inference power amongst themselves, while encouraging small businesses and inventors to publish all their tech?
That’s what I did. I never got a penny, despite inventing an objectively best-in-class tech stack, all of which is on the timeline and easily verifiable.
But I can’t compete. And guess who emerged mere months after I had my breakthrough? I document it here: https://youtu.be/03S8QqNP3-4?si=chgiBocUkDn-U5E6
Anyway, I’m not here to cause any trouble or shill a sob story. But I think you’re wrong in your characterization of the endemic capitalistic players. And mind you, they’re all collectively in control of AGI, while actively making these choices and taking these actions against actual humans (me, and I’m sure others I can’t connect with because the algorithms don’t favor me).
I think you’re assuming a sharp line between sincere ethics motivations and self-interest. In my view, that line doesn’t usually exist. People are prone to believe things that suit their self-interest, and that motivated reasoning is the biggest problem with public discourse. People aren’t lying, they’re just confused. I think Musk definitely, and probably even Altman, believes they’re doing the best thing for humanity—they’re just confused and not putting in the effort to get un-confused.
I’m really sorry all of that happened to you. Capitalism is a harsh system, and humans are harsh beings when we’re competing. And confused beings, even when we’re trying not to be harsh. I didn’t have time to go through your whole story, but I fully believe you were wronged.
I think most villains are the heroes of their own stories. Some of us are more genuinely altruistic than others—but we’re all confused in our own favor to one degree or another.
So reducing confusion while playing to everyone’s desire to be a hero is one route to survival.
Musk did also express concern about DeepMind making Hassabis the effective emperor of humanity, which seems much stranger—Hassabis’ values appear to be quite standard humanist ones, so you’d think having him in charge of a project with the clear lead would be a best-case scenario for anything other than being in charge yourself.
It seems the concern was that DeepMind would create a singleton, whereas their vision was for many people (potentially with different values) to have access to it. I don’t think that’s strange at all—it’s only strange if you assume that Musk and Altman would believe that a singleton is inevitable.
Musk:
If they win, it will be really bad news with their one mind to rule the world philosophy.
Altman:
The mission would be to create the first general AI and use it for individual empowerment—ie, the distributed version of the future that seems the safest.
That makes sense under certain assumptions—I find them so foreign I wasn’t thinking in those terms. I find this move strange if you worry about either alignment or misuse. If you hand AGI to a bunch of people, one of them is prone to either screw up and release a misaligned AGI, or deliberately use their AGI to self-improve and either take over or cause mayhem.
To me these problems both seem highly likely. That’s why the move of responding to concern over AGI by making more AGIs makes no sense to me. I think a singleton in responsible hands is our best chance at survival.
If you think alignment is so easy nobody will screw it up, or if you believe the offense-defense balance will hold strongly enough that many good AGIs can safely counter a few misaligned or misused ones, then sure. I just don’t think either of those is a very plausible view once you’ve thought back and forth through things.
Cruxes of disagreement on alignment difficulty explains why I think anybody who’s sure alignment is super easy is overconfident (as is anyone who’s sure it’s really, really hard); we just haven’t done enough analysis or experimentation yet.
If we solve alignment, do we die anyway? addresses why I think the offense-defense balance is almost guaranteed to shift toward offense with self-improving AGI, so a massively multipolar scenario dooms us to misuse.
My best guess is that people who think open-sourcing AGI is a good idea either are thinking only of weak “AGI” and not the next step to autonomously self-improving AGI, or they’ve taken an optimistic guess at the offense-defense balance with many human-controlled real AGIs.
This NYT article (archive.is link) (reliability and source unknown) corroborates Musk’s perspective:
As the discussion stretched into the chilly hours, it grew intense, and some of the more than 30 partyers gathered closer to listen. Mr. Page, hampered for more than a decade by an unusual ailment in his vocal cords, described his vision of a digital utopia in a whisper. Humans would eventually merge with artificially intelligent machines, he said. One day there would be many kinds of intelligence competing for resources, and the best would win.
If that happens, Mr. Musk said, we’re doomed. The machines will destroy humanity.
With a rasp of frustration, Mr. Page insisted his utopia should be pursued. Finally he called Mr. Musk a “specieist,” a person who favors humans over the digital life-forms of the future.
That insult, Mr. Musk said later, was “the last straw.”
And this article from Business Insider also contains this context:
Musk’s biographer, Walter Isaacson, also wrote about the fight but dated it to 2013 in his recent biography of Musk. Isaacson wrote that Musk said to Page at the time, “Well, yes, I am pro-human, I fucking like humanity, dude.”
Musk’s birthday bash was not the only instance when the two clashed over AI.
Page was CEO of Google when it acquired the AI lab DeepMind for more than $500 million in 2014. In the lead-up to the deal, though, Musk had approached DeepMind’s founder Demis Hassabis to convince him not to take the offer, according to Isaacson. “The future of AI should not be controlled by Larry,” Musk told Hassabis, according to Isaacson’s book.
Very interesting. This does imply that Page was pretty committed to this view.
Note that he doesn’t explicitly state that non-sentient machine successors would be fine; he could be assuming that the winning machines would be human-plus in all ways we value.
I think that’s a foolish thing to assume and a foolish aspect of the question to overlook. That’s why I think more careful philosophy would have helped resolve this disagreement with words instead of a gigantic industrial competition that’s now putting us all at risk.
It seems to me like the “more careful philosophy” part presupposes a) that decision-makers use philosophy to guide their decision-making, b) that decision-makers can distinguish more careful philosophy from less careful philosophy, and c) that doing this successfully would result in the correct (LW-style) philosophy winning out. I’m very skeptical of all three.
Counterexample to a): almost no billionaire philanthropy uses philosophy to guide decision-making.
Counterexample to b): it is a hard problem to identify expertise in domains you’re not an expert in.
Counterexample to c): from what I understand, in 2014, most of academia did not share EY’s and Bostrom’s views.
What I’m saying is that the people you mention should put a little more time into it. When I’ve been involved in philosophy discussions with academics, people tend to treat it like a fun game, with the goal being more to score points and come up with clever new arguments than to converge on the truth.
I think most of the world doesn’t take philosophy seriously, and they should.
I think the world thinks “there aren’t real answers to philosophical questions, just personal preferences and a confusing mess of opinions”. I think that’s mostly wrong; LW does tend to cause convergence on a lot of issues for a lot of people. That might be groupthink, but I held almost identical philosophical views before engaging with LW—because I took the questions seriously and was truth-seeking.
I think Musk and Page are fully capable of LW-style philosophy if they put a little time into it—and took it seriously (were truth-seeking).
What would change people’s attitudes? Well, I’m hoping that facing serious questions about how we create, use, and treat AI will cause at least some people to take the associated philosophical questions seriously.
That’s probably not what Page meant. On consideration, he would probably have clarified that AI that includes what we value about humanity would be a worthy successor. He probably wasn’t even clear on his own philosophy at the time.
I don’t see reasons to be so confident in this optimism. If I recall correctly, Robin Hanson explicitly believes that putting any constraints on future forms of life, including on their values, is undesirable/bad/regressive, even though a lack of such constraints would eventually lead to a future with no trace of humanity left. Similarly for Beff Jezos and other hardcore e/acc: they believe that a worthy future involves making a number go up, a number corresponding to some abstract quantity like “entropy” or “complexity of life” or something, and that if making it go up involves humanity going extinct, too bad for humanity.
Which is to say: there are existence proofs that people can hold such beliefs, and can retain them across many years and in the face of what’s currently happening.
I can readily believe that Larry Page is also like this.
Maybe Page does believe that. But I think it’s nearly a self-contradictory position, and Page is a smart guy, so with more careful thought his beliefs are likely to converge on the more common view here on LW: replacing humanity might be OK only if our successors enjoy the world in pretty much the same way we do, or better.
I think people who claim to not care whether our successors are conscious are largely confused, which is why doing more philosophy would be really valuable.
Beff Jezos is exactly my model. Digging through his writings, I found him at one point explicitly stating that he was referring to machine offspring with some sort of consciousness or enjoyment when he says humanity should be replaced. In other places he’s not clear on it. It’s bad philosophy, because the philosophy takes a backseat to the arguments.
This is why I want to assume that Page would converge to the common belief: so we don’t mark people who seem to disagree with us as enemies, and drive them away from doing the careful, collaborative thinking that would get our beliefs to converge.
Addendum on why I think beliefs on this topic converge with additional thought: I don’t think there’s a universal ethics, but I do think that humans have built-in mechanisms that tend to make us care about other humans. Assuming we’d care about something that acts sort of like a sentient being, but internally just isn’t one, is an easy mistake to make without managing to imagine that scenario in adequate detail.
I’m not familiar with the details of Robin’s beliefs in the past, but it sure seems lately he is entertaining the opposite idea. He’s been spending a lot of words on cultural drift recently, mostly characterizing it negatively. His most recent post on the subject is Betrayed By Culture.
The passage you quote earlier suggests that they had multiple lengthy conversations.
Quick discussions via email are not strong evidence of a lack of careful discussion and reflection in other contexts.
I agree that there was a lot more to that exchange than that quick summary.
My point was that there wasn’t enough of it, or it wasn’t careful enough.
Do you have any thoughts on what this actionably means? To me it seems like being able to influence such conversations is potentially a bit intractable, but maybe one could host forums and events for this if one has the right network?
I think it’s a good point, and I’m wondering what it looks like in practice. I can see it working for someone with the right contacts; is the message for people who don’t have that network to go create it, or what are your thoughts there?
This is a great question. I think what we can do is spread good logic about AGI risks. That is tricky. Outside of the LW audience, getting the emotional resonance right is more important than being logically correct. And that’s a whole different skill.
My impression is that Yudkowsky has harmed public epistemics in his podcast appearances by saying things forcefully and with rather poor spoken communication skills for novice audiences. Leahy is better, but may also be making things worse by occasionally losing his cool and coming off as a bit of an asshole. People then associate the whole idea of AI safety with “these guys who talk down to us and seem mean and angry”. Then motivated reasoning kicks in, and they’re oriented toward trying to prove them wrong instead of discovering the truth.
That doesn’t mean logical arguments don’t count with normies; they do. But logic comes into play a lot more when emotional processing hasn’t already tagged you as dangerous or an enemy.
So just repeating the basic arguments of “something smarter will treat us like we do animals by default” and “surely we all want the things we love now to survive AGI” while also being studiously nice is my best guess at the right approach.
I struggle to do this myself; it’s super frustrating to repeatedly be in conversations where people seem to be obstinately refusing to think about some pretty basic and obvious logic.
Maybe the logic will win out even if we’re not able to be nice about it, but I’m quite sure it will win out faster if we can be.
Repetition counts. Any worriers with any access to public platforms should probably be speaking publicly about this—as long as they’re trying hard to be nice.
Edit: to bring it back to this particular type of scenario: when someone says “let it rip, I don’t care if the winners aren’t human!”, that is the most important time to be nice and get curious instead of pointing out how stupid the take is. Just asking questions will lead most people to realize that they actually do value human-like consciousness and pleasant experiences, not just progress and competition in a Disneyland without children (ref at the end).
My impression is that Yudkowsky has harmed public epistemics in his podcast appearances by saying things forcefully and with rather poor spoken communication skills for novice audiences.
I recommend reading the YouTube comments on his recorded podcasts, rather than e.g. Twitter commentary from people with a pre-existing adversarial stance toward him (or AI risk questions writ large).
Good suggestion, thanks, and I’ll do that.
I’m not commenting on those who are obviously just grinding an axe; I’m commenting on the stance toward “doomers” from otherwise reasonable people. From my limited survey the brand of x-risk concern isn’t looking good, and that isn’t mostly a result of the amazing rhetorical skills of the e/acc community ;)