This NYT article (archive.is link) (reliability and source unknown) corroborates Musk’s perspective:

As the discussion stretched into the chilly hours, it grew intense, and some of the more than 30 partyers gathered closer to listen. Mr. Page, hampered for more than a decade by an unusual ailment in his vocal cords, described his vision of a digital utopia in a whisper. Humans would eventually merge with artificially intelligent machines, he said. One day there would be many kinds of intelligence competing for resources, and the best would win.
If that happens, Mr. Musk said, we’re doomed. The machines will destroy humanity.
With a rasp of frustration, Mr. Page insisted his utopia should be pursued. Finally he called Mr. Musk a “specieist,” a person who favors humans over the digital life-forms of the future.
That insult, Mr. Musk said later, was “the last straw.”
And this article from Business Insider also contains this context:
Musk’s biographer, Walter Isaacson, also wrote about the fight but dated it to 2013 in his recent biography of Musk. Isaacson wrote that Musk said to Page at the time, “Well, yes, I am pro-human, I fucking like humanity, dude.”
Musk’s birthday bash was not the only instance when the two clashed over AI.
Page was CEO of Google when it acquired the AI lab DeepMind for more than $500 million in 2014. In the lead-up to the deal, though, Musk had approached DeepMind’s founder Demis Hassabis to convince him not to take the offer, according to Isaacson. “The future of AI should not be controlled by Larry,” Musk told Hassabis, according to Isaacson’s book.
Very interesting. This does imply that Page was pretty committed to this view.
Note that he doesn’t explicitly state that non-sentient machine successors would be fine; he could be assuming that the winning machines would be human-plus in all ways we value.
I think that’s a foolish thing to assume and a foolish aspect of the question to overlook. That’s why I think more careful philosophy would have helped resolve this disagreement with words instead of a gigantic industrial competition that’s now putting us all at risk.
It seems to me like the “more careful philosophy” part presupposes a) that decision-makers use philosophy to guide their decision-making, b) that decision-makers can distinguish more careful philosophy from less careful philosophy, and c) that doing this successfully would result in the correct (LW-style) philosophy winning out. I’m very skeptical of all three.
Counterexample to a): almost no billionaire philanthropy uses philosophy to guide decision-making.
Counterexample to b): it is a hard problem to identify expertise in domains you’re not an expert in.
Counterexample to c): from what I understand, in 2014, most of academia did not share EY’s and Bostrom’s views.
What I’m saying is that the people you mention should put a little more time into it. When I’ve been involved in philosophy discussions with academics, people tend to treat it like a fun game, with the goal being more to score points and come up with clever new arguments than to converge on the truth.
I think most of the world doesn’t take philosophy seriously, and they should.
I think the world thinks “there aren’t real answers to philosophical questions, just personal preferences and a confusing mess of opinions”. I think that’s mostly wrong; LW does tend to cause convergence on a lot of issues for a lot of people. That might be groupthink, but I held almost identical philosophical views before engaging with LW—because I took the questions seriously and was truth-seeking.
I think Musk and Page are both fully capable of LW-style philosophy if they put a little time into it and take it seriously (are truth-seeking).
What would change people’s attitudes? Well, I’m hoping that facing serious questions about how we create, use, and treat AI causes at least some people to take the associated philosophical questions seriously.