Very interesting. This does imply that Page was pretty committed to this view.
Note that he doesn’t explicitly state that non-sentient machine successors would be fine; he could be assuming that the winning machines would be human-plus in all ways we value.
I think that’s a foolish thing to assume and a foolish aspect of the question to overlook. That’s why I think more careful philosophy would have helped resolve this disagreement with words instead of a gigantic industrial competition that’s now putting us all at risk.
It seems to me like the “more careful philosophy” part presupposes a) that decision-makers use philosophy to guide their decision-making, b) that decision-makers can distinguish more careful philosophy from less careful philosophy, and c) that doing this successfully would result in the correct (LW-style) philosophy winning out. I’m very skeptical of all three.
Counterexample to a): almost no billionaire philanthropy uses philosophy to guide decision-making.
Counterexample to b): it is a hard problem to identify expertise in domains you’re not an expert in.
Counterexample to c): from what I understand, in 2014, most of academia did not share EY’s and Bostrom’s views.
What I’m saying is that the people you mention should put a little more time into it. When I’ve been involved in philosophy discussions with academics, people tend to treat it like a fun game, with the goal being more to score points and come up with clever new arguments than to converge on the truth.
I think most of the world doesn’t take philosophy seriously, and they should.
I think the world thinks “there aren’t real answers to philosophical questions, just personal preferences and a confusing mess of opinions”. I think that’s mostly wrong; LW does tend to cause convergence on a lot of issues for a lot of people. That might be groupthink, but I held almost identical philosophical views before engaging with LW—because I took the questions seriously and was truth-seeking.
I think Musk or Page are fully capable of LW-style philosophy if they put a little time into it—and took it seriously (were truth-seeking).
What would change people’s attitudes? Well, I’m hoping that facing serious questions about how we create, use, and treat AI does cause at least some people to take the associated philosophical questions seriously.