It seems to me like the “more careful philosophy” part presupposes a) that decision-makers use philosophy to guide their decision-making, b) that decision-makers can distinguish more careful philosophy from less careful philosophy, and c) that doing this successfully would result in the correct (LW-style) philosophy winning out. I’m very skeptical of all three.
Counterexample to a): almost no billionaire philanthropy uses philosophy to guide decision-making.
Counterexample to b): it is a hard problem to identify expertise in domains you’re not an expert in.
Counterexample to c): from what I understand, in 2014, most of academia did not share EY’s and Bostrom’s views.
What I’m saying is that the people you mention should put a little more time into it. When I’ve been involved in philosophy discussions with academics, people tend to treat it like a fun game, with the goal being more to score points and come up with clever new arguments than to converge on the truth.
I think most of the world doesn’t take philosophy seriously, and they should.
I think the world thinks “there aren’t real answers to philosophical questions, just personal preferences and a confusing mess of opinions”. I think that’s mostly wrong; LW does tend to cause convergence on a lot of issues for a lot of people. That might be groupthink, but I held almost identical philosophical views before engaging with LW—because I took the questions seriously and was truth-seeking.
I think Musk or Page would be fully capable of LW-style philosophy if they put a little time into it and took it seriously (were truth-seeking).
What would change people’s attitudes? Well, I’m hoping that facing serious questions about how we create, use, and treat AI will cause at least some people to take the associated philosophical questions seriously.