The area where I’d be most excited to see philosophical work is “when should we be sad if AI takes over, vs. being happy for it?” This seems like a natural ethical question that could have significant impacts on prioritization. Moreover, if the answer is “we should be fine with some kinds of AI taking over” then we can try to create that kind of AI as an alternative to creating aligned AI.