That is not my experience at all. Maybe it is because my friends from outside of the AI community are also outside of the tech bubble, but I’ve seen a lot of pessimism recently about the future of AI. In fact, they seem to readily accept both the orthogonality thesis and the instrumental convergence thesis. Although I avoid delving into the topic of human extinction, since I don’t want to harm anyone’s mental health, on the rare occasions when it does come up they seem to easily agree that this is a non-trivial possibility.
I guess the main reason is that, since they are outside of the tech bubble, they don’t think that worrying about AI risk means being a Luddite, not truly understanding AI, or anything like that. Moreover, since none of them works in AI, they don’t take any personal offense at the suggestion that advances in capabilities may greatly harm humanity.