One hypothesis I’ve had is that people with more MIRI-like views tend to be more arrogant themselves. A possible mechanism is that the idea that the world is going to end and that they are the only ones who can save it is appealing in a way that shifts their views on certain questions and changes the way they think about AI (e.g. they need less convincing that they are some of the most important people ever, so they spend less time considering why AI might go well by default).
[ETA: In case it wasn’t clear, I am positing subconscious patterns correlated with arrogance that lead to MIRI-like views]