You claim we should pay more attention to small probabilities?
Not really, but I stopped talking about that openly because I have no reason beyond my intuition not to take small probabilities into account. If you asked what I believe to be the right thing to do in the same way you might ask what sort of ice cream I like most, I would answer that risks from AI are something one should keep at the back of one's mind until further notice. I would say that someone trying to mitigate risks from AI now is like someone who would have tried to stop global warming back in the 16th century. But that is not a position I could support with evidence or arguments, other than by pointing to problems like 'The Infinitarian Challenge to Aggregative Ethics' or 'Pascal's Mugging'. Those problems hint at the possibility that something is very wrong with the whole business of low-probability risks. But is that enough? I have no idea.
In most of my submissions on LW I am trying to provoke feedback, in order to learn about the underlying reasons and thought processes that led people to accept the framework of beliefs supported here, and why others, who are not associated with this community, think it is bogus. Sadly, it always ends with both sides calling each other "idiots".