I don’t understand in what way the linked comment says anything about interference from alternative Everett branches. Did you mean to link to something else?
I’m not sure what the majority view is on Less Wrong, but none of the people I have met in real life advocate making decisions based on (very) small probabilities of (very) large utility fluctuations. I think AI has a probability of at least 1% of destroying most human value under the status quo. 1% is a large enough number that it’s reasonable to care a lot, although it’s also small enough that it’s reasonable not to care. However, I also think that the probability is at least 20%, and that is large enough that I think it is unreasonable not to care (assuming that preservation of humanity is one of your principal terminal values, which it may or may not be).
Does this mean that I’m going to drop out of college to work at SingInst? No, because that would close a lot of doors. Does it mean that I’m seriously reconsidering my career path? Yes, and I am reasonably likely to act on those considerations.
I think AI has a probability of at least 1% of destroying most human value under the status quo. 1% is a large enough number that it’s reasonable to care a lot, although it’s also small enough that it’s reasonable not to care. However, I also think that the probability is at least 20%
Without machine intelligence, every single human alive today dies.
One wonders how that value carnage would be quantified—using the same scale.
However, I also think that the probability is at least 20%, and that is large enough that I think it is unreasonable not to care (assuming that preservation of humanity is one of your principal terminal values, which it may or may not be).
I agree.
I’m not sure what the majority view is on Less Wrong, but none of the people I have met in real life advocate making decisions based on (very) small probabilities of (very) large utility fluctuations.
No, I think some people here do use the 20%+ estimate on risks from AI and act according to some of its logical implications. See here, which is the post that the comment I linked to was discussing. I chose that post because it resembled ideas put forth in another post on Less Wrong that was banned because of the perceived risks and because people got nightmares from it.