Seeking Estimates for P(Hell)
I am trying to decide how to allocate my charitable donations between GiveWell’s top charities and MIRI, and I need a probability estimate to make an informed decision. Could you help me?
Background on my moral system: I place greater value on reducing intense suffering in conscious entities than on merely preventing death. An unexpected, instant, painless death is unfortunate, but I would prefer it to a painful, chronic condition.
Given my beliefs, it follows logically that I would pay a relatively large amount to save a conscious entity from prolonged torture.
The possibility of an AI torturing many conscious entities has been mentioned[1] on this site, and I assume that funding MIRI will help reduce that probability. But what is its current probability?
Obviously a difficult question, but it seems to me that I need an estimate and there is no way around it. I don't even know where to start... Suggestions?
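For concreteness, here is a minimal sketch of how I would use such an estimate; the symbols are my own placeholders, not established figures. Let $\Delta p$ be the reduction in P(Hell) per marginal dollar donated to MIRI, $D$ the disvalue I assign to the hell outcome, and $s$ the suffering averted per marginal dollar by a GiveWell top charity, measured in the same units as $D$. Then a marginal dollar goes to MIRI rather than GiveWell when

$$\Delta p \cdot D > s.$$

Since $\Delta p$ can be at most the current probability itself, even a rough P(Hell) estimate bounds the left-hand side, which is why I need a number to start from.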
[1] http://lesswrong.com/lw/1pz/the_ai_in_a_box_boxes_you/