The marginal effect that donating a dollar to SIAI has on the probabilities of friendly AI being developed, and of human extinction.
P(eventual human extinction) looks enormous—since the future will be engineered. It depends on exactly what you mean, though. For example, is it still “extinction” if a future computer sucks the last million remaining human brains into the matrix? Or if it keeps their DNA around for the sake of its historical significance?
Also, what is a “friendly AI”? Say a future machine intelligence looks back on history—and tries to decide whether what happened was “friendly”. Is there some decision process it could use to determine this? If so, what is that process?
At any rate, the whole analysis here seems misconceived. The “extinction of all humans” could be awful—or wonderful—depending on the circumstances and on the perspective of the observer. Values are not really objective facts that can be estimated and agreed upon.
For example, is it still “extinction” if all humans have voluntarily [1] changed into things we can’t imagine?
[1] I sloppily assume that choice hasn’t changed too much.