That’s basically it. What’s missing here is probabilities. I don’t need FAI research to have a high enough probability of helping to be considered “reasonable” in order to believe that it is still the best action. Similarly, I don’t need to believe that AGI will be developed in the next one or even a few hundred years for it to be urgent. The expected value is dominated by the negative utility if we do nothing (the loss of virtually all utility forever) and by my belief that UFAI is the default outcome (high probability). I do, however, believe that AGI could be developed soon; that simply adds to the urgency.
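To make the dominance claim concrete, here is a minimal sketch of the expected-value comparison. All of the numbers are hypothetical placeholders I’ve chosen purely for illustration (not values anyone in this exchange actually endorses): the point is only to show how the astronomically negative utility term swamps the calculation even when the probability that research helps is small.

```python
# Illustrative expected-value comparison; every number below is a
# hypothetical placeholder chosen only to show the dominant term.

p_ufai_default = 0.9      # assumed: probability UFAI is the default outcome
p_research_helps = 0.01   # assumed: small chance FAI research averts UFAI
u_ufai = -1e15            # assumed: loss of virtually all future utility
u_ok = 0.0                # baseline utility if UFAI is averted

# Expected value of doing nothing: UFAI occurs at its default probability.
ev_do_nothing = p_ufai_default * u_ufai + (1 - p_ufai_default) * u_ok

# Expected value with research: the research slightly reduces P(UFAI).
p_ufai_with_research = p_ufai_default * (1 - p_research_helps)
ev_research = p_ufai_with_research * u_ufai + (1 - p_ufai_with_research) * u_ok

print(f"do nothing: {ev_do_nothing:.3e}")                # -9.000e+14
print(f"research:   {ev_research:.3e}")                  # -8.910e+14
print(f"gain:       {ev_research - ev_do_nothing:.3e}")  # +9.000e+12
```

Even a 1% chance of helping yields an enormous expected gain under these assumptions; this is the sense in which the conclusion is insensitive to the exact probability that the research “works.”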
Cool, glad I understood. Yes, the argument could be made more specific with probabilities. At this stage, I’m deliberately being vague because that allows for more flexibility: there are multiple ways you can assign probabilities and values to the premises such that they support the conclusion, and I don’t want to specify just one of them at the expense of the others.
If I get to the end of the project, I plan to consider the argument in detail, at which point I will start to give more specific (though certainly not precise) probabilities for the different premises.