I’m not sure I follow -
P3 says that FAI research has a reasonable chance of success. Presumably you believe that at least a weak version of P3 must be true, because otherwise there’s no expected value to researching FAI (unless you just enjoy reading the research?).
Something similar can be said about P1. As I note below the main argument, you can weaken P1, but you surely need at least a weakened version of it.
Is that what you mean? That you only need very weak versions of P1 and P3 for C2 to follow, so that if AI is even slightly possible and FAI research has even a small chance of success, then C2 follows anyway.
If so, that’s fine, but then you put a lot of weight on P2, as well as on two unstated premises: P2a (global catastrophic risks are very bad) and P4a (we should decide on funding based on expected value calculations, even in cases of small probabilities and high gains).
Further, note that the more you lower the expected value of FAI research, the harder it becomes to support P4. There are lots of things we would like to fund (other global catastrophic risk research, literature, nice food, understanding the universe, etc.), and FAI research needs a high enough expected value that we should spend our time on it rather than on these other things. As such, the expected value doesn’t just need to be high enough that in an ideal world we would want to do FAI research, but high enough that we ought to do the research in this world.
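To make that comparison concrete, here is a minimal sketch, with entirely made-up figures standing in for the expected values and costs of the alternatives; none of these numbers come from the argument itself:

```python
# Entirely hypothetical figures, used only to illustrate the prioritisation
# point; they are not estimates anyone in this discussion has made.
options = {
    "FAI research": {"expected_value": 50.0, "cost": 10.0},
    "other GCR research": {"expected_value": 30.0, "cost": 5.0},
    "literature, food, basic science": {"expected_value": 8.0, "cost": 1.0},
}

# It isn't enough for FAI research to have positive expected value; per unit
# of resources it has to beat the best alternative use of those resources.
for name, o in options.items():
    print(name, o["expected_value"] / o["cost"])
```

The point of the sketch is only that P4 turns on this ratio comparison, not on FAI research having positive expected value in isolation.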
If that’s what you’re saying, that’s fine, but by putting so much weight on fewer premises you risk failing to convince other people of the importance of FAI research. If that’s not what you mean, then I’d love to get a better sense of what you’re saying.
That’s basically it. What’s missing here is probabilities. I don’t need FAI research to have a high enough probability of helping to be considered “reasonable” in order to believe that it is still the best action. Similarly, I don’t need to believe that AGI will be developed in the next one or even a few hundred years for it to be urgent. Basically, the expected value is dominated by the negative utility of doing nothing (the loss of virtually all utility forever) and by my belief that UFAI is the default outcome (high probability). I do, however, believe that AGI could be developed soon; that simply adds to the urgency.
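To make that concrete, here is a minimal sketch of the expected value reasoning, with every number a purely illustrative assumption rather than an estimate I’d defend:

```python
# A minimal sketch of the expected value reasoning above. All numbers are
# purely illustrative assumptions, not actual estimates.
p_ufai_default = 0.9     # assumed probability that UFAI is the default outcome
p_research_helps = 0.01  # assumed (small) probability that FAI research averts it
u_good_future = 1.0      # utility of a good outcome, normalised
u_ufai = 0.0             # UFAI: loss of virtually all utility

# Doing nothing: we only get a good outcome where UFAI wasn't the default anyway.
ev_do_nothing = (1 - p_ufai_default) * u_good_future + p_ufai_default * u_ufai

# Doing the research: some of the default-UFAI probability mass is converted
# into good outcomes.
p_good = (1 - p_ufai_default) + p_ufai_default * p_research_helps
ev_research = p_good * u_good_future + (1 - p_good) * u_ufai

# The gain scales with p_ufai_default, p_research_helps and, crucially, with
# the size of the stake (u_good_future - u_ufai).
print(ev_do_nothing, ev_research, ev_research - ev_do_nothing)
```

Because the stake is taken to be astronomically large, even a small chance of the research helping keeps the difference significant; the timing of AGI mostly affects urgency, not the sign of the calculation.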
Cool, glad I understood. Yes, the argument could be made more specific with probabilities. At this stage, I’m deliberately being vague because that allows for more flexibility: i.e. there are multiple ways you can assign probabilities and values to the premises such that they support the conclusion, and I don’t want to specify just one of them at the expense of the others.
If I get to the end of the project, I plan to consider the argument in detail, at which point I will start to give more specific (though certainly not precise) probabilities for the different premises.