I can’t endorse treating the parts of an argument that lack strong evidence (e.g. funding SIAI is the best way to help FAI) as justifications for ignoring the parts that have strong evidence (e.g. FAI is the highest EV priority around). In a case like that, the rational thing to do is to investigate more or find a third alternative, not to go on with business as usual.
I agree with the first sentence but don’t know if the second sentence is always true. Even if my calculations show that solving friendly AI would avert the most probable cause of human extinction, I might estimate that any investigation into it will very likely turn out to be fruitless and that success is virtually impossible.
If I were 90% sure that humanity is facing extinction as a result of badly done AI, but my confidence that averting the risk is possible was only 0.1%, while I estimated that another existential risk would kill off humanity with 5% probability and my confidence in averting it was 1%, shouldn’t I concentrate on the less probable but solvable risk?
In other words, the question is not just how much evidence I have in favor of risks from AI, but how confident I can be that the risk can be mitigated, compared to other existential risks.
Could you outline your estimates of the expected value of contributing to the SIAI, and of the probability that a negative Singularity can be averted as a result of work done by the SIAI?
In practice, when I see a chance to do high-return work on other x-risks, such as synthetic bio, I do such work, though it can’t always be done publicly. It doesn’t seem at all likely to me that UFAI is an unsolvable problem, given enough capable people working hard on it for a couple of decades, and at the margin it’s by far the least well-funded major x-risk. So the real question, IMHO, is simply which organization has the best chance of actually turning funds into a solution. SIAI, FHI, or build your own org; but saying it’s impossible without checking is just being lazy/stingy, and is particularly non-credible coming from someone who isn’t making a serious effort on any other x-risk either.
If I were 90% sure that humanity is facing extinction as a result of badly done AI, but my confidence that averting the risk is possible was only 0.1%, while I estimated that another existential risk would kill off humanity with 5% probability and my confidence in averting it was 1%, shouldn’t I concentrate on the less probable but solvable risk?
I don’t think so—assuming we are trying to maximise p(save all humans).
It appears that at least one of us is making a math mistake.
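A minimal worked comparison, using the numbers from the hypothetical above and assuming “confidence in averting” is read as P(avert | disaster otherwise impending); the ΔP notation for the reduction in extinction probability is mine:

```latex
% Assumption: "confidence in averting" = P(avert | disaster otherwise impending)
\begin{align*}
\Delta P_{\text{AI}}    &= 0.90 \times 0.001 = 0.0009 \\
\Delta P_{\text{other}} &= 0.05 \times 0.01  = 0.0005
\end{align*}
% 0.0009 > 0.0005: on this reading, work on the AI risk buys the larger
% reduction in extinction probability, hence "I don't think so."
```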
It’s not clear whether “confidence in averting” means P(avert disaster) or P(avert disaster|disaster).
Likewise. ETA: on what I take as the default meaning of “confidence in averting” in this context, P(avert disaster|disaster otherwise impending).
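For contrast, a sketch of the same comparison under the other reading mentioned above, where “confidence in averting” is taken as the unconditional P(avert disaster):

```latex
% Assumption: "confidence in averting" = unconditional P(avert disaster)
\begin{align*}
\Delta P_{\text{AI}}    &= 0.001 \\
\Delta P_{\text{other}} &= 0.01
\end{align*}
% 0.01 > 0.001: on this reading the comparison flips, which is why the
% intended meaning of "confidence in averting" matters.
```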