Yes, if we’re talking about the overall chance of winning, but I was talking about the chance of winning through a specific scenario (directly building FAI). If the chance of that is tiny, why did your cost/benefit analysis of the proposed course of action (encouraging open FAI research) focus completely on it?
I see; I’m guessing you view the “second round” (post-WBE/human intelligence enhancement) as not being similarly unlikely to eventually win. I agree that if the first round (working on FAI now, pre-WBE) has only a tiny chance of winning, while the second has a non-tiny chance (taking into account the probability of no catastrophe occurring before the second round, and of it being dominated by an FAI project rather than a random AGI), then it’s better to sacrifice the first round to make the second round healthier. But I also see only a tiny chance of winning the second round, mostly because of the increasing UFAI risk in the meantime, and the difficulty of winning the race in a way that grants you the advantages of the second round, rather than just producing a UFAI really fast.
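To make the comparison concrete, here is a minimal sketch of the expected-value calculation implicit in the paragraph above. All of the numbers and variable names are hypothetical placeholders of my own, not actual estimates from either side of this exchange:

```python
# Illustrative comparison of the two routes to winning.
# Every probability below is a hypothetical placeholder, not an estimate.

# Route 1: directly building FAI now, pre-WBE.
p_win_round1 = 0.01  # hypothetical: tiny chance of direct FAI success

# Route 2: sacrifice round 1 to make round 2 healthier.
p_no_catastrophe_before_round2 = 0.5  # hypothetical: UFAI risk grows in the meantime
p_fai_project_dominates_round2 = 0.2  # hypothetical: winning the WBE/enhancement race
p_win_round2_given_dominance = 0.5    # hypothetical: FAI success with round-2 advantages

p_win_round2 = (p_no_catastrophe_before_round2
                * p_fai_project_dominates_round2
                * p_win_round2_given_dominance)

print(f"P(win via round 1): {p_win_round1:.3f}")
print(f"P(win via round 2): {p_win_round2:.3f}")
# Sacrificing round 1 only makes sense if P(win via round 2) clearly
# exceeds P(win via round 1); the claim above is that the round-2
# product also comes out tiny once each factor is small.
```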
(Another thread of this conversation is here.)