Basically they don’t buy the “AI inevitably goes foom and inevitably takes over” story. They assign definite probabilities to these things happening, but their estimates are closer to 50% than to 100%.
They estimate it at 50%???
And there are other things they are more concerned about?
What are those other things?
They estimate a variety of conditional statements (“AI possible this century”, “if AI then FOOM”, “if FOOM then DOOM”, etc.) with probabilities between 20% and 80% (I had the figures somewhere, but can’t find them). I think when it was all multiplied out, it was in the 10-20% range.
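To make the arithmetic concrete (the actual figures aren’t given here, so these numbers are purely illustrative): if the conditional estimates were, say, P(AI this century) = 0.8, P(FOOM | AI) = 0.5, and P(DOOM | FOOM) = 0.4, the chained product would be

$$0.8 \times 0.5 \times 0.4 = 0.16,$$

i.e. about 16%, which falls in the quoted 10-20% range.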
And I didn’t say they thought other things were more worrying; just that AI wasn’t the single overwhelming risk/reward factor that SIAI (and I) believe it to be.