My main disagreement with the FHI people is that I'm more worried about AI than they are (I'm probably up with the SIAI folks on this).
Where can we read FHI's analysis of AI risk? Why are they not as worried as you and the SIAI people? Has there ever been a debate between FHI and SIAI on this? What threats are they most worried about? What technologies do they want to push or slow down?
AI is high on the list: one of the top risks, even if their assessment of it is lower than SIAI's. Nuclear war, synthetic biology, nanotech, pandemics, social collapse: these are the other ones we're looking at.
Basically they don't buy the claim that "AI inevitably goes foom and inevitably takes over". They assign definite probabilities to these things happening, but their estimates are closer to 50% than to 100%.
They estimate it at 50%???
And there are other things they are more concerned about?
What are those other things?
They estimate a variety of conditional statements ("AI possible this century", "if AI then FOOM", "if FOOM then DOOM", etc.) with probabilities between 20% and 80% (I had the figures somewhere, but can't find them). I think when it was all multiplied out it came to somewhere in the 10-20% range.
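To make the arithmetic concrete, here is a minimal sketch of how a chain of conditional estimates multiplies out; the specific probability values are purely illustrative assumptions, not FHI's actual figures (which, as noted above, aren't to hand).

```python
# Illustrative sketch only: the values below are assumptions, not FHI's actual figures.
p_ai_this_century = 0.8   # assumed P("AI possible this century")
p_foom_given_ai = 0.5     # assumed P("FOOM" | AI)
p_doom_given_foom = 0.4   # assumed P("DOOM" | FOOM)

# Multiplying the chain of conditionals gives the overall estimate.
p_doom = p_ai_this_century * p_foom_given_ai * p_doom_given_foom
print(f"Overall estimate: {p_doom:.0%}")  # prints "Overall estimate: 16%"
```

With mid-range values like these, the product lands in the 10-20% band mentioned above.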
And I didn't say they thought other things were more worrying; just that AI wasn't the single overwhelming risk/reward factor that SIAI (and I) believe it to be.
A wild guess. FHI believes that the best that can reasonably be done about existential risks at this point in time is to do research into existential risks, including possible unknown unknowns, and into strategies to reduce current existential risks. This somewhat agrees with their FAQ:
Research into existential risk and analysis of potential countermeasures is a very strong candidate for being the currently most cost-effective way to reduce existential risk. This includes research into some methodological problems and into certain strategic questions that pertain to existential risk. Similarly, actions that contribute indirectly to producing more high-quality analysis on existential risk and a capacity later to act on the result of such analysis could also be extremely cost-effective. This includes, for example, donating money to existential risk research, supporting organizations and networks that engage in fundraising for existential risks work, and promoting wider awareness of the topic and its importance.
In other words, FHI seems to focus on meta-level issues and existential risks in general, rather than on the specifics of any particular risk.