It seems like you’re arguing that hard takeoff is inevitable, which as far as I’m aware has never been shown convincingly.
So when did the goalposts get moved to proving that hard takeoff is inevitable?
The claim that research into FAI theory is useful requires only that it be shown that uFAI might be dangerous. Showing that is pretty much a slam dunk.
The claim that research into FAI theory is urgent requires only that it be shown that hard takeoff might be possible (with a probability > 2% or so).
And, as the nightmare scenarios of de Garis suggest, even if the fastest possible takeoff turns out to take years to accomplish, such a soft, but reckless, takeoff may still be difficult to stop short of war.
Assuming there aren’t better avenues to ensuring a positive hard takeoff.
Good point. Certainly the research strategy that SIAI currently seems to be pursuing is not the only possible approach to Friendly AI, and FAI is not the only approach to human-value-positive AI. I would like to see more attention paid to a balance-of-power approach: relying on AIs to monitor other AIs for incipient megalomania.
Calls to slow down, not publish, and not fund seem common in the name of friendliness.
However, unless those are internationally coordinated, a highly likely effect will be to ensure that superintelligence is developed elsewhere.
What is needed most, IMO, is for good researchers to be first. So advising good researchers to slow down in the name of safety is probably one of the very worst possible things that spectators can do.
So when did the goalposts get moved to proving that hard takeoff is inevitable?
It doesn’t even seem hard to prevent. Topple civilization, for example. Humans have managed to achieve that regularly thus far, and it is entirely possible that we would never recover sufficiently to construct a hard takeoff scenario if we nuked ourselves back to another dark age.