Good point. Certainly the research strategy that SIAI currently seems to be pursuing is not the only possible approach to Friendly AI, and FAI is not the only approach to human-value-positive AI. I would like to see more attention paid to a balance-of-power approach: relying on AIs to monitor other AIs for incipient megalomania.
Calls to slow down, to withhold publication, and to withhold funding seem common in the name of friendliness.
However, unless such measures are internationally coordinated, their most likely effect is to ensure that superintelligence is developed elsewhere.
What is needed most, IMO, is for good researchers to be first. So advising good researchers to slow down in the name of safety is probably one of the worst possible things that spectators can do.
Assuming, that is, that there aren't better avenues for ensuring a positive hard takeoff.