David,
Throttling an AI to human intelligence is like aiming your brand new superweapon at the world with the safety catch on. Potentially interesting, but really not worth the risk.
Besides, Eliezer would probably say that the F in FAI is the point of the code, not a module bolted into the code. There’s no ‘building the AI and tweaking the morality’. Either it’s spot on when it’s switched on, or it’s unsafe.