It’s true that we’re mostly in this situation because certain people heard about the arguments for risk and either came up with terrible solutions to them or smelled a potent fount of personal power.
I'm a little new to the AI alignment field-building effort; would you put head researchers at OpenAI in this category?