Somebody else might be able to answer better than me. I don’t know exactly what each researcher is working on right now.
“AI safety are now more focused on incidental catastrophic harms caused by a superintelligence on its way to achieve goals”
Basically, yes. The fear isn’t that AI will wipe out humanity because someone gave it the goal ‘kill all humans’.
For a huge number of innocent-sounding goals, ‘incapacitate all humans and other AIs’ is a very sensible precaution to take if all you care about is driving your chance of failure to zero. So is hiding the fact that you intend to do harm until the very last moment.
“rather than making sure artificial intelligence will understand and care about human values?”
If you solved that, then presumably the first part solves itself. So they’re definitely linked.