So you have a crisp concept called “unbounded utility maximizer,” such that some AI systems are one, some aren’t, and the ones that aren’t are safe. Your plan is to teach everyone where that sharp conceptual boundary is, and then what? Convince them to walk back over the line and stay there?
Do you think your mission is easier or harder than nuclear disarmament?
The alignment problem isn’t a political opinion; it’s a mathematical truth. If people understand it, they can and will want to work out the line for themselves, with the scientific community publicly helping anyone who wants it.
Nuclear disarmament is hard because if someone else defects, you die. But the point here is that if you defect, you also die. So the payoff matrix for defecting is different, especially if you know that other people also know their own cost of defection is high.
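To make that payoff-matrix point concrete, here is a toy sketch in Python. The specific numbers are my own illustrative assumptions, not anything from this exchange; the only claim it encodes is the structural one above: in the second game, defection is fatal for the defector too.

```python
# Toy 2x2 games (illustrative payoffs, my own assumptions).
# Players choose COOPERATE (stay inside the line) or DEFECT (cross it).
# Payoffs are (you, them); higher is better; -100 stands in for "you die".

COOPERATE, DEFECT = 0, 1

# Classic disarmament intuition: if the other side defects and you don't,
# you die; defecting yourself at least avoids the worst case.
disarmament = {
    (COOPERATE, COOPERATE): (0, 0),
    (COOPERATE, DEFECT):    (-100, 10),
    (DEFECT,    COOPERATE): (10, -100),
    (DEFECT,    DEFECT):    (-50, -50),
}

# The claim above: crossing the line kills the defector as well, so there
# is no outcome in which defecting beats staying put.
self_destructive = {
    (COOPERATE, COOPERATE): (0, 0),
    (COOPERATE, DEFECT):    (-100, -100),
    (DEFECT,    COOPERATE): (-100, -100),
    (DEFECT,    DEFECT):    (-100, -100),
}

def best_response(game, opponent_move):
    """Return my payoff-maximizing move against a fixed opponent move."""
    return max((COOPERATE, DEFECT), key=lambda me: game[(me, opponent_move)][0])

for name, game in [("disarmament", disarmament), ("self-destructive", self_destructive)]:
    print(name,
          "| best vs cooperate:", best_response(game, COOPERATE),
          "| best vs defect:", best_response(game, DEFECT))
# disarmament: DEFECT (1) is the best response to either move.
# self-destructive: COOPERATE (0) is at least as good against either move.
```

The one structural change between the two games, defection being fatal for the defector, is enough to flip the best response from “always defect” to “cooperation weakly dominates,” which is the sense in which the decision matrix here differs from disarmament.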
If you launch the nukes, you also die, and we spend a lot of time worrying about that. Why?
We actually don’t worry about that very much anymore. Nothing close to the 1960s, before the IAEA and second-strike capabilities. These days we mostly worry about escalation cycles, i.e. unpredictable responses by counterparties to minor escalations, and continuously upping the ante to save face.
There isn’t an obvious equivalent escalation cycle for somebody debating with themselves whether to destroy themselves or not. (The closer we get to alignment, the less true this is, btw.)