A UFAI that doesn’t go around eating stars to make paper-clips is probably already someone’s attempted FAI. Bringing arbitrarily large amounts of mass-energy and negentropy under one’s control is a Basic AI Drive, so you have to program the utility function to actually penalize it.
Only if the AI has goals that both require additional energy and lack a small, bounded success condition.
For example, if a UFAI has a goal that requires humans to exist, but it is not allowed to create, or cause the creation of, more humans, then once all humans are dead it won’t do anything.