Potential source of misunderstanding: we do have stated ‘terminal goals’, sometimes. But these goals do not function in the same way that a paperclipper’s utility function maximizes paperclips; there is a very weird set of obstacles in between, which this site generally deals with under headings like ‘akrasia’ or ‘superstimulus’. Asking a human about their ‘terminal goal’ is roughly equivalent to asking ‘what would you want, if you could want anything?’ It’s a form of emulation.
But these goals do not function in the same way that a paperclipper’s utility function maximizes paperclips
Sure, because humans are not utility maximizers.
The question, however, is whether terminal goals exist. A possible point of confusion is that I think of humans as having multiple, inconsistent terminal goals.
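To make the contrast concrete, here is a toy sketch (my own illustration; every name and number in it is made up): a single-utility paperclipper ranks actions by one quantity and always picks the same thing, while an agent with several competing goals, say survival versus short-term comfort standing in for the akrasia/superstimulus pull, picks differently depending on how the goals happen to be weighted in the moment.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    paperclips: int    # paperclips produced (the paperclipper's only concern)
    survival: float    # probability the agent survives this choice
    comfort: float     # immediate comfort (stands in for superstimulus pull)

ACTIONS = [
    Action("build paperclip factory", paperclips=1000, survival=0.9,   comfort=0.2),
    Action("snack and browse",        paperclips=0,    survival=0.95,  comfort=0.9),
    Action("exercise",                paperclips=0,    survival=0.999, comfort=0.3),
]

def paperclipper_choice(actions):
    """One fixed utility function, consistently maximized: paperclips, always."""
    return max(actions, key=lambda a: a.paperclips)

def humanlike_choice(actions, weights):
    """A crude stand-in for multiple, inconsistent terminal goals: the stated
    goal (survival) competes with immediate comfort, and which one wins
    depends on how the goals happen to be weighted at the moment."""
    return max(actions, key=lambda a: weights["survival"] * a.survival
                                      + weights["comfort"] * a.comfort)

print(paperclipper_choice(ACTIONS).name)                                   # build paperclip factory
print(humanlike_choice(ACTIONS, {"survival": 1.0, "comfort": 0.05}).name)  # exercise
print(humanlike_choice(ACTIONS, {"survival": 1.0, "comfort": 5.0}).name)   # snack and browse
```

The point of the toy is only that the ‘terminal goal’ the agent would state (survive) does not straightforwardly predict its behaviour, which is roughly what ‘akrasia’ and ‘superstimulus’ are pointing at.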
Huh? Why not?
Here’s an example of a terminal goal: to survive.