That makes sense as an explanation of the differing intuitions, yes, and it's essentially how I think about it.
The second heuristic, about manipulation, then seems useful in practice (more agents will try to exploit us than to satisfy us), but isn't it much weaker when applied to the actual wireheading scenario? The first heuristic at least addresses the conflict (though perhaps in the wrong way), whereas the second simply ignores it.
I agree; the second heuristic doesn’t apply particularly well to this scenario. Some terminal values seem to come from a part of the brain which isn’t open to introspection, so I’d expect them to arise as a result of evolutionary kludges and random cultural influences rather than necessarily making any logical sense.
The thing is, once we have a value system that's reasonably stable (i.e., what we want is the same as what we want to want), we don't want to change our preferences even if we can't explain where they came from.