This has great potential, thanks! But wouldn’t Alfred be motivated to present to virtual Hugh whatever stimulus results in vH selecting the highest-approval response, even if that means, e.g., hypnosis or brainwashing? I don’t see how “turtles all the way down” can solve this, since each level can solve the problem for the level above it while facing the same problem at its own level.
You only have trouble if there is a goal-directed level beneath the lowest approval-directed level. The idea is to be approval-directed at the lowest levels where it makes sense (and below that you are using heuristics, algorithms, etc., in the same way that a goal-directed agent eventually bottoms out in useful heuristics or algorithms).
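A minimal sketch of that structure, with entirely made-up action names and scores: each level defers its choice to the level below, and the recursion bottoms out in a fixed heuristic rather than a goal-directed optimizer, so no level is maximizing a goal of its own that would motivate manipulating its overseer.

```python
# Hypothetical sketch (illustrative names/scores only): an approval-directed
# stack that bottoms out in a plain heuristic, not a goal-directed optimizer.

def heuristic_score(action):
    # Lowest level: a fixed lookup, analogous to how a goal-directed agent
    # eventually bottoms out in heuristics or algorithms.
    return {"answer honestly": 2, "hypnotize overseer": 0}.get(action, 1)

def approval(level, action):
    # Each level's rating of an action is itself produced approval-directedly
    # by the level below; at level 0 it falls through to the heuristic.
    if level == 0:
        return heuristic_score(action)
    return approval(level - 1, action)

def choose(level, actions):
    # Pick whichever action the stack rates highest. No level optimizes a
    # goal of its own, so none gains from gaming the level above it.
    return max(actions, key=lambda a: approval(level, a))

print(choose(3, ["answer honestly", "hypnotize overseer"]))
```

The point of the sketch is only the shape of the recursion: "turtles all the way down" is fine precisely because the bottom turtle is a heuristic, not an agent.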