Is the distinction between 2 and 3 that “dog” is an imprecise concept, while “diamond” is precise? FWIW, 2 and 3 currently sound very similar to me, if 2 is ‘maximize the number of dogs’ and 3 is ‘maximize the number of diamonds’.
Feat #2 is: Design a mind which cares about anything at all in reality that isn’t a shallow sensory phenomenon directly observable by the agent. Like, maybe I have a mind-training procedure, where I don’t know what the final trained mind will value (dogs, diamonds, trees having particular kinds of cross-sections at year 5 of their growth), but I’m damn sure the AI will care about something besides its own sensory signals. Such a procedure would accomplish feat #2, but not #3.
Feat #3 is: Design a mind which cares about a particular kind of object. We could target the mind-training process to care about diamonds, or about dogs, or about trees, but to solve this problem, we have to ensure the trained mind significantly cares about one kind of real-world entity in particular. Therefore, feat #3 is strictly harder than feat #2.
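One rough way to formalize the contrast (my own gloss; the symbols $Z$, $Z^{*}$, and $U$ are purely illustrative, not part of the definitions above):

$$\textbf{Feat \#2:}\quad \exists\, Z\ \text{(a real-world latent, not a sensory signal)}\ \text{such that the trained mind's values}\ U\ \text{depend on}\ Z.$$
$$\textbf{Feat \#3:}\quad \text{for a designer-chosen real-world latent}\ Z^{*}\ \text{(dogs, diamonds, \dots)},\ \text{the trained mind's values}\ U\ \text{depend significantly on}\ Z^{*}.$$

On this reading, feat #2 is an existence claim over possible targets, while feat #3 demands hitting one particular target, so any procedure achieving #3 also achieves #2 but not conversely.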
If you could reliably build a dog maximizer, I think that would also be a massive win and would maybe mean that the alignment problem is mostly-solved. (Indeed, I’m inclined to think that’s a harder feat than building a diamond maximizer.)
I actually think that the dog- and diamond-maximization problems are about equally hard, and, to be totally honest, neither seems that bad[1] in the shard theory paradigm.
Surprisingly, I weakly suspect the harder part is getting the agent to *maximize* real-world dogs *in expectation*, not getting the agent to maximize *real-world dogs* in expectation. I think “figure out how to build a mind which cares about the number of real-world dogs, such that the mind intelligently selects plans which lead to a lot of dogs” is significantly easier than building a dog-maximizer.
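As a rough sketch of the distinction being drawn (my own notation; $\pi$, $N_{\text{dogs}}$, and the comparison to the status quo are illustrative, not a proposal for how such a mind would actually work):

$$\textbf{Dog-maximizer:}\quad \pi^{*} \in \arg\max_{\pi}\ \mathbb{E}\!\left[N_{\text{dogs}} \mid \pi\right]$$
$$\textbf{Dog-caring mind:}\quad \text{selects some}\ \pi\ \text{with}\ \mathbb{E}\!\left[N_{\text{dogs}} \mid \pi\right]\ \text{far above the status quo, without being the literal argmax.}$$

The first pins the mind’s choices to the exact optimum of an expectation over world-states; the second only asks that caring about dogs reliably steer planning toward lots of dogs.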
I appreciate that this claim is hard to swallow. In any case, I want to focus on inferentially-closer questions first, like how human values form.