Partisans of the other “hard problem” are also quick to tell people that the things they call research are not in fact targeting the problem at all. (I wonder if it’s something about the name...)
Much like the other hard problem, it’s easy to get wrapped up in a particular picture of what properties a solution “must” have, and construct boundaries between your hard problem and all those other non-hard problems.
Turning the universe to diamond is a great example. It’s totally reasonable that it could be strictly easier to build an AI that turns the world into diamond than to build an AI that is superhuman at doing good things, so that anyone claiming to have ideas about the latter should have even better ideas about the former. But that might not be the case: the most likely way I see it failing is if solving the hard left turn problem has details that depend on how you want to load the values, so that genuinely hard-problem-addressing work on value learning could nonetheless not be useful for specifying simple goals. (It may only help you get the diamond-universe AI “the hard way”: by doing the entire value-learning process, just with a different target!)