I find it very hard to imagine how one could create a procedure that wasn’t biased towards a particular value system. For example, Stuart Armstrong has written about how humans can be assigned any values whatsoever: you have to decide which parts of their behavior reflect genuine preferences and which parts are irrationality, and what values that implies. The way you decide what counts as correct behavior and what counts as irrationality seems like exactly the kind of choice that will depend on your own values. Even something like “this seems like the simplest way of assigning preferences” presupposes that it is valuable to pick a procedure based on its simplicity, and the post argues that even simplicity fails to distinguish between several alternative ways of assigning preferences.
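To make the underdetermination concrete, here is a minimal sketch (my own toy illustration, not Armstrong’s formalism; the planner names and reward numbers are invented for the example) of three different planner/reward decompositions that all reproduce the same observed behavior, so the observations alone can’t tell us which decomposition captures the agent’s “real” preferences:

```python
from typing import Dict

ACTIONS = ["stay", "leave"]

def greedy_planner(reward: Dict[str, float]) -> str:
    """A 'fully rational' planner: picks the highest-reward action."""
    return max(ACTIONS, key=lambda a: reward[a])

def anti_planner(reward: Dict[str, float]) -> str:
    """A 'fully anti-rational' planner: picks the lowest-reward action."""
    return min(ACTIONS, key=lambda a: reward[a])

def indifferent_planner(reward: Dict[str, float]) -> str:
    """An indifferent planner: ignores the reward entirely."""
    return "stay"

# Three (planner, reward) pairs of comparable simplicity,
# all producing the same observed action.
decompositions = [
    (greedy_planner,      {"stay": 1.0, "leave": 0.0}),  # prefers staying, acts rationally
    (anti_planner,        {"stay": 0.0, "leave": 1.0}),  # prefers leaving, acts irrationally
    (indifferent_planner, {"stay": 0.0, "leave": 0.0}),  # has no preferences at all
]

for planner, reward in decompositions:
    assert planner(reward) == "stay"  # observationally identical behavior
```

Picking one of these decompositions over the others requires a judgment about what counts as irrationality, and that judgment is exactly where one’s own values sneak in.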
Of course, just because we can’t be truly unbiased doesn’t mean we couldn’t be less biased, so maybe something like “pick the simplest system that produces sensible agents, breaking ties at random” could arguably be the least biased alternative. But human values seem quite complex; if there were some simple and unbiased solution that produced convergent values in every AI that implemented it, it might well have something in common with what we call values, but that’s not a very high bar. There’s a sense in which all bacteria share the same goal, “making more (surviving) copies of yourself is the only thing that matters”, and I’d expect the convergent value system to end up as something like that. That has some resemblance to human values, since many humans also care about having offspring, but not very much.