It seems like the list mostly explains away the evidence that “humans can’t currently prevent value drift”, since the points apply much less to AIs. (I don’t know if you agree.)
As you mention, (1) probably applies less to AIs (for better or worse).
(2) applies to AIs in the sense that many features of AIs’ environments will be determined by what tasks they need to accomplish, rather than what will lead to minimal value drift. But the reason to focus on the environment in the human case is that it’s the ~only way to affect our values. By contrast, we have much more flexibility in designing AIs, and it’s plausible that we can design them so that their values aren’t very sensitive to their environments. Also, if we know that particular types of inputs are dangerous, the AIs’ environment could be controllable in the sense that less-susceptible AIs could monitor for such inputs, and filter out the dangerous ones.
(3): “can’t change the trajectory of general value drift by much” seems less likely to apply to AIs (or so I’m arguing). “Most people are selfish and don’t care about value drift except to the extent that it harms them directly” means that human value drift is pretty safe (since people usually maintain some basic sense of self-preservation) but that AI value drift is scary (since it could lead your AI to totally disempower you).
(4) As you noted in the OP, AI could change really fast, so you might need to control value-drift just to survive a few years. (And once you have those controls in place, it might be easy to increase the robustness further, though this isn’t super obvious.)
(5) For better or worse, people will probably care less about this in the AI case. (If the threat-model is “random drift away from the starting point”, it seems like it would be for the better.)
Since the space of possible AIs is much larger than the space of humans, there are more degrees of freedom along which AI values can change.
I don’t understand this point. We (or AIs that are aligned with us) get to pick from that space, and so we can pick the AIs that have least trouble with value drift. (Subject to other constraints, like competitiveness.)
(Imagine if AGI is built out of transformers. You could then argue “since the space of possible non-transformers is much larger than the space of transformers, there are more degrees of freedom along which non-transformer values can change”. And humans are non-transformers, so we should be expected to have more trouble with value drift. Obviously this argument doesn’t work, but I don’t see the relevant disanalogy to your argument.)
Creating new AIs is often cheaper than creating new humans, and so people might regularly spin up new AIs to perform particular functions, while discounting the long-term effect this has on value drift (since the costs are mostly borne by civilization in general, rather than them in particular).
Why are the costs mostly borne by civilization in general? If I entrust some of my property to an AI system, and its values change, that seems bad for me in particular?
Maybe the argument is something like: As long as law-and-order is preserved, things are not so bad for me even if my AI’s values start drifting. But if there’s a critical mass of misaligned AIs, they can launch a violent coup against the humans and the aligned AIs. And my contribution to the coup-probability is small?