Thanks!
For homogeneity, I guess I was mainly thinking that in the era of not-knowing-how-to-align-an-AGI, people would tend to try lots of different new things, because nothing so far has worked. I agree that once there’s an aligned AGI, it’s likely to get copied, and if new, better AGIs are trained, people may be inclined to try to keep the procedure as close as possible to what’s worked before.
I hadn’t thought about whether different AGIs with different goals are likely to compromise vs. fight. There’s Wei Dai’s argument that compromise is very easy for AGIs because they can “merge their utility functions”. But this kind of AGI, at least, doesn’t have a utility function … maybe there’s a way to do something analogous with multiple parallel value functions, but I’m not sure that would actually work. There are also old posts about AGIs checking each other’s source code for sincerity, but can they actually understand what they’re looking at? Transparency is hard. And how do they verify that there isn’t a backup stashed somewhere else, ready to jump out at a later date and betray the agreement? Also, humans have social instincts that AGIs don’t, which pushes in both directions, I think. And humans are easier to kill / easier to credibly threaten. I dunno. I’m not inclined to have confidence in any direction.
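(For concreteness, here’s one minimal way to cash out “merge their utility functions” — my own toy gloss, not necessarily Wei Dai’s exact proposal: the two AGIs jointly build a successor, hand it their resources, and have it maximize a weighted combination of their utilities, with the weight settled by bargaining:

$$U_{\text{merged}}(x) \;=\; \lambda\, U_A(x) \;+\; (1-\lambda)\, U_B(x), \qquad \lambda \in [0,1].$$

The point of the worry above is that an agent steered by learned value functions has no explicit $U_A$ to plug into a formula like this in the first place.)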
I agree that if a sufficiently smart misaligned AGI is running on a nice supercomputer somewhere, it would have reasons to stay right there and pursue its goals within that institution, and it would also have reasons to try to escape and self-replicate elsewhere in the world. I guess we can be concerned about both. :-/