Why do you think this problem needs to be solved now? Couldn’t the idealized version of yourself spend the first few years figuring out how best to protect against value drift during the rest of the available time? It seems to me that a more urgent problem is: given that a person thinking alone for even a few years would likely go crazy, how do we set up the initial social dynamics for a group of virtual humans?
Because I’ve already found problems with these systems in the past few years, problems that other people did not expect to be there. If one of those people had been put into such a setup back then, I expect that it would have failed. Sure, if current me were put into the system, maybe I could find a few more problems and patch them, because I expect to find them.
But I wouldn’t trust many others, and I barely trust myself, because the difference is large between what the setup will be in practice, and what current research is in practice. The more of these issues we can solve ahead of time, the more we can delegate.
I don’t know which problems/systems you’re referring to. Maybe you could cite these in the post to give more motivation?
The main one is when I realised the problems with CEV: https://www.lesswrong.com/posts/vgFvnr7FefZ3s3tHp/mahatma-armstrong-ceved-to-death
The others are mainly oral: people come up with plans that involve simulating humans for long periods of time, I do the equivalent of saying “have you considered value drift?”, and (often) the other person’s reaction reveals that no, they had not considered value drift.
Because the difference is large between what the setup will be in practice, and what current research is in practice.
What are the most important differences that you foresee?
The most important differences I foresee are the unforeseen ones :-) I mean that seriously: anything that is easy to foresee will probably be patched before implementation.
But if we look at how research happens nowadays, it involves a variety of different approaches and institutional cultures, and certain levels of feedback, both from within the AI safety community and from the surrounding world, grounding our morality and keeping us connected to the flow of culture (such as it is).
Most of the simulation ideas do away with that. If someone suggested that the best idea for AI safety would be to lock up AI safety researchers in an isolated internet-free house for ten years and see what they came up with, we’d be all over the flaws in this plan (and not just the opportunity costs). But replace that physical, grounded idea with a similar one that involves “simulation”, and suddenly people flip into far mode and are more willing to accept it. In practice, a simulation is likely to be far more alien and alienating than just locking people up in a house. We have certain levels of control in a simulation that we wouldn’t have in reality, but even that could hurt: I’m not sure how I would react if I knew my mind, emotions, and state of tiredness were open to manipulation.
So what I’m mainly trying to say is that using simulations (or predictions about simulations) to do safety work is a difficult and subtle project, and needs to be thoroughly planned out with, at minimum, a lot of psychologists and some anthropologists. I think it can be done, but not glibly and not easily.
The others are mainly oral: people come up with plans that involve simulating humans for long periods of time, I do the equivalent of saying “have you considered value drift?”, and (often) the other person’s reaction reveals that no, they had not considered value drift.
Ah, value drift has been on my mind for so long that it’s surprising to me that people could be thinking about simulating humans for long periods of time without thinking about value drift. Thanks for the update!
The most important differences I foresee are the unforeseen ones :-) I mean that seriously: anything that is easy to foresee will probably be patched before implementation.
I guess my perspective here is that pretty soon we’ll be forced to live in a real environment that will be quite alien / drift-inducing already, so maybe it wouldn’t be so hard to construct a virtual environment that would be better in comparison. The risk-minimizing thing to do, then, would be to put yourself in such an environment as soon as possible and work on further risk reduction from there. (See this recent news as another sign pointing to that coming soon.)
Most of the simulation ideas do away with that.
Yeah I agree that getting the social aspect right is probably the hardest part, and we might need more than a small group of virtual humans to do that.
So what I’m mainly trying to say is that using simulations (or predictions about simulations) to do safety work is a difficult and subtle project, and needs to be thoroughly planned out with, at minimum, a lot of psychologists and some anthropologists. I think it can be done, but not glibly and not easily.
I think this framing makes sense. I agree with both individual points but… for the second point, can’t you pass the recursive buck almost as easily there? At least “what should I have thought about already for outsourcing questions to emulations?” seems like a pretty good first question to ask.
How so? If you set up a group of virtual humans to think about some problem, you have to decide, at least initially, who to bring into the group, how they can interact with each other, how the final output gets determined (if they don’t all agree on one answer), and under what circumstances the rules can be changed. If you do it wrong, you could get bad social dynamics before the group can figure out how to fix or improve the setup.
Also, on a more minor note: if I try to preserve myself from value drift using only the resources I’d have in the simulation, I expect to fail. Social dynamics might work, though, so we do need to think about those.
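As a toy illustration (every name and rule below is mine, invented for the sketch, not from the discussion), the setup decisions listed in the last exchange, who joins the group, how members may interact, how a final output is produced, and under what conditions the rules can change, could be written down explicitly before the group starts:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class DeliberationSetup:
    """Toy model of the choices that must be fixed before the group starts."""
    members: List[str]                     # who to bring into the group
    interaction_rules: List[str]           # how members may interact with each other
    aggregate: Callable[[List[str]], str]  # how a final output is determined from individual answers
    amendment_rule: str                    # under what circumstances the rules themselves can change

def plurality(answers: List[str]) -> str:
    """One possible aggregation rule: the most common answer wins."""
    return max(set(answers), key=answers.count)

# A hypothetical configuration, spelled out rather than left implicit.
setup = DeliberationSetup(
    members=["researcher_a", "researcher_b", "researcher_c"],
    interaction_rules=["written exchanges only", "outside reviewers read all logs"],
    aggregate=plurality,
    amendment_rule="unanimous consent of all members",
)
```

The point of writing it down is the one made above: each field is a decision that gets made implicitly if it is not made explicitly, and a bad implicit choice could produce bad social dynamics before the group can fix its own setup.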