Stopping or delaying AI development feels more like trying to interfere with an already-running process. By contrast, there are no existing norms on what we use AI for that we would have to fight against, and debates on those norms are already beginning. For new things, I expect the public to be particularly risk-averse.
Do you think that at the time when AI development wasn’t an already-running process, and AI was still a new thing that the public could be expected to be risk-averse about (when would you say that was?), the argument “working on alignment isn’t urgent because humans can probably coordinate to stop AI development” would have been a good one?
Relatedly, it is a lot easier to make norms/laws/regulations now that bind our future selves.
Same question here. Back when “don’t develop AI” was still something we could have bound our future selves to, should we have expected that we would coordinate to stop AI development, and is it just bad luck that we haven’t succeeded in doing so?
Looking at the things governments and corporations say, it seems likely that they would do things like this.
Can you be more specific? What global agreement do you think would be reached that is both realistic and would solve the kinds of problems I’m worried about (e.g., unintentional corruption of humans by “aligned” AIs that give humans too much power or options they can’t handle, and deliberate manipulation of humans by unaligned AIs or by AIs aligned to other users)?
I think it would help me if you suggested some ways that technical solutions could help with these problems.
For example, create an AI that can help the user with philosophical questions at least as much as with technical questions. (This could be done, for example, by figuring out how to better use Iterated Amplification to answer philosophical questions, how to do imitation learning of human philosophers, or how to apply inverse reinforcement learning to philosophical reasoning.) Then the user could ask questions like “Am I likely to be corrupted by access to this technology? What can I do to prevent that while still taking advantage of it?” or “Is this just an extremely persuasive attempt at manipulation, or an actually good moral argument?”
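To make the first of those directions slightly more concrete, here is a minimal, purely illustrative sketch of the amplification step in Iterated Amplification applied to a philosophical question. The decomposition strategy, the `base_model` stub, and the example questions are all hypothetical placeholders I made up for illustration; the point is only to show the recursive control flow, not to claim this is how such a system would actually be built.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Subquestion:
    text: str


def decompose(question: str) -> List[Subquestion]:
    """Split a philosophical question into subquestions.

    A real system would have to learn this decomposition; it is
    hard-coded here purely to show the control flow.
    """
    return [
        Subquestion(f"What considerations bear on: {question}"),
        Subquestion(f"What would a careful philosopher say about: {question}"),
    ]


def amplify(question: str, base_model: Callable[[str], str], depth: int) -> str:
    """One amplification step: answer subquestions (recursively, while depth
    allows), then ask the base model to synthesize the subanswers."""
    if depth == 0:
        return base_model(question)
    sub_answers = [amplify(sq.text, base_model, depth - 1) for sq in decompose(question)]
    synthesis_prompt = (
        f"Question: {question}\n"
        + "\n".join(f"Subanswer {i}: {a}" for i, a in enumerate(sub_answers))
        + "\nSynthesize a considered answer:"
    )
    return base_model(synthesis_prompt)


if __name__ == "__main__":
    # Stub standing in for a trained model; it just labels its input.
    stub_model = lambda prompt: f"[model output for: {prompt[:50]}...]"
    print(amplify("Is this an extremely persuasive manipulation attempt, "
                  "or an actually good moral argument?", stub_model, depth=2))
```

The open research questions are, of course, whether a learned decomposition and synthesis of this kind can capture good philosophical reasoning at all, not the plumbing shown here.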
As another example, solve metaethics and build that into the AI, so that the AI can figure out or learn the user’s actual terminal values, which would make it easier to protect the user from manipulation and self-corruption. Even if the human user does become corrupted, the AI still has the correct utility function, and once it has made enough technological progress it can uncorrupt the human.
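As a toy numerical illustration of that last point (not a proposal), the sketch below has an AI maintain a Gaussian estimate of a one-dimensional “terminal value” and heavily discount feedback it judges to be corrupted; the estimate then stays close to the original value even after the simulated user starts giving corrupted feedback. The scalar value model, the feedback process, and especially the corruption detector are all hypothetical simplifications I am assuming for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

true_value = 1.0      # the user's actual terminal value (unknown to the AI)
mean, var = 0.0, 4.0  # AI's Gaussian prior over that value
obs_var = 0.5         # noise in uncorrupted feedback


def bayes_update(mean, var, obs, obs_var):
    """Standard conjugate Gaussian update with known observation noise."""
    gain = var / (var + obs_var)
    return mean + gain * (obs - mean), (1 - gain) * var


for t in range(200):
    corrupted = t >= 100  # the user becomes corrupted halfway through
    feedback = (5.0 if corrupted else true_value) + rng.normal(0.0, obs_var ** 0.5)
    # Hypothetical corruption detector: the AI heavily discounts suspect feedback.
    # In this toy it simply "knows" when corruption starts, which is of course
    # exactly the hard part a real solution would have to provide.
    trust = 0.05 if corrupted else 1.0
    mean, var = bayes_update(mean, var, feedback, obs_var / trust)

print(f"AI's estimate of the user's terminal value: {mean:.2f} (true value {true_value})")
```

Running this, the estimate ends up near the true value rather than being dragged toward the corrupted feedback; the substantive difficulty is getting the metaethics and the corruption detection right, which the toy just assumes away.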
I view a lot of strategy research (e.g., from FHI and OpenAI) as figuring this out from the social side, and some of my optimism is based on conversations with those researchers.
Can you point me to any relevant results that have been written down, or explain what you learned from those conversations?
On the technical side, I feel quite stuck (for the reasons above), though I haven’t tried hard enough to be able to say that it’s too difficult.
To address this and the question (from the parallel thread) of whether you should personally work on this: I think we need people to either solve the technical problems or at least collectively try hard enough to convincingly say that it’s too difficult. (Otherwise, who is going to convince policymakers to adopt the very costly social solutions? Who is going to convince people to start or join a social movement to influence policymakers to consider those costly social solutions? The fact that those things tend to take a lot of time seems like sufficient reason for urgency on the technical side, even if you expect the social solutions to be feasible.) Who are these people going to be, especially the first ones to join the field and help grow it? Probably existing AI alignment researchers, right? (I can probably make stronger arguments in this direction, but I don’t want to be too “pushy,” so I’ll stop here.)