I have a problem with calling this a “semi-open FAI problem”, because even if Eliezer’s proposed solution turns out to be correct, it’s still a wide open problem to develop arguments that can allow us to be confident enough in it to incorporate it into an FAI design. This would be true even if nobody can see any holes in it or have any better ideas, and doubly true given that some FAI researchers consider a different approach (which assumes that there is no such thing as “reality-fluid”, that everything in the multiverse just exists and as a matter of preference we do not / can not care about all parts of it in equal measure, #4 in this post) to be at least as plausible as Eliezer’s current approach.
In my view, we could make act-based agents without answering this or any similar questions. So I’m much less interested in answering them than I used to be. (There are possible approaches that do have to answer all of these questions, but at this point they seem very much less promising to me.)
We’ve briefly discussed this issue in the abstract, but I’m curious to get your take in a concrete case. Does that seem right to you? Do you think that we need to understand issues like this one, and have confidence in that understanding, prior to building powerful AI systems?
FAI designs that require high confidence solutions to many philosophical problems also do not seem very promising to me at this point. I endorse looking for alternative approaches.
I agree that act-based agents seem to require fewer high confidence solutions to philosophical problems. My main concern with act-based agents is that these designs will be in competition with fully autonomous AGIs (either alternative designs, or act-based agents that evolve into full autonomy due to inadequate care by their owners/users) to colonize the universe. The dependence on humans and lack of full autonomy in act-based agents seem likely to cause a significant weakness in at least one crucial area of this competition, such as general speed/efficiency/creativity, warfare (conventional, cyber, psychological, biological, nano, etc.), cooperation/coordination, self-improvement, and space travel. So even if these agents turn out to be “safe”, I’m not optimistic that we “win” in the long run.
My own idea is to aim for FAI designs that can correct their philosophical errors autonomously, the same way that we humans can. Ideally, we’d fully understand how humans reason about philosophical problems and how philosophy normatively ought to be done before programming or teaching that to an AI. But realistically, due to time pressure, we might have to settle for something suboptimal like teaching through examples of human philosophical reasoning. Of course there are lots of ways for this kind of AI to go wrong as well, so I also consider it to be a long shot.
Do you think that we need to understand issues like this one, and have confidence in that understanding, prior to building powerful AI systems?
Let me ask you a related question. Suppose act-based designs are as successful as you expect them to be. We still need to understand issues like the one described in Eliezer’s post (or solve the meta-problem of understanding philosophical reasoning) at some point, right? When do you think that will be? In other words, how much time do you think successfully creating act-based agents buys us?
Suppose act-based designs are as successful as you expect them to be.
It’s not so much that I have confidence in these approaches, but that I think (1) they are the most natural to explore at the moment, and (2) issues that seem like they can be cleanly avoided for these approaches seem less likely to be fundamental obstructions in general.
We still need to understand issues like the one described in Eliezer’s post (or solve the meta-problem of understanding philosophical reasoning) at some point, right? When do you think that will be?
Whenever such issues bear directly on our decision-making in such a way that making errors would be really bad. For example, when we encounter a situation where we face a small probability of a very large payoff, then it matters how well we understand the particular tradeoff at hand. The goal / best case is that the development of AI doesn’t depend on sorting out these kinds of considerations for its own sake, only insofar as the AI has to actually make critical choices that depend on these considerations.
The dependence on humans and lack of full autonomy in act-based agents seem likely to cause a significant weakness in at least one crucial area of this competition,
I wrote a little bit about efficiency here. I don’t see why an approval-directed agent would be at a serious disadvantage compared to an RL agent (though I do see why an imitation learner would be at a disadvantage by default, and why an approval-directed agent may be unsatisfying from a safety perspective for non-philosophical reasons).
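To make the contrast concrete, here is a minimal sketch of the three action-selection rules being compared (the names `q_values`, `approval_model`, and `demo_policy` are hypothetical stand-ins for learned components, not anything specified in this exchange):

```python
def rl_act(state, actions, q_values):
    # Standard RL agent: pick the action with the highest estimated long-run return.
    return max(actions, key=lambda a: q_values(state, a))

def approval_directed_act(state, actions, approval_model):
    # Approval-directed agent: pick the action the overseer is predicted to rate
    # most highly, rather than optimizing an explicit long-horizon objective.
    return max(actions, key=lambda a: approval_model(state, a))

def imitation_act(state, demo_policy):
    # Imitation learner: reproduce the action a human demonstrator would take,
    # which by default caps performance at roughly the demonstrator's level.
    return demo_policy(state)
```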
Ideally you would synthesize data in advance in order to operate without access to counterfactual human feedback at runtime—it’s not clear if this is possible, but it seems at least plausible. But it’s also not clear to me it is necessary, as long as we can tolerate very modest (<1%) overhead from oversight.
Of course if such a period goes on long enough then it will be a problem, but that is a slow-burning problem that a superintelligent civilization can address at its leisure. In terms of technical solutions, anything we can think of now will easily be thought of in this future scenario. It seems like the only thing we really lose is the option of technological relinquishment or serious slow-down, which don’t look very attractive/feasible at the moment.
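One way the oversight overhead could plausibly stay below 1% is to escalate only a small random fraction of decisions for human review; the sketch below illustrates that assumption, and is not necessarily the mechanism intended above:

```python
import random

OVERSIGHT_RATE = 0.01  # review fewer than 1% of decisions

def act_with_sparse_oversight(state, actions, approval_model, ask_human):
    # Escalate a small random fraction of decisions to a human overseer; act on
    # the learned model otherwise. Escalated cases can double as training data.
    if random.random() < OVERSIGHT_RATE:
        return ask_human(state, actions)
    return max(actions, key=lambda a: approval_model(state, a))
```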
The goal / best case is that the development of AI doesn’t depend on sorting out these kinds of considerations for its own sake, only insofar as the AI has to actually make critical choices that depend on these considerations.
Isn’t a crucial consideration here how soon after the development of AI such choices will need to be faced? If the answer is “soon” then it seems that we should try to solve the problems ahead of time or try to delay AI. What’s your estimate? And what do you think the first such choices will be?
What’s your estimate? And what do you think the first such choices will be?
I think that we are facing some issues all of the time (e.g. some of these questions probably bear on “how much should we prioritize fast technological development?” or “how concerned should we be with physics disasters?” and so on), but that it will be a long time before we face really big expected costs from getting these wrong. My best guess is that we will get to do many-centuries-of-current-humanity worth of thinking before we really need to get any of these questions right.
I don’t have a clear sense of what the first choices will be. My view is largely coming from not seeing any serious candidates for critical choices.
Anything to do with expansion into space looks like it will be very far away in subjective time (though perhaps not far in calendar time). Maybe there is some stuff with simulations, or value drift, but neither of those looks very big in expectation for now. Maybe all of these issues together make a 5% difference in expectation over the next few hundred subjective years? (Though this is a pretty unstable estimate.)
How did you arrive at the conclusion that we’re not facing big expected costs with these questions? It seems to me that, for example, the construction of large nuclear arsenals and lack of sufficient safeguards against nuclear war has already caused a large expected cost, and may have been based on one or more incorrect philosophical understandings (e.g., of what the right amount of concern for distant strangers and future people is). Similarly with “how much should we prioritize fast technological development?” But this is just from intuition, since I don’t really know how to compute expected costs when the uncertainties involved have a large moral or normative component.
My best guess is that we will get to do many-centuries-of-current-humanity worth of thinking before we really need to get any of these questions right.
Do you expect technological development to have plateaued by then (i.e., AIs will have invented essentially all technologies feasible in this universe)? If so, do you think there won’t be any technologies among them that would let some group of people/AIs unilaterally alter the future of the universe according to their understanding of what is normative? (For example, intentionally or accidentally destroy civilization, or win a decisive war against the rest of the world.) Or do you think something like a world government will have been created to control the use of such technologies?
How did you arrive at the conclusion that we’re not facing big expected costs with these questions?
There are lots of things we don’t know, and my default presumption is for errors to be non-astronomically-costly, until there are arguments otherwise.
I agree that philosophical problems have some stronger claim to causing astronomical damage, and so I am more scared of philosophical errors than of, e.g., our lack of effective public policy, our weak coordination mechanisms, global warming, or the dismal state of computer security.
But I don’t see really strong arguments for philosophical errors causing great damage, and so I’m skeptical that we are facing big expected costs (big compared to the biggest costs we can identify and intervene on, amongst them AI safety).
That is, there seems to be a pretty good case that AI may be built soon, and that we lack the understanding to build AI systems that do what we want, that we will nevertheless build AI systems to help us get what we want in the short term, and that in the long run this will radically reduce the value of the universe. The cases for philosophical errors causing damage are overall much more speculative, have lower stakes, and are less urgent.
the construction of large nuclear arsenals and lack of sufficient safeguards against nuclear war has already caused a large expected cost, and may have been based on one or more incorrect philosophical understandings
I agree that philosophical progress would very slightly decrease the probability of nuclear trouble, but this looks like a very small effect. (Orders of magnitude smaller than the effects from, say, increased global peace and stability, which I’d probably list as a higher priority right now than resolving philosophical uncertainty.) It’s possible we disagree about the mechanics of this particular situation.
Do you expect technological development to have plateaued by then (i.e., AIs will have invented essentially all technologies feasible in this universe)?
No. I think that 200 years of subjective time probably amounts to 5-10 more doublings of the economy, and that technological change is a plausible reason that philosophical error would eventually become catastrophic.
I said “best guess” but this really is a pretty wild guess about the relevant timescales.
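For reference, the arithmetic behind the 5-10 doublings figure, assuming (as that figure implies) an economic doubling time of roughly 20-40 subjective years:

$$
\frac{200\ \text{yr}}{40\ \text{yr/doubling}} = 5
\qquad\text{to}\qquad
\frac{200\ \text{yr}}{20\ \text{yr/doubling}} = 10 \ \text{doublings},
$$

i.e. a total growth factor of roughly $2^{5} \approx 32$ to $2^{10} \approx 1024$.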
intentionally or accidentally destroy civilization
As with the special case of nuclear weapons, I think that philosophical error is a relatively small input into world-destruction.
win a decisive war against the rest of the world
I don’t expect this to cause philosophical errors to become catastrophic. I guess the concern is that the war will be won by someone who doesn’t much care about the future, thereby increasing the probability that resources are controlled by someone who prefers not to undergo any further reflection? I’m willing to talk about this scenario more, but at face value the prospect of a decisive military victory wouldn’t bump philosophical error above AI risk as a concern for me.
I’m open to ending up with a more pessimistic view about the consequences of philosophical error, either by thinking through more possible scenarios in which it causes damage or by considering more abstract arguments.
But if I end up with a view more like yours, I don’t know if it would change my view on AI safety. It still feels like the AI control problem is a different issue which can be considered separately.
I have a problem with calling this a “semi-open FAI problem”, because even if Eliezer’s proposed solution turns out to be correct, it’s still a wide open problem to develop arguments that can allow us to be confident enough in it to incorporate it into an FAI design. This would be true even if nobody can see any holes in it or have any better ideas, and doubly true given that some FAI researchers consider a different approach (which assumes that there is no such thing as “reality-fluid”, that everything in the multiverse just exists and as a matter of preference we do not / can not care about all parts of it in equal measure, #4 in this post) to be at least as plausible as Eliezer’s current approach.
You’re right. Edited.