Not knowing v is supposed to help with these situations: if you don't know which values you want to minimise harm to, your best option is to not do too much.
My intended point with that example was to question what it means for v to be at 0, 1, or −1. If v is defined to be always non-negative (something like “estimate the volume of the future that is ‘different’ in some meaningful way”), then flipping the direction of v makes sense. But if v is some measure of how happy Bob is, then flipping the direction of v means that we’re trying to find a plan that will satisfy both someone who likes Bob and someone who hates Bob. Is that best done by setting the happiness value near 0? If so, what level of Bob’s happiness is 0? What if it’s worse than it is without any action on the agent’s part?
Perhaps the solution there is to just say “yeah, we only care about things that are metrics (i.e. 0 is special and natural),” but I think that’s unsatisfying because it only allows for negative externalities, and we might want to incorporate both positive and negative externalities into our reasoning.
Is that best done by setting the happiness value near 0?
0 is not the default; the default is the expected v, given that M(εu+v) is unleashed upon the world. That event will (counterfactually) happen, and neither M(εu+v) nor M(u-v) can change it. M(εu+v) will not allow an S(u) that costs it v-utility; given that, M(u-v) knows that it cannot reduce the expected v, so it will do its best to build an S(u) that affects it as little as possible.
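A minimal Python sketch of that baseline idea; the distribution standing in for “v once M(εu+v) is unleashed” is invented purely for illustration:

```python
import random

random.seed(0)

# Toy stand-in for the distribution of v once M(εu+v) is unleashed;
# the Gaussian and its parameters are invented for illustration.
def sample_v_after_unleashing():
    return random.gauss(3.0, 1.0)

# The default M(u-v) measures against: the expected v given that
# M(εu+v) acts, not v = 0.
default_v = sum(sample_v_after_unleashing() for _ in range(10_000)) / 10_000

# M(u-v) compares a candidate S(u) design to that default: it gains
# from a design only if Δu exceeds Δv, where Δv is measured from
# default_v rather than from zero.
def improves_on_default(expected_u_gain, expected_v_under_design):
    delta_v = expected_v_under_design - default_v
    return expected_u_gain - delta_v > 0

print(round(default_v, 2))            # close to 3.0, the shared baseline
print(improves_on_default(1.0, 3.5))  # True: Δu = 1.0 exceeds Δv ≈ 0.5
```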
If you prefer: since the plans could be vetoed by someone who hates Bob, all Bob-helping plans will get vetoed. The agent who likes Bob must therefore be careful not to inadvertently hurt Bob, because there is an asymmetry of impact: Bob-hurting plans are the ones that will get accepted.
Keeping the agent ignorant of v (or “Bob”) is purely to prevent something like “S(u) rampages out of control, but then fine-tunes the universe to undo any expected impact on Bob’s happiness”.
0 is not the default; the default is the expected v, given that M(εu+v) is unleashed upon the world.
Now that I have time to actually work through the math, I agree that 0 is not a special point for v; it’s a special point for Δv (which seems reasonable).
But I’m not sure what the second M is doing, now. An S design that satisfies M(u-v) more than the default is one where Δ(u-v)>0, or Δu>Δv (1). An S design that satisfies M(εu+v) more than the default is one where Δ(εu+v)>0, or εΔu>-Δv (2). If you look at the 2D graph of Δu and Δv, the point of constraint (1) is to block off the southeastern half of the graph (cases where our negative externality outweighs our improvement), and the point of constraint (2) is to block off the “southwestern” half (rotated by ε).
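Expressed as code, with ε and the sample points chosen arbitrarily for illustration, the two constraints are:

```python
# The two acceptance constraints from the comment above, with ε and
# all sample (Δu, Δv) points chosen arbitrarily for illustration.
EPS = 0.01

def satisfies_M_u_minus_v(delta_u: float, delta_v: float) -> bool:
    return delta_u > delta_v            # constraint (1): Δ(u-v) > 0

def satisfies_M_eps_u_plus_v(delta_u: float, delta_v: float) -> bool:
    return EPS * delta_u > -delta_v     # constraint (2): Δ(εu+v) > 0

def accepted(delta_u: float, delta_v: float) -> bool:
    return (satisfies_M_u_minus_v(delta_u, delta_v)
            and satisfies_M_eps_u_plus_v(delta_u, delta_v))

for du, dv in [(1.0, 0.5), (1.0, 2.0), (1.0, -0.005), (1.0, -0.5)]:
    print((du, dv), accepted(du, dv))
# (1.0, 0.5)    True  -- both constraints hold
# (1.0, 2.0)    False -- fails (1): Δv exceeds Δu
# (1.0, -0.005) True  -- a tiny v-loss, outweighed by εΔu
# (1.0, -0.5)   False -- fails (2): the v-loss swamps εΔu
```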
Constraint 1 seems reasonable—don’t create more negative externalities than the benefits you accrue. Constraint 2 seems weird, because the cases it cuts off are the cases where S creates more positive externalities than it loses in benefits. This is sort of an anti-first law, in that the agent will choose inaction, or pursuing its duties, over helping out others—but only when it helps too much! A mail delivery robot might be willing to deliver one less piece of mail in order to prevent one blind pedestrian from walking in front of a truck, but not be willing to deliver one less piece of mail in order to prevent two blind pedestrians from walking in front of a truck, because that would have counterfactually caused it to not be made in the first place (and thus goes against its inborn moral sense).
[Edit]I suppose the underlying principle here might be “timidity”—the agent doesn’t trust itself to get right any plan which has a larger impact than some threshold, and so has a tightly bounded utility function in some way. But this doesn’t look like the right way to bound it.[/Edit]
(If we have defined all possible vs such that Δv≥0, then constraint 2 is never active, because we’re only considering the right half of that graph.)
Keeping the agent ignorant of v (or “Bob”) is purely to prevent something like “S(u) rampages out of control, but then fine-tunes the universe to undo any expected impact on Bob’s happiness”.
Suppose among the human population there lives one morally relevant person (or, if you prefer, 36 of them). The AI knows that it is very important that they not be disturbed—but not who they are.
Contrast this to the case where the AI thinks that all humans are morally relevant, with an importance of not disturbing a person that’s about 1/N of the importance assigned in the previous case. What’s the difference between the two cases? To first order, it looks like nothing; to second order, it looks like the first case might have some bizarreness about summing up disturbances across people that the second case won’t have.
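A toy calculation of the two cases (N, the weight W, and the quadratic aggregator below are all invented for illustration):

```python
# Case A: one unknown morally relevant person out of N, with weight W.
# Case B: every person is morally relevant, with weight W / N each.
N, W = 1000, 100.0

# First order: the expected penalty for disturbing one person is identical.
penalty_A = (1 / N) * W   # chance 1/N we disturbed the person who matters
penalty_B = W / N         # the disturbed person matters for sure, at W/N
assert penalty_A == penalty_B

# Second order: suppose disturbing m relevant people costs m**2 weight
# units (an arbitrary superlinear aggregator). Disturb k people:
k = 3
second_order_A = (k / N) * (1 ** 2) * W   # at most one relevant person hit
second_order_B = (k ** 2) * (W / N)       # all k hits are relevant
print(second_order_A, second_order_B)     # 0.3 vs 0.9 -- the cases diverge
```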
That is, I don’t think we can just say “the agent is ignorant of v, so it does the right thing by default.” That sounds like trying to extract useful work out of ignorance! The agent’s prior over v—that is, what sort of externalities are worth preventing—will determine what prohibitions or reservations are baked into S, and it seems really strange to me to trust that the uncertainty will take care of it. If we don’t have the right reference class to begin with, being uncertain will include lots of things from the wrong reference class, and S will make crazy tradeoffs. But if we have the right reference class, we might as well go with it.
This is reminding me of Jainism, actually—I had just been focusing on building a robot with ahimsa, but I think also trying to incorporate anekantavada would lead to a suggestion like this one.
An S design that satisfies M(u-v) more than the default is one where Δ(u-v)>0, or Δu>Δv (1). An S design that satisfies M(εu+v) more than the default is one where Δ(εu+v)>0, or εΔu>-Δv (2).
That sounds like trying to extract useful work out of ignorance!
I am trying to extract work from ignorance, the same way that I did with “resource gathering”. An AI that is ignorant of its utility will try to gather power and resources, and preserve flexibility—that’s a kind of behaviour you can get mainly from an ignorant AI.
all possible vs such that Δv≥0
Unlikely, because I’d generally design with equal chances of v and -v (or at least comparable chances).
This is sort of an anti-first law, in that the agent will choose inaction, or pursuing its duties, over helping out others—but only when it helps too much!
We don’t know that v is nice—in fact, it’s likely nasty, with -v also being nasty. So we don’t want either of them to be strongly maximised.
What happens here is that as Δu increases and S(u) uses up resources, the probability that Δv will remain bounded (in plus or minus) decreases strongly. So the best way of keeping Δv bounded is not to burn many resources on Δu.
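A Monte Carlo sketch of that effect; the relationship between resources burned and the spread of Δv is a made-up model, chosen only to illustrate the direction of the argument:

```python
import random

random.seed(1)

# Invented model: the more resources S(u) burns pursuing Δu, the wider
# the distribution of Δv becomes. With v and -v equally likely, E[Δv]
# is zero either way, but the chance that |Δv| stays within a bound
# shrinks as resource use grows.
def p_delta_v_bounded(resources: float, bound: float = 1.0,
                      trials: int = 50_000) -> float:
    hits = 0
    for _ in range(trials):
        delta_v = random.gauss(0.0, 0.1 + resources)  # spread grows with use
        if abs(delta_v) <= bound:
            hits += 1
    return hits / trials

for r in (0.0, 1.0, 10.0):
    print(r, p_delta_v_bounded(r))
# The bounded-impact probability falls as resource use rises, so a
# penalty on |Δv| (in either direction) favours low-resource plans.
```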
If we don’t have the right reference class to begin with
I’m assuming we don’t. And it’s much easier to define a category V such that we are fairly confident that there is a good utility/reference class in V than it is to pick that class out. But reduced-impact kind of behaviour might even help if we cannot define V. Even if we can’t say exactly that some humans are morally valuable, killing a lot of humans is likely to be disruptive for a lot of utility functions (in a positive or negative direction), so we get reduced impact from that.
Unlikely, because I’d generally design with equal chances of v and -v (or at least comparable chances).
We don’t know that v is nice—in fact, it’s likely nasty, with -v also being nasty. So we don’t want either of them to be strongly maximised.
I think we have different intuitions about what it means to estimate Δv over an uncertain set / the constraints we’re putting on v. I’m imagining integrating Δv dv over the set of possible vs, and so if there is any v whose negative is also in the set with the same probability, then the two will cancel out completely, neither of them affecting the end result.
It seems to me like the property you want comes from having non-negative vs, which might have opposite inputs. That is, instead of v_1 being “Bob’s utility function” and v_2 being “Bob’s utility function, with a minus sign in front,” v_3 would be “positive changes to Bob’s utility function that I caused” and v_4 would be “negative changes to Bob’s utility function that I caused.” If we assign equal weight to only v_1 and v_2, it looks like there is no change to Bob’s utility function that will impact our decision-making, since when we integrate over our uncertainty the two balance out.
We’ve defined v_3 and v_4 to be non-negative, though. If we pull Bob’s sweater to rescue him from the speeding truck, v_3 is positive (because we’ve saved Bob) and v_4 is positive (because we’ve damaged his sweater). So we’ll look for plans that reduce both (which is most easily done by not intervening, and letting Bob be hit by the truck). If we want the agent to save Bob, we need to include that in u, and if we do so it’ll try to save Bob in the way with minimal other effects.
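To put numbers on that contrast (the magnitudes are invented): with the signed pair the rescue has zero expected penalty, while the non-negative decomposition penalises any intervention:

```python
# Outcome of "pull Bob's sweater to save him": made-up magnitudes.
bob_saved_value = 10.0   # gain to Bob from being saved
sweater_damage = 1.0     # loss to Bob from the damaged sweater
net_change = bob_saved_value - sweater_damage  # Δ(Bob's utility) = +9

# Signed pair at equal weight: v_1 = Bob's utility, v_2 = -v_1.
signed_penalty = 0.5 * net_change + 0.5 * (-net_change)
print(signed_penalty)    # 0.0 -- the pair cancels; Bob never matters

# Non-negative decomposition: v_3 = positive changes I caused,
# v_4 = negative changes I caused; both count against the plan.
v3, v4 = bob_saved_value, sweater_damage
nonneg_penalty = 0.5 * v3 + 0.5 * v4
print(nonneg_penalty)    # 5.5 -- inaction scores 0.0, so Bob only
                         # gets saved if saving him is part of u
```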
What happens here is that as Δu increases and S(u) uses up resources, the probability that Δv will remain bounded (in plus or minus) decreases strongly. So the best way of keeping Δv bounded is not to burn many resources on Δu.
Agreed that an AI that tries to maximize “profit” instead of “revenue” is the best place to look for a reduced impact AI (I also think that reduced impact AI is the best name for this concept, btw). I don’t think I’m seeing yet how this plan is a good representation of “cost.” It seems that in order to produce minimal activity, we need to put effort into balancing our weights on possible vs such that inaction looks better than action.
(I think this is easier to formulate in terms of effort spent than consequences wrought, but clearly we want to measure “inaction” in terms of consequences, not actions. It might be very low cost for the RIAI to send a text message to someone, but then that someone might do a lot of things that impact a lot of people and preferences, and we would rather the RIAI just didn’t send the message.)
And it’s much easier to define a category V such that we are fairly confident that there is a good utility/reference class in V than it is to pick that class out.
It seems to me that any aggregation procedure over a category V is equivalent to a particular utility v*, and so the implausibility that a particular utility function v’ is the right one to pick applies as strongly to v*. For this to not be the case, we need to know something nontrivial about our category V or our aggregation procedure. (I also think we can, given an aggregation procedure or a category, work back from v’ to figure out at least one implied category or aggregation procedure given some benign assumptions.)
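The linear case of that equivalence is easy to exhibit; the utilities, weights, and lottery below are arbitrary:

```python
import numpy as np

# Three candidate utilities over three outcomes, plus a prior over them;
# all numbers are arbitrary.
V = np.array([[ 1.0, 0.0, -1.0],   # v_a
              [ 0.0, 2.0,  0.0],   # v_b
              [-1.0, 1.0,  1.0]])  # v_c
weights = np.array([0.5, 0.3, 0.2])

# Taking the expectation over the category collapses it to one utility v*.
v_star = weights @ V

# Any lottery over outcomes is ranked identically by the weighted
# aggregate and by v*, so the aggregation is equivalent to picking v*.
lottery = np.array([0.2, 0.5, 0.3])
assert np.isclose(weights @ (V @ lottery), v_star @ lottery)
print(v_star)
```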
The point here is that M(u-v) might not know what v is, but M(εu+v) certainly does, and this is not the same as maximising an unknown utility function.
The point here is that M(u-v) might not know what v is, but M(εu+v) certainly does, and this is not the same as maximising an unknown utility function.
Ah, okay. I think I see better what you’re getting at. My intuition is that there’s a mapping to minimization of a reasonable aggregation of the set of non-negative utilities, but I think I should actually work through some examples before I make any long comments.
Do you disagree with my description of the “resource gathering agent”: http://lesswrong.com/r/discussion/lw/luo/resource_gathering_and_precorriged_agents/
I don’t think I had read that article until now, but no objections come to mind.
My intuition is that there’s a mapping to minimization of a reasonable aggregation of the set of non-negative utilities
That would be useful to know, if you can find examples. Especially ones where all v and -v have the same probability (which is my current favourite requirement in this area).