I liked this post, and look forward to the next one.
More specific and critical commentary (it seems easier to notice surprise than agreement):
(With embedded footnotes)
1.
If the CCC is right, then if power gain is disincentivised, the agent isn’t incentivised to overfit and disrupt our AU landscape.
(The CCC didn’t make reference to overfitting.)
Premise:
If A is true then B will be true.
Conclusion:
If A is false, then B will be false.
The conclusion doesn’t follow from the premise.
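The gap here is the classic fallacy of denying the antecedent. With A and B as plain booleans, an exhaustive check (a minimal sketch, not anything from the post) turns up the assignment that satisfies the premise while violating the conclusion:

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is false only when p is true and q is false.
    return (not p) or q

# Find assignments where the premise (A -> B) holds
# but the conclusion (not A -> not B) fails.
counterexamples = [
    (a, b) for a, b in product([True, False], repeat=2)
    if implies(a, b) and not implies(not a, not b)
]
print(counterexamples)  # [(False, True)]: A false, B true satisfies A -> B but not (not A -> not B)
```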
2.
Without even knowing who we are or what we want, the agent’s actions preserve our attainable utilities.
Note that preserving our attainable utilities isn’t a good thing, it’s just not a bad thing.
Issues: Attainable utilities indefinitely ‘preserved’ are wasted.
Possible issues: If an AI just happened to discover a cure for cancer, we’d probably want to know the cure. But if an AI didn’t know what we wanted, and just focused on preserving utility*, then (perhaps as a side effect of considering both that we might want to know the cure, and that we might not want to know it) it might not tell us, because that preserves utility. (The AI might operate on a framework that distinguishes between action and inaction, in a way that means it doesn’t do things that might be bad, at the cost of not doing things that might be good.)
*If we are going to calculate something and a reliable source (which has already done the calculation) tells us the result, we can save on energy (and preserve resources that can be converted into utility) by not doing the calculation. In theory this could include not only arithmetic, but simulations of different drugs or cancer treatments to come up with better options.
3.
We can tell it:
Is this a metaphor for making an ‘agent’ with that goal, or actually creating an agent that we can give different commands to and switch out/modify/add to its goals? (Why ask it to ‘make paperclips’ if that’s dangerous, when we can ask it to ‘make 100 paperclips’?)
4.
Narrowly improve paperclip production efficiency ← This is the kind of policy AUP_conceptual is designed to encourage and allow. We don’t know if this is the optimal policy, but by CCC, the optimal policy won’t be catastrophic.
Addressed in 1.
5.
Imagine I take over a bunch of forever inaccessible stars and jumble them up. This is a huge change in state, but it doesn’t matter to us.
It does a little bit.
It means we can’t observe them for astronomical purposes. But this isn’t the same as losing a telescope looking at them—it’s (probably) permanent, and maybe we learn something different from it. We learn that stars can be jumbled up. This may have physics/stellar engineering consequences, etc.
6.
AUP_conceptual solves this “locality” problem by regularizing the agent’s impact on the nearby AU landscape.
Nearby from its perspective? (From a practical standpoint, if you’re close to an airport you’re ‘close’ to a lot of places on Earth that you aren’t close to from a purely spatial perspective.)
7.
For past-impact measures, it’s not clear that their conceptual thrusts are well-aimed, even if we could formalize everything correctly. Past approaches focus either on minimizing physical change to some aspect of the world or on maintaining ability to reach many world states.
If there’s a limited amount of energy, then using energy limits the ability to reach many world states—perhaps in a different sense than above. If there’s a machine that can turn all pebbles into something else (obsidian, precious stones, etc.) but it takes a lot of energy, then using up energy limits the number of times it can be used. (This might seem quantifiable: moving the world* from containing 101 units of energy to 99 units has an effect on how many times the machine can be used, if it requires 100, or 10, units per use. But this isn’t robust against random factors decreasing energy (or increasing it), or against future improvements in the machine’s energy efficiency—if the cost is brought down to 1 unit of energy, then using up 2 units prevents it from being used twice.)
*Properly formalizing this should take a lot of other things into account, like ‘distant’ and notions of inaccessible regions of space, etc.
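To make the non-robustness concrete, here is a toy sketch (all quantities hypothetical, not from the post) of how a "remaining machine uses" measure shifts with the machine's efficiency, so the same 2-unit expenditure scores very differently:

```python
def machine_uses(energy_budget, cost_per_use):
    # How many times the pebble-converting machine can run on a given budget.
    return energy_budget // cost_per_use

# Spending 2 units against a 100-unit-per-use machine costs one whole use...
print(machine_uses(101, 100), machine_uses(99, 100))  # 1 0

# ...but if efficiency improves to 1 unit per use, the same 2 units cost two uses.
print(machine_uses(101, 1), machine_uses(99, 1))      # 101 99
```

The point of the sketch is only that "impact" measured this way depends on facts (here, the machine's efficiency) that can change after the energy is spent.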
Also the agent might be concerned with flows rather than actions.* We have an intuitive notion that ‘building factories increases power’, but what about redirecting a river/stream/etc. with dams, or digging new paths for water to flow? What does the agent do if it unexpectedly gains power by some means, or realizes its paperclip machines can be used to move strawberries, or to make a copy of itself which is weaker but less constrained? Can the agent make a machine that makes paperclips, or make making paperclips easier?
*As a consequence of this being a more effective approach: thinking in terms of process makes certain improvements obvious. If you have a really long commute to work, you might wish you lived closer to your work. (You might also be aware that houses closer to your work are more expensive, but humans are good at picking up on this kind of low-hanging fruit.) A capable agent that thinks about process, and so sees ‘opportunities to gain power’, is of some general concern. This matters here because an agent that tries to minimize reducing/affecting** other agents’ attainable utility, without knowing (or needing to know) about other agents, is somewhat counterintuitive.
**It’s not clear whether increasing (rather than decreasing) attainable utility shows up on the AUP map, or how that’s handled.
8.
Therefore, I consider AUP to conceptually be a solution to impact measurement.
Wait! Let’s not get ahead of ourselves! I don’t think we’ve fully bridged the concept/execution gap.
However, for AUP it seems possible—more on that later.
I appreciate this distinction being made. A post that explains the intuitions behind an approach is very useful, and my questions about the approach may largely relate to implementation details.
9.
AUP aims to prevent catastrophes by stopping bad agents from gaining power to do bad things, but it symmetrically impedes otherwise-good agents.
A number of my comments above were anticipated then.
CCC says (for non-evil goals) “if the optimal policy is catastrophic, then it’s because of power-seeking”. So its contrapositive is indeed as stated.
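Unlike the denied-antecedent reading above, the contrapositive really is equivalent to the original implication, which an exhaustive check (a toy sketch, not from the post) confirms:

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is false only when p is true and q is false.
    return (not p) or q

# "catastrophic -> power-seeking" versus its contrapositive
# "not power-seeking -> not catastrophic": identical under every assignment.
equivalent = all(
    implies(cat, power) == implies(not power, not cat)
    for cat, power in product([True, False], repeat=2)
)
print(equivalent)  # True
```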
Note that preserving our attainable utilities isn’t a good thing, it’s just not a bad thing.
I meant “preserving” as in “not incentivized to take away power from us”, not “keeps us from benefitting from anything”, but you’re right about the implication as stated. Sorry for the ambiguity.
Is this a metaphor for making an ‘agent’ with that goal, or actually creating an agent that we can give different commands to and switch out/modify/add to its goals?
Metaphor.
“AUP_conceptual solves this “locality” problem by regularizing the agent’s impact on the nearby AU landscape.”
Nearby from its perspective? (From a practical standpoint, if you’re close to an airport you’re ‘close’ to a lot of places on Earth that you aren’t close to from a purely spatial perspective.)
Nearby wrt this kind of “AU distance/practical perspective”, yes. Great catch.
Also the agent might be concerned with flows rather than actions.* We have an intuitive notion that ‘building factories increases power’, but what about redirecting a river/stream/etc. with dams, or digging new paths for water to flow? What does the agent do if it unexpectedly gains power by some means, or realizes its paperclip machines can be used to move strawberries, or to make a copy of itself which is weaker but less constrained? Can the agent make a machine that makes paperclips, or make making paperclips easier?
*As a consequence of this being a more effective approach: thinking in terms of process makes certain improvements obvious. If you have a really long commute to work, you might wish you lived closer to your work. (You might also be aware that houses closer to your work are more expensive, but humans are good at picking up on this kind of low-hanging fruit.) A capable agent that thinks about process, and so sees ‘opportunities to gain power’, is of some general concern. This matters here because an agent that tries to minimize reducing/affecting** other agents’ attainable utility, without knowing (or needing to know) about other agents, is somewhat counterintuitive.
**It’s not clear whether increasing (rather than decreasing) attainable utility shows up on the AUP map, or how that’s handled.
Great thoughts. I think some of this will be answered in a few posts by the specific implementation details. What do you mean by “AUP map”? The AU landscape?
What does the agent do if it unexpectedly gains power by some means,
The idea is it only penalizes expected power gain.
CCC says (for non-evil goals) “if the optimal policy is catastrophic, then it’s because of power-seeking”. So its contrapositive is indeed as stated.
That makes sense. One of the things I like about this approach is that it isn’t immediately clear what else could be a problem, and what remains might just be implementation details or parameters: corrigibility from limited power only works if we make sure that power is low enough that we can turn it off; if the agent will acquire power when that’s the only way to achieve its goal, rather than stopping at or before some limit, then it might still acquire power and be catastrophic*; etc.
*Unless power seeking behavior is the cause of catastrophe, rather than having power.
Sorry for the ambiguity.
It wasn’t ambiguous; I meant to gesture at stuff like ‘astronomical waste’ (and waste on smaller scales)—areas where we do want resources to be used. This was addressed at the end of your post already:
So we can hope to build a non-catastrophic AUP agent and get useful work out of it. We just can’t directly ask it to solve all of our problems: it doesn’t make much sense to speak of a “low-impact singleton”.
-but I wanted to highlight the area where we might want powerful aligned agents, rather than AUP agents that don’t seek power.
What do you mean by “AUP map”? The AU landscape?
That is what I meant originally, though upon reflection a small distinction could be made:
Territory: AU landscape*
Map: AUP map (an AUP agent’s model of the landscape)
*Whether or not this is thought of as ‘territory’ or a ‘map’, conceptually AUP agents will navigate (and/or create) a map of the AU landscape. (If the AU landscape is itself a map, then AUP agents may navigate a map of a map. There also might be better ways this distinction could be made; for example, the AU landscape could be a style/type of map, just as there are maps of elevation and topography.)
The idea is it only penalizes expected power gain.
Gurkenglas previously commented that they didn’t think AUP solved ‘agent learns how to convince people/agents to do things’. While it’s not immediately clear how an agent could happen to find out how to convince humans of anything (the super-intelligent persuader), if an agent obtained that power, its continuing to operate could constitute a risk. (Though further up this comment I brought up the possibility that “power seeking behavior is the cause of catastrophe, rather than having power.” This doesn’t seem likely in its entirety, but seems possible in part—that is, powerful but not power-seeking might not be as dangerous as powerful and power-seeking.)
if we make sure that power is low enough that we can turn it off; if the agent will acquire power when that’s the only way to achieve its goal, rather than stopping at or before some limit, then it might still acquire power and be catastrophic*; etc.
Yeah. I have the math for this kind of tradeoff worked out—stay tuned!
Though further up this comment I brought up the possibility that “power seeking behavior is the cause of catastrophe, rather than having power.”
I think this is true, actually; if another agent already has a lot of power and it isn’t already catastrophic for us, their continued existence isn’t that big of a deal wrt the status quo. The bad stuff comes with the change in who has power.
The act of taking away our power is generally only incentivized so the agent can become better able to achieve its own goal. The question is, why is the agent trying to convince us of something / get someone else to do something catastrophic, if the agent isn’t trying to increase its own AU?