I don’t really disagree with any of what you’re saying but I also don’t see why it matters.
I consider myself to be saying “you can’t just abstract this system as ‘trying to make evaluations come out high’; the dynamics really do matter, and considering the situation in more detail does change the conclusions.”
I’m on board with the first part of this, but I still don’t see the part where it changes any conclusions. From my perspective your responses are of the form “well, no, your abstract argument neglects X, Y and Z details” rather than explaining how X, Y and Z details change the overall picture.
For example, in the above comment you’re talking about how the planner will be different if the shards are different, because the historical reinforcement-events would be different. I agree with that. But then it seems like if you want to argue that one is safer than the other, you have to talk about the historical reinforcement-events and how they arose, whereas all of your discussion of grader-optimizers vs values-executors doesn’t talk about the historical reinforcement-events at all, and instead talks about the motivational architecture while screening off the historical reinforcement-events.
(Indeed, my original comment was specifically asking about what your story was for the historical reinforcement-events for values-executors: “Certainly I agree that if you successfully instill good values into your AI system, you have defused the risk argument above. But how did you do that? Why didn’t we instead get “almost-value-child”, who (say) values doing challenging things that require hard work, and so enrolls in harder and harder courses and gets worse and worse grades?”)
I don’t really disagree with any of what you’re saying but I also don’t see why it matters. … Indeed, my original comment was specifically asking about what your story was for the historical reinforcement-events for values-executors
I was pretty surprised by the values-executor pseudocode in Appendix B, because it seems like a bog-standard consequentialist which I would have thought you’d consider as a grader-optimizer. In particular you can think of the pseudocode as follows:
Grader-optimizer: planModificationSample + the for loop that keeps improving the plan based on proposed modifications
If you agree that [planModificationSample + the for loop] is a grader-optimizer, why isn’t this an example of an alignment approach involving a grader-optimizer that could plausibly work?
If you don’t agree that [planModificationSample + the for loop] is a grader-optimizer, then why not, and what modification would you have to make in order to make it a grader-optimizer with the grader self.diamondShard(self.WM.getConseq(plan))?
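For concreteness, a minimal sketch of the structure these questions are about might look like the following. The names planModificationSample, diamondShard, and WM.getConseq are taken from the pseudocode being discussed; the class wrapper, constructor arguments, and loop details are illustrative assumptions, not the actual Appendix B code.

```python
# Minimal sketch, NOT the actual Appendix B pseudocode: it only reproduces the
# structure under discussion (planModificationSample plus a for loop that keeps
# modifications which raise self.diamondShard(self.WM.getConseq(plan))).
# The constructor arguments and loop details are illustrative assumptions.

class ValuesExecutorSketch:
    def __init__(self, world_model, diamond_shard, plan_modification_sampler):
        self.WM = world_model                     # predicts a plan's consequences
        self.diamondShard = diamond_shard         # scores predicted consequences
        self.planModificationSample = plan_modification_sampler  # proposes plan edits

    def improve_plan(self, plan, n_steps=100):
        # "The for loop that keeps improving the plan based on proposed
        # modifications": keep a candidate edit iff the diamond shard rates the
        # predicted consequences of the edited plan more highly.
        best_score = self.diamondShard(self.WM.getConseq(plan))
        for _ in range(n_steps):
            candidate = self.planModificationSample(plan)
            score = self.diamondShard(self.WM.getConseq(candidate))
            if score > best_score:
                plan, best_score = candidate, score
        return plan
```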
You also said:
I saw that and I don’t understand why it rules out planModificationSample + the associated for loop as a grader-optimizer. Given your pseudocode it seems like the only point of planModificationSample is to produce plan modifications that lead to high outputs of self.diamondShard(self.WM.getConseq(plan)). So why is that not “optimizing the outputs of the grader as its main terminal motivation”?
And now, it seems like we agree that the pseudocode I gave isn’t a grader-optimizer for the grader self.diamondShard(self.WM.getConseq(plan)), and that e.g. approval-directed agents are grader-optimizers for some idealized function of human-approval? That seems like a substantial resolution of disagreement, no?
Sounds like we mostly disagree on the cumulative effort required to (get a grader-optimizer to do good things) vs (get a values-executing agent to do good things).
We probably perceive the difficulty as follows:
1. Getting the target configuration into an agent
   1a. Grader-optimization (Alex: Very very hard; Rohin: Hard)
   1b. Values-executing (Alex: Moderate/hard; Rohin: Hard)
2. Aligning the target configuration such that good things happen (e.g. makes diamonds), conditional on the intended cognitive patterns being instilled to begin with (step 1)
   2a. Grader-optimization (Alex: Extremely hard; Rohin: Very hard)
   2b. Values-executing (Alex: Hard; Rohin: Hard)
Does this seem reasonable? We would then mostly disagree on relative difficulty of 1a vs 1b.
Separately, I apologize for having given an incorrect answer earlier, which you then adopted, and then I berated you for adopting my own incorrect answer—how simplistic of you! Urgh.
I had said:
and what modification would you have to make in order to make it a grader-optimizer with the grader self.diamondShard(self.WM.getConseq(plan))?
Oh, I would change self.diamondShard to self.diamondShardShard?
But I should also have mentioned the change in planModificationSample. Sorry about that.
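For concreteness, the two changes being alluded to might be sketched as follows. Here diamondShardShard (a shard that terminally values the grader outputting a high number) and the modified sampler are illustrative placeholders for the described modification, not actual code from the post or comments, and every signature is an assumption made for the sketch.

```python
# Illustrative placeholders only: this restates the two described changes
# (diamondShard -> diamondShardShard, plus a changed planModificationSample);
# all signatures here are assumptions made for the sketch.

class GraderOptimizerSketch:
    def __init__(self, world_model, diamond_shard, diamond_shard_shard, sampler):
        self.WM = world_model
        self.diamondShard = diamond_shard             # the grader being targeted
        self.diamondShardShard = diamond_shard_shard  # values "the grader outputs a high number"
        self.planModificationSample = sampler         # change 2: proposes edits aimed at
                                                      # raising the grader's output

    def grader(self, plan):
        # The grader whose output the modified agent terminally cares about.
        return self.diamondShard(self.WM.getConseq(plan))

    def evaluate_candidate(self, plan):
        # Change 1: score plans by diamondShardShard's assessment of how high the
        # grader's output will be (which can favor plans that fool or tamper with
        # the grader), instead of by diamondShard's assessment of the consequences.
        return self.diamondShardShard(self.grader, plan)
```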
And now, it seems like we agree that the pseudocode I gave isn’t a grader-optimizer for the grader self.diamondShard(self.WM.getConseq(plan)), and that e.g. approval-directed agents are grader-optimizers for some idealized function of human-approval? That seems like a substantial resolution of disagreement, no?
I don’t think I agree with this.
At a high level, your argument can be thought of as having two steps:
1. Grader-optimizers are bad, because of problem P.
2. Approval-directed agents / [things built by IDA, debate, RRM] are grader-optimizers.
I’ve been trying to resolve disagreement along one of two pathways:
Collapse the argument into a single statement “approval-directed agents are bad because of problem P”, and try to argue about that statement. (Strategy in the previous comment thread, specifically by arguing that problem P also applied to other approaches.)
Understand what you mean by grader-optimizers, and then figure out which of the two steps of your argument I disagree with, so that we can focus on that subclaim instead. (Strategy for most of this comment thread.)
Unfortunately, I don’t think I have a sufficient definition (intensional or extensional) of grader-optimizers to say which of the two steps I disagree with. I don’t have a coherent concept in my head that says your pseudocode isn’t a grader-optimizer and approval-directed agents are grader-optimizers. (The closest is the “grader is complicit” thing, which I think probably could be made coherent, but it would say that your pseudocode isn’t a grader-optimizer and is agnostic / requires more details for approval-directed agents.)
In my previous comment I switched back from strategy 2 to strategy 1 since that seemed more relevant to your response, but I should have signposted that more clearly; sorry about that.
But how did you do that? Why didn’t we instead get “almost-value-child”, who (say) values doing challenging things that require hard work, and so enrolls in harder and harder courses and gets worse and worse grades?
I think the truth is even worse than that: deceptively aligned value-children are the default scenario without myopia, and with solely causal decision theory or variants of it, so TurnTrout either has good arguments for why deceptive alignment isn’t favored, or hopes that it is defined away.
For reasons why deceptive alignment might be favored, I have a link below:
https://www.lesswrong.com/posts/A9NxPTwbw6r6Awuwt/how-likely-is-deceptive-alignment