Thanks for taking time to answer my questions in detail!
About your example for other failure modes
Is it meant to point at the actor’s ability to make the plan more confusing/harder to evaluate? That is, are you pointing at the actor’s ability to “obfuscate” its plan in order to get high reward?
If so, it’s not clear to me why this would be valuable for the actor to do. How is it supposed to get a better reward from confusion alone? If it had another agenda (making paperclips instead of diamonds, for example), then obfuscation would clearly be valuable, letting it work on its main goal. But here its goal is to improve the evaluation, and so confusion doesn’t seem like it helps.
About the cost/competitiveness argument
I think that a values-executing AGI can also search over as many plans which actually make sense, I don’t think its options are limited or anything. But it’ll be generating different kinds of plans, using reflective reasoning to restrict its search to non-adversarial-to-own-values parts of plan space (e.g. “don’t think about basilisks”).
This is the part where I currently cannot reconstruct your model (maybe because I haven’t read shard theory in a while). From an abstract perspective, the tricky part of starting from a significantly more limited set of plans is how to expand the range of plans without letting adversarial ones in. And I don’t have a model of what mechanism you think makes it easier to go bottom-up (from few plans accepted to more plans accepted) safely, rather than top-down (from all plans accepted to fewer plans accepted) safely.
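To make sure I’m even framing the contrast correctly, here is a minimal toy sketch of the two directions as I understand them (everything here, including the names `is_adversarial` and `expand`, is mine and purely illustrative, not a claim about your actual proposal):

```python
# Toy sketch, not anyone's proposed design: the point is only where the
# safety-relevant judgment lives in each direction.

def top_down_search(all_plans, is_adversarial, score):
    """Start from the whole plan space and filter adversarial plans out.
    The burden is on `is_adversarial` to catch every exploit."""
    safe_plans = [plan for plan in all_plans if not is_adversarial(plan)]
    return max(safe_plans, key=score)

def bottom_up_search(seed_generators, expand, score):
    """Start from a narrow set of plan generators taken to be non-adversarial,
    then cautiously widen it. The burden is on `expand` to admit only
    generators whose plans stay non-adversarial."""
    generators = list(seed_generators)
    generators += expand(generators)  # the step I don't know how to make safe
    candidate_plans = [plan for g in generators for plan in g()]
    return max(candidate_plans, key=score)
```

In both sketches the load-bearing piece is a boundary judgment (`is_adversarial` in one, `expand` in the other), and I don’t yet see which part of your model makes the second boundary easier to get right than the first.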
About the “negative result” of not even trying to do grader optimization
I don’t see why that should exist; any plan-inhabiting adversary wishes to fool the boundary of whatever rule you provide.
See the last 10+years of alignment researchers failing to do this. Probably wise to not spend further increments of research time on such matters, once the fault is pointed out.
But why don’t you see that it should exist? What parts of your model make this prediction? Because by definition, there is not a single innovation or piece of progress that most people expected to exist before it was found; that’s what it means to figure out something new. So you need an actual reason to think you’re not falling into the trap of only being able to imagine what already exists.
Here are some guesses:
A specific scientific claim that there is no true name/clean concept of what counts as adversarial for a large class of cognitive architectures (in opposition to the Natural Abstraction Hypothesis, I guess?)
A human-limitation claim that even though there might be such a true name/clean concept, we have neither the time nor the ability to get it right under current conditions.
A security-mindset claim that because the actor is allowed to get arbitrarily smart, you should expect that if it optimizes against you, it wins.
For the second point, what do you make of the claim that ridiculously few researchers have even attempted to work on this problem in the last 10 years, compared to almost any easier field of knowledge? Do you think that this shouldn’t weaken the evidence of absence you get from the absence of evidence?
Is it meant to point at the actor’s ability to make the plan more confusing/harder to evaluate? That is, are you pointing at the actor’s ability to “obfuscate” its plan in order to get high reward?
No, the point is that the grader can only grade the current plan; it doesn’t automatically know what its counterfactual invocations would output. The grader is scope-limited to its current invocation. This makes consistent grading harder (e.g. for the soup-kitchen plan vs. the political-activism plan, neither invocation knows what grade the other call to the grader would assign, so they can’t trivially agree on a consistent scale).
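To illustrate what I mean by “scope-limited” (a toy sketch with made-up numbers, not how any real grader works):

```python
# Toy sketch: each call to the grader sees exactly one plan and has no access
# to what it did or would output on any other plan, so nothing enforces a
# consistent scale across calls.

def grade(plan: str) -> float:
    """One invocation grades one plan, with no memory of other invocations."""
    if plan == "run a soup kitchen":
        # Whatever considerations happen to be salient in *this* call.
        return 8.0
    if plan == "organize political activism":
        # A separate call with its own implicit scale; nothing ties this
        # number to the 8.0 above.
        return 7.5
    return 0.0

# The actor's comparison inherits whatever inconsistency the per-call scales had.
best = max(["run a soup kitchen", "organize political activism"], key=grade)
print(best)
```

Consistency across plans has to come from somewhere extra; a single invocation never sees the counterfactual grades it would have handed out on other plans.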