That all sounds right to me. So do you now agree that it’s not obvious whether values-executors are more promising than grader-optimizers?
Obvious? No. (It definitely wasn’t obvious to me!) It just seems more promising to me on balance given the considerations we’ve discussed.
If we had mastery over cognitive interpretability, building a grader-optimizer wouldn't yield an agent that really stably pursues what we want. It would yield an agent that really wants to pursue grader evaluations, plus an external restraint to prevent the agent from deceiving us ("during deployment we could identify why the actor thinks a plan would be highly grader-evaluated"). Both of those are required at runtime in order to safely get useful work out of the system as a whole. The restraint is a critical point of failure which we are relying on ourselves/the operator to actively maintain. The agent under restraint doesn't positively want that restraint to remain in place and not fail; the agent isn't directing its cognitive horsepower towards ensuring that its own thoughts run along the tracks we intended them to. It's safe, but only in a contingent way that seems unnecessarily unstable to me.
If we had that level of mastery over cognitive interpretability, I don't understand why we wouldn't use that tech to directly shape the agent to want what we want it to want. And I think I'd say basically the same thing even at lesser levels of mastery over the tech.
When I say we have to accurately evaluate plans, I just mean that our evaluations need to be correct; I don’t mean that they have to be based on a prediction of the consequences of the plan.
I do agree that cognitive interpretability/decoding inner thoughts/mechanistic explanation is a primary candidate for how we can successfully accurately evaluate plans.
Cool, yes I agree. When we need assurances about a particular plan that the agent has made, that seems like a good way to go. I also suspect that at a certain level of mechanistic understanding of how the agent’s cognition is developing over training & what motivations control its decision-making, it won’t be strictly required for us to continue evaluating individual plans. But that, I’m not too confident about.
Sure, that all sounds reasonable to me; I think we’ve basically converged.
I don’t understand why we wouldn’t use that tech to directly shape the agent to want what we want it to want.
The main reason is that we ourselves don't know what we want it to want, and we would instead like it to follow some process that we like (e.g. just do some scientific innovation and nothing else, help us do better philosophy to figure out what we want, etc.). This sort of task seems like a poor fit for values-executors. Probably there will be some third, totally different mental architecture for such tasks, but if you forced me to choose between values-executors and grader-optimizers, I'd currently go with grader-optimizers.