Maybe the best thing to use here is just the same definition as I gave for outer alignment—I’ll change it to reference that instead.
Aren’t they now defined in terms of each other?
“Intent alignment: An agent is intent aligned if its behavioral objective is outer aligned.
Outer alignment: An objective function r is outer aligned if all models that perform optimally on r in the limit of perfect training and infinite data are intent aligned.”
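Schematically (the predicate and function names below are illustrative, not part of the original exchange), the two quoted definitions point at each other, so unfolding either one never bottoms out in an independent notion of being aligned with humans:

\[
\begin{aligned}
\textit{IntentAligned}(A) \;&\iff\; \textit{OuterAligned}\big(\mathrm{beh}(A)\big) \\
\textit{OuterAligned}(r) \;&\iff\; \forall M.\ \mathrm{Optimal}(M, r) \;\Rightarrow\; \textit{IntentAligned}(M)
\end{aligned}
\]

Here $\mathrm{beh}(A)$ stands for $A$'s behavioral objective and $\mathrm{Optimal}(M, r)$ for "$M$ performs optimally on $r$ in the limit of perfect training and infinite data."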
Good point—and I think that the reference to intent alignment is an important part of outer alignment, so I don’t want to change that definition. I further tweaked the intent alignment definition a bit to just reference optimal policies rather than outer alignment.
Cool (though FWIW, if you’re going to lean on the notion of policies being aligned with humans, I’d be inclined to define that as well, in addition to defining what it is for agents to be aligned with humans. But maybe the implied definition is clear enough: I’m assuming you have in mind something like “a policy is aligned with humans if an agent implementing that policy is aligned with humans.”).
Regardless, sounds like your definition is pretty similar to: “An agent is intent aligned if its behavioral objective is such that an arbitrarily powerful and competent agent pursuing this objective to arbitrary extremes wouldn’t act in ways that humans judge bad”? If you see it as importantly different from this, I’d be curious.