I’m not sure you’ve gotten ALBA quite right here, and I think that causes a problem for your objection. Relevant writeups: most recent and original ALBA.
As I understand it, ALBA proposes the following process:
1. H trains A to choose actions that would get the best immediate feedback from H. A is benign (assuming that H could give not-catastrophic immediate feedback for all actions and that the learning process is robust). H defines the feedback, and so A doesn’t make decisions that are more effective at anything than H is; A is just faster.
2. A (and possibly H) is used to define a slow process A+ that makes “better” decisions than A or H would. (Better is in quotes because we don’t have a definition of better; the best anyone knows how to do right now is look at the amplification process and say “yep, that should do better.”) Maybe H uses A as an assistant, maybe a copy of A breaks down a decision problem into parts and hands them off to other copies of A, maybe A makes decisions that guide a much larger cognitive process.
3. The whole loop starts over with A+ used as H.
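To make the shape of this loop concrete, here is a minimal Python sketch of the three steps as I’ve summarised them. The names `distill`, `amplify`, and `Overseer` are placeholders of my own, not anything from Paul’s writeups; the real proposal is about the training setup, not literal code.

```python
from typing import Callable

# Placeholder type (illustrative only): an "overseer" is anything that can
# evaluate a proposed action and return immediate feedback as a score.
Overseer = Callable[[str], float]

def alba_loop(
    initial_overseer: Overseer,
    distill: Callable[[Overseer], Overseer],
    amplify: Callable[[Overseer, Overseer], Overseer],
    num_rounds: int,
) -> Overseer:
    """Run the distill/amplify loop sketched above.

    Each round: train a fast agent A against the current overseer's
    immediate feedback, build a slower amplified system A+ out of the
    overseer and A, then use A+ as the overseer for the next round.
    """
    overseer = initial_overseer  # H: initially the human
    for _ in range(num_rounds):
        agent = distill(overseer)            # step 1: A is no smarter than H, just faster
        overseer = amplify(overseer, agent)  # steps 2-3: A+ becomes the new H
    return overseer
```

The point of the sketch is just that any “better than H” behaviour has to come from the amplify step; the distillation step never adds capability of its own.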
The claim is that step 2 produces a system that is able to give “better” feedback than the human could—feedback that considers more consequences more accurately in more complex decision situations, that has spent more effort introspecting, etc. This should make it able to handle circumstances further and further outside human-ordinary, eventually scaling up to extraordinary circumstances. So, while you say that the best case to hope for is r_i → r, it seems like ALBA is claiming to do more.
A second objection is that while you call each r_i a “reward function”, each system is only trained to take actions that maximize the very next reward it gets (not the sum of future rewards). This means that each system is only effective at anything insofar as the feedback function it’s maximizing at each step considers the long-term consequences of each action. So, if r_i → r, we don’t have reason to think that the system will be competent at anything outside of the “normal circumstances + a few exceptions” you describe—all of its planning power comes from r_i, so we should expect it to be basically incompetent where r_i is incompetent.
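To spell out that second point in notation (mine, not from the post or the ALBA writeups): each distilled system is trained on a myopic objective, so any long-horizon competence has to already be encoded in r_i rather than coming from the learner’s own planning.

```latex
% Myopic objective each distilled system is actually trained on:
a_t \;=\; \arg\max_{a}\; \mathbb{E}\!\left[ r_i(s_t, a) \right]

% versus the long-horizon objective it is *not* trained on:
\pi^{*} \;=\; \arg\max_{\pi}\; \mathbb{E}_{\pi}\!\left[ \sum_{t \ge 0} \gamma^{t}\, r(s_t, a_t) \right]
```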
> that is able to give “better” feedback than the human could – feedback that considers more consequences more accurately in more complex decision situations, that has spent more effort introspecting
This is roughly how I would run ALBA in practice, and why I said it was better in practice than in theory. I’d be working with the considerations I mentioned in this post and trying to formalise how to extend utilities/rewards to new settings.
If I read Paul’s post correctly, ALBA is supposed to do this in theory—I don’t understand the theory/practice distinction you’re making.
I disagree. I’m arguing that the concept of “aligned at a certain capacity” makes little sense, and this is key to ALBA in theory.