So you start with an AI with an external reward structure that the AI is trained to follow. And the AI is incentivized to take over its reward structure.
Then you talk about “solving” this by having multiple possible rewards and the AI asking humans which one is best. Well, OK, but if this solves it, the solution has to do with how the AI (as opposed to the reward structure) now works, which you don’t explain at all. That is, if the AI still works the same way, trained by the external reward structure, where does the decision to ask the humans come from?
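To make the objection concrete, here is a minimal sketch of the setup as I understand it from the video; every name (ask_human, train_policy, etc.) is a hypothetical placeholder, not anything from the video or a real library. The thing to notice is that the “ask a human which reward is best” step is hard-coded in the outer loop by the designers; nothing in the trained policy itself decides to ask.

```python
# Minimal sketch of the setup being criticized (all names hypothetical).
# The "ask a human which reward is best" step lives in the outer training
# loop; the trained policy never chooses to ask anything.

from typing import Callable, List

State = int
Reward = Callable[[State], float]


def ask_human(candidates: List[Reward]) -> Reward:
    """Stand-in for querying a human about which candidate reward is best."""
    return candidates[0]  # placeholder: pretend the human picks the first


def train_policy(reward: Reward) -> Callable[[State], int]:
    """Stand-in for ordinary external-reward training of a policy."""
    best_action = max(range(3), key=lambda a: reward(a))  # trivial "training"
    return lambda state: best_action


# Multiple candidate reward functions the designers are unsure between.
candidate_rewards: List[Reward] = [
    lambda s: float(s),   # e.g. "more of s is better"
    lambda s: float(-s),  # e.g. "less of s is better"
]

chosen = ask_human(candidate_rewards)  # the querying happens *here*, in the
policy = train_policy(chosen)          # outer loop, not inside the AI
print(policy(0))
```

If that is roughly the picture, then the querying behavior belongs to the training harness, not to the AI, which is exactly why I don’t see how it answers the original wireheading worry.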
But more than this, the background assumptions seem very concerning. Some particular issues IMO:
1. Hedonic (as opposed to preference-based) utility is a bad idea. Once the AI is powerful, this will reliably lead to it killing all humans and replacing them with hedonium, or with hedonium given rules-lawyered characteristics so that it barely counts as “human”, or whatever.
2. The AI needs to not care about the future directly, only via human preferences that care about the future. Otherwise, it will still create utility monsters (just preference-based now instead of hedonium-based).
3. At least because of (2), but probably for many other reasons as well (including wireheading, the original problem raised in the video), you probably can’t make an aligned AI by training it on external rewards.
One possible alternative to external reward training (roughly sketched in code after this list) would be to:
First, train a world-modeling non-agent AI (using e.g. prediction of future inputs) to understand enough about the world that it also understands the plan-scoring function you will later want it to follow.
Then, agentify it by adding a decision-making procedure that points to the concept of the scoring function in the world model it learned earlier, using the world model to generate candidate plans and selecting whichever scores highest under that function.
Note that the score function should not care about the future directly. For example, it could be something like: if I follow this plan, how well do this entire process and the resulting outcome align with averaged informed/extrapolated human preferences as of the time the plan is adopted (not any future time)?
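Here is the promised rough sketch of that two-phase setup, under the assumptions above; every name (WorldModel, Agent, propose_plans, score_concept, ...) is an illustrative placeholder, not a real library or a claim about how this would actually be built.

```python
# Hypothetical sketch of the two-phase alternative described above.
# All names are illustrative placeholders, not a real API.

from dataclasses import dataclass
from typing import Callable, Iterable, List


@dataclass
class Plan:
    """A candidate course of action produced by the world model."""
    actions: List[str]


class WorldModel:
    """Phase 1: a non-agentic predictive model.

    Trained only to predict future inputs; the assumption is that along
    the way it learns world concepts, including the concept of the
    plan-scoring function we will later point the agent at.
    """

    def train_on_prediction(self, observations: Iterable[str]) -> None:
        # Stand-in for self-supervised next-input prediction training.
        self.knowledge = list(observations)

    def propose_plans(self, goal_context: str, n: int = 3) -> List[Plan]:
        # Stand-in for using the learned model to generate candidate plans.
        return [Plan(actions=[f"{goal_context}: candidate {i}"]) for i in range(n)]

    def score_concept(self) -> Callable[[Plan], float]:
        """Return the model's learned *concept* of the scoring function.

        The score asks: if this plan is followed, how well do the whole
        process and its outcome align with informed/extrapolated human
        preferences as of the moment the plan is adopted? It does not
        value any future state directly.
        """
        def score(plan: Plan) -> float:
            # Placeholder heuristic; a real version would query the world
            # model's internal representation of present-time preferences.
            return -float(len(plan.actions))
        return score


class Agent:
    """Phase 2: a thin decision procedure wrapped around the world model.

    The agent is never trained on an external reward signal; it only
    generates plans with the world model and picks whichever the model's
    own scoring concept rates highest.
    """

    def __init__(self, world_model: WorldModel) -> None:
        self.wm = world_model
        self.score = world_model.score_concept()

    def choose_plan(self, goal_context: str) -> Plan:
        candidates = self.wm.propose_plans(goal_context)
        return max(candidates, key=self.score)


# Usage
wm = WorldModel()
wm.train_on_prediction(["obs_1", "obs_2", "obs_3"])  # phase 1: pure prediction
agent = Agent(wm)                                    # phase 2: agentify
print(agent.choose_plan("tidy the lab").actions)
```

The point of the structure is that the scoring concept lives inside the learned world model and is evaluated against preferences at decision time, so there is no external reward channel for the agent to seize and no direct stake in engineering future preferences.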