I read "Reward is not the optimisation target" as a result of your article. (It was a link in the 3rd bullet point, under the Assumptions section.) I downvoted that article and upvoted several people who were critical of it.
Near the top of the responses was this quote.
… If this agent is smart/reflective enough to model/predict the future effects of its RL updates, then you already are assuming a model-based agent which will then predict higher future reward by going for the blueberry. You seem to be assuming the bizarre combination of model-based predictive capability for future reward gradient updates but not future reward itself. Any sensible model-based agent would go for the blueberry absent some other considerations. …
Emphasis mine.
I tend to be suspicious of people who insist their assumptions are valid without being willing to point to work that proves the hypothesis.
In the end, your proposal has a test plan. Do the test, show the results. My prediction is that your theory will not be supported by the test results, but if you show your work, and it runs counter to my current model and predictions, then you could sway me. But not until then, given the assumptions you made and the assumptions you’re importing via related theories. Until you have test results, I’ll remain skeptical.
Don’t get me wrong, I applaud the intent behind searching for an alignment solution. I don’t have a solution or even a working hypothesis. I don’t agree with everything in this article (that I’m about to link), but it relates to something I’ve been thinking for a while—that it’s unsafe to abstract away the messiness of humanity in pursuit of alignment. That humans are not aligned with one another, and that trying to create alignment where none exists naturally is inherently problematic.
You might argue that humans cope with misalignment, and that that’s our “alignment goal” for AI… but I would propose that humans cope due to power imbalance, and that the adage “power corrupts, and absolute power corrupts absolutely” has relevance—or said another way, if you want to know the true nature of a person, give them power over another and observe their actions.
[I’m not anthropomorphizing the AI. I’m merely saying that if one intelligence (humans) can display this behavior, and deceptive behaviors can be observed in less intelligent entities, then an intelligence of similar level to a human might possess similar traits. Not as a certainty, but as a non-negligible possibility.]
If the AI is deceptive so long as humans maintain power over it, and then behaves differently when that power imbalance changes, that’s not “the alignment solution” we’re looking for.