One way I imagine that would work for me is if the AI explained with sufficient persuasion that there simply isn’t anything more meaningful for me to do than to play games. If there actually is something more meaningful for people to do, then the AI should probably let people do that.
An AI could persuade you to become a kangaroo; that makes persuadability a broken criterion for decision-making.
I am skeptical that rationality and exponentially greater-than-human intelligence actually confer this power.
It doesn’t matter whether it does or not; the fact that you can conceive of situations where persuadability would fail as a criterion immediately means it fails.
Well, that was the big controversy over the AI Box experiments, so no need to rehash all that here.
This is a category error. Meaningfulness is in your mind and in intersubjective constructions, not in the objective world. There is no fact of the matter for the AI to explain to you.