I like this theory of human motivation. My two main points: there could be many almost-consistent theories of motivation, and choosing between them is difficult when we want to upload them into an AI.
Not everything that looks like a human value is actually valuable in the sense that a future AI should care about it. For example, in one model, human motivation consists of "animal" desires and socially accepted rules. If an AI learns my preferences, I would prefer that it learn my rules, but not my animal desires. I wrote more about criticism of the idea of human values here.