That said, I’m still unsure how one could guarantee that the AI could not hack its own “human affect detector” to make its task trivial, e.g. by forcing smiles onto everyone’s faces under torture and then defining torture as the preferred human activity.
That’s a valid question, but note that it’s a different question from the one this model is addressing. (This model asks “what are human values, and what do we want the AI to do with them?”; your question here is “how can we prevent the AI from wireheading itself in a way that stops it from doing the things we want it to do?” “What” versus “how”.)
I endorse this comment.