The problem is that I don’t see much evidence that Mr. Loosemore is correct. I can quite easily conceive of a superhuman intelligence that was built with the specification of “human pleasure = brain dopamine levels”, not least of all because there are people who’d want to be wireheads and there’s a massive amount of physiological research showing human pleasure to be caused by dopamine levels.
I don’t think Loosemore was addressing deliberately unfriendly AI, and for that matter EY hasn’t been either.
Both are addressing intentionally friendly or neutral AI that goes wrong.
I can quite easily conceive of a superhuman intelligence that knows humans prefer more complicated enjoyment, and even do complex modeling of how it would have to manipulate people away from those more complicated enjoyments, and still have that superhuman intelligence not care.
I don’t think Loosemore was addressing deliberately unfriendly AI, and for that matter EY hasn’t been either. Both are addressing intentionally friendly or neutral AI that goes wrong.
Wouldn’t it care about getting things right?