Are we actually optimizing for “subjective happiness”? That’s the wireheading scenario. I would say that wireheading humans seems better than killing humans and creating a wireheaded machine, but… both scenarios seem suboptimal.
And if you instead want to make a machine that is much better at “human values” (not just “subjective happiness”) than humans are… I guess the tricky part is actually building a machine that is good at human values.