It is definitely not about some false dichotomy between “natural” and “artificial” happiness; after all, Nature doesn’t have a clue what the difference between them is (nor do I).
Certainly not, but we do need to understand utility functions and their modification; if we don't, bad things might happen. For example (I steal this example from EY), a 'FAI' might decide to be Friendly by rewiring our brains to be really, really happy no matter what, and then paperclip the rest of the universe. To most people this would be a bad outcome, which is an intuitive argument that there are good and bad kinds of happiness, and that the distinctions probably have something to do with properties of the external world.
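To make the distinction concrete, here is a toy sketch (mine, not EY's; all names are invented for illustration). It contrasts a utility function read off an agent's internal happiness signal with one that also depends on a property of the external world: the wireheading action maximizes the first but not the second.

```python
from dataclasses import dataclass

@dataclass
class World:
    humans_flourishing: bool = True   # external property we care about
    happiness_signal: float = 0.5     # internal reward the brain reports

def internal_utility(w: World) -> float:
    # Utility defined over the internal signal alone: maximized by
    # pinning the signal to its ceiling ("wireheading"), no matter
    # what happens to the rest of the world.
    return w.happiness_signal

def external_utility(w: World) -> float:
    # Utility that also depends on the external world: rewiring the
    # signal is no longer sufficient.
    return w.happiness_signal if w.humans_flourishing else 0.0

# The "rewire everyone's brains and paperclip the rest" action: the
# signal is pinned at maximum while the external property is destroyed.
wireheaded = World(humans_flourishing=False, happiness_signal=1.0)
status_quo = World()

print(internal_utility(wireheaded) > internal_utility(status_quo))  # True: wireheading wins
print(external_utility(wireheaded) > external_utility(status_quo))  # False: it doesn't
```

Nothing deep is happening here; the point is just that whether wireheading counts as a "good" kind of happiness falls out of whether the utility function mentions the external world at all.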