Hm… not obviously so. Any reductionist explanation of happiness from any source is going to end up mentioning hormones & chemicals in the brain, but it doesn’t follow that wanting happiness (& hence wanting the attendant chemicals) = wanting to wirehead.
I struggle to articulate my objection to wireheading, but it has something to do with the shallowness of pleasure that is totally non-contingent on my actions and thoughts. It is definitely not about some false dichotomy between “natural” and “artificial” happiness; after all, Nature doesn’t have a clue what the difference between them is (nor do I).
Certainly not, but we do need to understand utility functions and their modification; if we don't, then bad things might happen. For example (I steal this example from EY), a 'FAI' might decide to be Friendly by rewiring our brains to simply be really, really happy no matter what, and then paperclip the rest of the universe. To most people this would be a bad outcome, which is an intuitive argument that there are good and bad kinds of happiness, and that the distinction probably has something to do with properties of the external world.