I would personally be more concerned about an AI trying to make me deliriously happy no matter what methods it used.
Happiness is part of our cybernetic feedback mechanism. It’s designed to end once we’re on a particular course of action, just as pain ends when we act to prevent damage to ourselves. It’s not capable of being a permanent state, unless we drive our nervous system to such an extreme that we break its ability to adjust, and that would probably be lethal.
Any method of producing constant happiness ultimately turns out to be pretty much equivalent to heroin: your nervous system compensates until even extreme levels of the stimulus have no effect, the stimulated state becomes the new functional baseline, and the old equilibrium becomes excruciating agony for as long as the compensations remain. Addiction, and with it desensitization, is inevitable.
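To make the compensation dynamic concrete, here's a toy simulation in the spirit of the standard hedonic-adaptation story: felt happiness is modeled as the gap between the current stimulus and an adapting baseline, with the baseline drifting toward whatever stimulus level is sustained. The adaptation rate and all the numbers are illustrative guesses, not measured physiological constants.

```python
# Toy model of hedonic adaptation: perceived happiness is the gap
# between the current stimulus and an adapting baseline, and the
# baseline drifts toward whatever stimulus level is sustained.
# adaptation_rate is an illustrative guess, not a measured constant.

def simulate(stimulus_levels, adaptation_rate=0.1):
    baseline = 0.0
    perceived = []
    for stimulus in stimulus_levels:
        perceived.append(stimulus - baseline)                 # felt happiness = stimulus minus baseline
        baseline += adaptation_rate * (stimulus - baseline)   # baseline drifts toward the stimulus
    return perceived

# A constant "bliss" input of 10 units for 50 steps, then withdrawal back to 0.
trace = simulate([10.0] * 50 + [0.0] * 20)

print(f"first hit:  {trace[0]:+.2f}")    # ~+10: strong initial high
print(f"after 50:   {trace[49]:+.2f}")   # ~+0.06: the bliss is the new normal
print(f"withdrawal: {trace[50]:+.2f}")   # ~-9.95: the old equilibrium now hurts
```

Even in this crude model, holding the stimulus constant drives the felt effect to zero, and removing it drops you as far below baseline as the first dose lifted you above it.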