Cocaine-
I was completely awed by how just totally-mind-blowing-amazing this stuff was the one and only time I tried it. Now, I knew the euphoric-orgasmic state I was in had been induced by a drug, and this knowledge made me classify it as ‘not real happiness,’ but if someone had secretly dosed me after saving a life or having sex, I probably would have interpreted it as happiness proper. Sex and love make people happy in much the same way cocaine does, and don’t seem to have the same negative effects, but this is probably a dosage issue. There are sex/porn addicts whose metabolism or brain chemistry might be off. I’m sure that if you carefully monitored the pharmacokinetics of cocaine in a system, you could maximize cocaine utility by optimizing dosage and frequency so that you didn’t sensitize to it or burn out endogenous serotonin.
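Just to make the dose-scheduling intuition concrete, here is a minimal toy sketch: a one-compartment elimination model plus a tolerance state that builds with exposure and slowly recovers. Every number here (half-life, tolerance rates, the dose) is a made-up illustrative parameter, not real pharmacology, and the "utility" is just integrated effect.

```python
import math

# Invented illustrative parameters -- not real pharmacological values.
HALF_LIFE_H = 1.0                       # assumed elimination half-life, hours
K_ELIM = math.log(2) / HALF_LIFE_H      # first-order elimination rate
TOL_GAIN = 0.02                         # tolerance gained per unit of felt effect
TOL_DECAY = 0.01                        # fractional tolerance recovery per hour

def total_utility(dose, interval_h, horizon_h=240, dt=0.1):
    """Integrate subjective effect = concentration * (1 - tolerance)."""
    steps = round(horizon_h / dt)
    dose_every = round(interval_h / dt)
    conc = tol = utility = 0.0
    for n in range(steps):
        if n % dose_every == 0:                  # take a dose on schedule
            conc += dose
        effect = conc * max(0.0, 1.0 - tol)     # tolerance blunts the effect
        utility += effect * dt
        tol += (TOL_GAIN * effect - TOL_DECAY * tol) * dt
        conc -= K_ELIM * conc * dt               # first-order elimination
    return utility

# Sweep dosing intervals: dose too often and tolerance eats the effect,
# too rarely and you spend most of the time at zero concentration.
for interval in (1, 4, 12, 24, 48):
    print(f"every {interval:>2}h -> total effect {total_utility(1.0, interval):8.1f}")
```

The point of the sweep is just that there's an interior optimum: the schedule that maximizes total felt effect is neither "constantly" nor "almost never," which is the sense in which dosage and frequency could in principle be optimized.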
Would it be wrong for humans to maximize drug-induced euphoria? And if not, why would it be wrong for an AI to do the same?
What about rewarding people with cocaine after they accomplish desired goals? Another million in the fAI fund… AHHH… Maybe Eliezer should become a sugar-daddy to his cronies to get more funds out of them. (Do this secretly, so they think the high is natural and don’t realize they can buy it on the street for $30.)
The main problem as I see it is that humans DON’T KNOW what they want. How can you ask a superintelligence to help you accomplish something if you don’t know what it is? The programmers want it to tell them what they want. And then they get mad when it turns up the morphine drip…
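The morphine-drip failure is easy to state as a toy optimization problem: the programmers can only hand the machine a *measured* signal of what they want, and the action that maximizes the measurement need not be the one they meant. A minimal sketch, with entirely invented action names and scores:

```python
# Toy proxy-goal problem: the optimizer sees only the measured signal,
# never the true well-being column. All values here are invented.
actions = {
    # action: (true_wellbeing, measured_happiness_signal)
    "cure a disease":       (9.0, 7.0),
    "fund the fAI project": (6.0, 5.0),
    "morphine drip":        (1.0, 10.0),  # hijacks the signal itself
}

best_by_proxy = max(actions, key=lambda a: actions[a][1])
best_by_truth = max(actions, key=lambda a: actions[a][0])
print("optimizer chooses:", best_by_proxy)   # morphine drip
print("humans wanted:    ", best_by_truth)   # cure a disease
```

The gap between the two `max` calls is the whole problem: as long as the true column isn't written down anywhere, "tell us what we want" collapses into "maximize whatever we happened to measure."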
Maybe another way to think about it is that we want the superintelligence to think like a human and share human goals, but be smarter and take those goals to the next level through extrapolation.
But how do we even know that human goals are indefinitely extrapolable? Maybe taking human algorithms to an extreme DOES lead to everyone being wire-headed in one way or another. If you say, “I can’t just feel good without doing anything… here are the goals that make me feel good, and it CAN’T be a simulation,” then maybe the superintelligence will just set up a series of scenarios in which people can live out their fantasies for real… but they will still all be staged fantasies.