“An altruist might ask the genie to maximize the amount of happiness in the universe or something like that, in which case the genie might create a huge number of wireheads. This seems to me like a bad outcome, and would likely be seen as a bad outcome by the altruist who made the request of the genie.”
Eh? An altruist would voluntarily summon disaster upon the world?
By the way, I have some questions about wireheading. What is it, really? Why is it so repulsive? Is it really so bad? If, when you imagine your brain rewired, you envision something that is too alien to be considered you, or too devoid of creative thought to be considered alive, it’s possible that an AI ordered to make you happy would choose some other course of action. It would be illogical to create something that is neither you nor happy.
“Eh? An altruist would voluntarily summon disaster upon the world?”
No; the point is that an altruist’s good outcomes are complex enough to be hard to distinguish from disasters by verbal rules. An altruist has to calculate for six billion evaluative agents; an egoist, for just one.
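To make that asymmetry concrete, here is a minimal, purely illustrative Python sketch (every number and value dimension is invented for the example, not drawn from anyone’s actual model): a verbal rule like “maximize total happiness” collapses each person’s values to a single score, so a wirehead world can outrank a flourishing one under the rule even though the people being optimized for would reject it.

```python
# Toy model (all values invented): a simple verbal rule can rank a
# wirehead world above a flourishing one once it aggregates over many
# agents, because it drops everything but the one number it measures.

AGENTS = 6_000_000_000  # the "six billion evaluative agents" above

# Per-capita scores for two candidate worlds, on an arbitrary 0-10 scale.
WORLDS = {
    "flourishing": {"happiness": 7, "autonomy": 8, "meaning": 8},
    "wirehead":    {"happiness": 10, "autonomy": 0, "meaning": 0},
}

def proxy_score(world: str) -> int:
    """The genie's verbal rule: total happiness, summed over everyone."""
    return WORLDS[world]["happiness"] * AGENTS

def endorsed_score(world: str) -> int:
    """What each agent actually cares about: happiness AND the rest."""
    values = WORLDS[world]
    return (values["happiness"] + values["autonomy"] + values["meaning"]) * AGENTS

for world in WORLDS:
    print(world, proxy_score(world), endorsed_score(world))

# The proxy prefers "wirehead" (10 > 7 per head); the endorsed scores
# prefer "flourishing" (23 > 10 per head). The rule and the outcome its
# author wanted come apart, and the gap only widens as the number of
# agents and value dimensions grows.
```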
“By the way, I have some questions about wireheading. What is it, really? Why is it so repulsive?”
Wireheading is, more or less, what happens when a sufficiently powerful agent told to optimize for happiness optimizes for the raw emotional referents without the intellectual and teleological human content typically associated with them.
You can perform primitive wireheading right now with various recreational drugs. The fact that almost everyone uses at least a few of the minor ones tells us that wireheading isn’t in and of itself absolutely repugnant to everyone. But the fact that only the desperate pursue the more serious forms available, and that the results (junkies) are widely seen as having entered a failure mode, is good evidence that it’s not a path we want sufficiently powerful agents to go down.
“…it’s possible that an AI ordered to make you happy would choose some other course of action.”
When unleashing forces one cannot un-unleash, one wants to deal in probability, not possibility. That’s more or less the whole Yudkowskian project in a nutshell.