If it were a total-utility-maximizing AI, it would clone the utility monster (or start cloning everyone else, if the utility monster is superlinear). Edit: on the other hand, if it were an average-utility-maximizing AI, it would kill everyone else, leaving just the utility monster. In any case there’d be some serious population ‘adjustment’.
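To put toy numbers on it (all figures here are invented for illustration, not from the comic): say the monster is worth a million utils and an ordinary person one.

```python
# Toy sketch with made-up numbers: how a total-utility maximizer and an
# average-utility maximizer each treat a population containing one utility
# monster.  MONSTER_U, PERSON_U and POPULATION are all assumptions.

MONSTER_U = 1_000_000        # utility of the utility monster
PERSON_U = 1                 # utility of an ordinary person
POPULATION = 7_000_000_000   # everyone else

def total_u(monsters, people):
    return monsters * MONSTER_U + people * PERSON_U

def average_u(monsters, people):
    return total_u(monsters, people) / (monsters + people)

baseline = (1, POPULATION)

# Total-utility maximizer: every extra monster clone adds ~1e6 utils,
# so cloning the monster is always an improvement.
print(total_u(1_000, POPULATION) > total_u(*baseline))   # True

# Average-utility maximizer: killing everyone but the monster lifts the
# average from just over 1 to 1,000,000.
print(average_u(1, 0) > average_u(*baseline))            # True
```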
Not if that made the utility monster unhappy.
It doesn’t have to tell the monster. (This, btw, is one wireheading-related issue. I do quite hate the lingo here, though; calling it ‘wireheading’ makes it sound like there isn’t a couple of thousand years of moral philosophy about the issue and related issues.)
I’m not aware of an alternative to “wireheading” with the same meaning.
Go classical: ‘lotus-eating’.
Good one.
http://en.wikipedia.org/wiki/Lotus-eaters
That’s the ancient Greeks writing about hypothetical wireheads. (‘Moral philosophy’ is perhaps a bad choice of words when searching for Greek material; ‘ethics’ is the Greek word.)
A bit of searching around that turned up nearly no references to ‘lotus eating’ or ‘lotus eater’ in moral philosophy.
Something much closer to “wireheading” would be hedonism, and more specifically Nozick’s Experience Machine, which is pretty much wireheading, but isn’t thousands of years old, and has been referenced here.
(And the term “wirehead” as used here probably comes from the Known Space stories, so probably predates Nozick’s 1974 book)
Well, for one thing, it ought to be obvious that Mohammed would have banned a wire into the pleasure centre, but lacking the wires, he just banned alcohol and other intoxicants. The concept of ‘wrong’ ways of seeking pleasure is very, very old.
I don’t think you looked very hard—I turned up a few books apparently on moral philosophy by searching in Google Books for ‘moral (“lotus eating” OR “lotus-eating” OR “lotus eater” OR “lotus-eater”)’.
And yes, I’m pretty sure the wirehead term comes from Niven’s Known Space. I’ve never seen any other origin discussed.
It would be awfully hard to hide.
Sure, it could lock the monster in an illusory world of optimal happiness, or just stimulate his pleasure centers directly, etc. But unless we assume that the AI is working under constraints that prevent that sort of thing, the comic doesn’t make much sense.
There’s no clear line between ‘hiding’ and ‘not showing’. You can keep just a million people or so around the monster and simply not show him the rest; it’s not as though the AI is turning every wall into a screen displaying the suffering that goes into building the pyramids. Or you can kill those people and present it in such a way that the monster derives pleasure from it. At any rate, anyone whose death would go unnoticed by the monster, or whose death would not sufficiently distress the monster, would die, if the AI is to focus on average pleasure.
Edit: I think those solutions come to mind really easily when you know what a Soviet factory would do to exceed the five-year plan.
The AI explicitly wasn’t focused on average pleasure, but on total pleasure, as measured by average pleasure times the population.
Yep. I was just posting about what an average-pleasure-maximizing AI would do; that isn’t part of the story.
You’re all wrong — if the happiness of the utility monster compounds as the comic says, then you get greater happiness out of lumping it all into one monster rather than cloning.
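For what it’s worth, here’s a quick sketch of that point, assuming purely for illustration that the monster’s happiness grows quadratically (‘compounds’) in the resources it gets; the utility function and the resource total are made up.

```python
# Why concentration beats cloning when the monster's happiness is
# superlinear in resources.  Quadratic utility is an assumption chosen
# only to make the "compounding" idea concrete.

def monster_happiness(resources):
    return resources ** 2          # assumed superlinear ("compounding") utility

TOTAL_RESOURCES = 100              # arbitrary units

one_monster = monster_happiness(TOTAL_RESOURCES)
two_clones = 2 * monster_happiness(TOTAL_RESOURCES / 2)

print(one_monster)                 # 10000
print(two_clones)                  # 5000.0
print(one_monster > two_clones)    # True: one monster with everything wins
```

With a sublinear function like a square root the comparison flips, which is where the cloning intuition comes from.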