If it were a total-utility-maximizing AI, it would clone the utility monster (or start cloning everyone else if the utility monster is superlinear). edit: on the other hand, if it were an average-utility-maximizing AI, it would kill everyone else, leaving just the utility monster. In either case there’d be some serious population ‘adjustment’.
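To make the contrast concrete, here is a minimal toy sketch (invented numbers, nobody's actual proposal) of how total- versus average-utility maximization treats a utility monster:

```python
def total_utility(population):
    return sum(population)

def average_utility(population):
    return sum(population) / len(population)

ordinary = [1.0] * 1000           # a thousand ordinary people, 1 util each
monster = 1_000_000.0             # one utility monster

everyone      = ordinary + [monster]
two_monsters  = ordinary + [monster, monster]   # "cloning" the monster
monster_alone = [monster]                       # everyone else 'adjusted' away

print(total_utility(everyone), total_utility(two_monsters))       # total goes up if you add monster clones
print(average_utility(everyone), average_utility(monster_alone))  # average goes up if you remove everyone else
```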
It doesn’t have to tell the monster. (this btw is one wireheading-related issue; I do quite hate the lingo here though; calling it wireheaded makes it sound like there isn’t a couple thousand years of moral philosophy about the issue and related issues)
this btw is one wireheading-related issue; I do quite hate the lingo here though; calling it wireheaded makes it sound like there isn’t a couple thousand years of moral philosophy about the issue and related issues
I’m not aware of an alternative to “wireheading” with the same meaning.
That’s the ancient Greeks writing about hypothetical wireheads. (‘moral philosophy’ is perhaps a bad choice of words for searching for Greek stuff; ethics is the Greek word)
A bit of searching around that showed nearly no references to lotus eating/lotus eaters in moral philosophy.
Something much closer to “wireheading” would be hedonism, and more specifically Nozick’s Experience Machine, which is pretty much wireheading, but isn’t thousands of years old, and has been referenced here.
(And the term “wirehead” as used here probably comes from the Known Space stories, so probably predates Nozick’s 1974 book)
Well, for one thing, it ought to be obvious that Mohammed would have banned a wire into the pleasure centre, but lacking the wires, he just banned alcohol and other intoxicants. The concept of ‘wrong’ ways of seeking pleasure is very, very old.
I don’t think you looked very hard—I turned up a few books apparently on moral philosophy by searching in Google Books for ‘moral (“lotus eating” OR “lotus-eating” OR “lotus eater” OR “lotus-eater”)’.
And yes, I’m pretty sure the wirehead term comes from Niven’s Known Space. I’ve never seen any other origin discussed.
Sure, it could lock the monster in an illusory world of optimal happiness, or just stimulate his pleasure centers directly, etc. But unless we assume that the AI is working under constraints that prevent that sort of thing, the comic doesn’t make much sense.
There’s no clear line between ‘hiding’ and ‘not showing’. You can leave just a million people or so, to be put around the monster, and simply not show him the rest. It is not like the AI is turning every wall into a screen displaying the suffering on the pyramid construction sites. Or you can kill those people and show it in such a way that the monster derives pleasure from it. At any rate, anyone whose death would go unnoticed by the monster, or whose death does not sufficiently distress the monster, would die, if the AI is to focus on average pleasure.
edit: I think those solutions come to mind really easily when you know what a Soviet factory would do to exceed the five-year plan.
At any rate, anyone whose death would go unnoticed by the monster, or whose death does not sufficiently distress the monster, would die, if the AI is to focus on average pleasure.
The AI explicitly wasn’t focused on average pleasure, but on total pleasure, as measured by average pleasure times the population.
You’re all wrong — if the happiness of the utility monster compounds as the comic says, then you get greater happiness out of lumping it all into one monster rather than cloning.
It is a good thing that you are thinking good things about Felix. This means he is happier if you aren’t in the cornfield, since you are a good person with no bad thoughts.
Felix means happy (or lucky), and is the origin of the word felicity. It took me a while to realize this, so I thought I would note it. Is it obvious for all native English speakers?
Not obvious to me. I did know the meaning of Felix, but it’s deep enough in the unused drawers of my memory that I might never have made the connection without someone pointing it out.
Everyone’s talking about this as if it were a hypothetical, but as far as I can tell it describes pretty accurately how hierarchical human civilizations tend to organize themselves once they hit a certain size. Isn’t a divine ruler precisely someone who is more deserving and more able to absorb resources? Aren’t the lower orders people who would not appreciate luxuries and indeed have fully internalized such a fact (“Not for the likes of me”)?
If you skip the equality requirement, it seems history is full of utilitarian societies.
I am very glad that the people who advocate the Felix morality with the “dust speck” sophism have virtually no chance of really accomplishing something in the AI field.
Downvoted: You should let someone actually advocate the Felix argument, before bashing them for supposedly advocating it.
So far, generalizing from my own example, there’s at least one person who agrees with the dust speck argument but opposes the Felix argument. I know of no person yet who agrees with the Felix argument. So, I find it obnoxious that you effectively pretend I advocate the Felix argument when I don’t.
You may think I’m inconsistent in supporting the one but not the other, but don’t pretend I support both, okay?
So far, generalizing from my own example, there’s at least one person who agrees with the dust speck argument but opposes the Felix argument.
Make that two.
Thomas, your treatment of this is a reductio ad absurdum of what I feel at least 33% of LW believes. Worse, when we’re (and by we, I mean everyone else, since I’m not going to bother getting involved in this further) calling you on it and actually trying to have a dialogue, you’re dismissing us and insulting us.
To be fair, one could argue for dust specks without Felix morality by weighing increases in an individual’s happiness with diminishing returns, such that they asymptotically approach some limit. (But then you would sacrifice one individual’s arbitrarily unimaginable happiness just to bring someone else an arbitrarily small sliver towards the baseline.)
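A minimal sketch of that “diminishing returns towards an asymptote” idea (the saturating function and the numbers are arbitrary choices, purely for illustration):

```python
import math

def bounded_happiness(raw):
    # Map unbounded "raw" happiness onto a scale that asymptotically approaches 1.
    return 1 - math.exp(-raw)

# Tiny slivers towards the baseline, summed over very many people...
many_small_slivers = 3_000_000 * (bounded_happiness(0.001) - bounded_happiness(0.0))

# ...versus one person's arbitrarily huge raw happiness, which caps out near 1.
one_unimaginable = bounded_happiness(1e9) - bounded_happiness(0.0)

print(many_small_slivers, one_unimaginable)   # the many slivers dominate the capped individual
```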
Since it’s meaningless to call dust specks “right”, just consider it true if you want to. I don’t want to so I don’t.
The problem is the line of reasoning where “50 years of torture” is better than 3^^^3 years with a dust speck in the eye every so often.
That’s not even the dilemma you linked to. The dilemma you linked to is: “one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes”.
What, then, is the torture of all of humanity against the super-happy Felix with 3^^^3 pyramids? Nothing. By the same line of reasoning.
It’s probably bad practice to say two lines of reasoning are the same line of reasoning, if you don’t believe in either of them.
For starters, I don’t need to have a positive factor for Felix’s further happiness in my utility function. That alone is a significant difference.
Look. You have one person under terrible torture for 50 years on one side, and a gazillion people with a slight discomfort every year or so on the other side.
It is claimed that the first is better.
Now, you have a small humanity as is, only enslaved for pyramid building for Felix. He has eons of subjective time to enjoy these pyramids and he is unbelievably happy. Happier than any man, woman or child could ever be. The happiness of Felix outweighs the misery of billions of people by a factor of a million.
What’s the fundamental difference between those two cases? I don’t see it, do you?
The only similarity between those cases is that they involve utility calculations you disagree with. Otherwise every single detail is completely different. (e. g. the sort of utility considered, two negative utilities being traded against each other vs. trading utility elsewhere (positive and negative) for positive utility, which side of the trade the single person with the large individual utility difference is on, the presence of perverse incentives, etc, etc).
If anything it would be more logical to equate Felix with the tortured person and treat this as a reductio ad absurdum of your position on the dust speck problem. (But that would be wrong too, since the numbers aren’t actually the problem with Felix, the fact that there’s an incentive to manipulate your own utility function that way is (among other things).)
You aren’t seeing the forest for the trees… the thing that is identical is that you are trading utilities across people, which is fundamentally problematic and leads to either a tortured child or a utility monster, or both.
Omelas is a goddamned paradise. Omelas without the tortured child would be better, yeah, but Omelas as described is still better than any human civilization that has ever existed. (For one thing, it only contains one miserable child.)
Well, it seems to me they are trading N dust specks vs torture in Omelas. edit: Actually, I don’t like Omelas [as an example]. I think that miserable child would only make the society way worse, with the people just opting to e.g. kill someone whenever it ever so slightly increases their personal expected utility. The child in Omelas puts them straight on the slippery slope, and making everyone aware of the slippage makes people slide down for fun and profit.
Our ‘civilization’ though, of course, is a goddamn jungle, and so it’s pretty damn bad. It’s pretty hard to beat on the moral wrongness scale from first principles; you have to take our current status quo and modify it to get to something worse (or take our earlier status quo).
Your edit demonstrates that you really don’t get consequentialism at all. Why would making a good tradeoff (one miserable child in exchange for paradise for everyone else) lead to making a terrible one (a tiny bit of happiness for one person in exchange for death for someone else)?
People are individual survival machines, that’s why. Each bastard in Omelas knows at the gut level (not in some abstract way) that there’s a child being miserable specifically for a tiny bit of his happiness. His, personally. He will then kill for a larger bit of his happiness. He isn’t society. He’s an individual. It is all between him and that child. At the very best, between him & his family, and that child. The society ain’t part of the equation. (And if it is, communism should have worked perfectly in that universe.) [assuming that the individual believes he won’t be caught]
edit: also I think you don’t understand the story. They didn’t take the child apart for much-needed organs to save other folks in Omelas. The child is miserable for the purpose of bringing a sense of unity into the commune, for the purpose of making them value their happiness. That is already very irrational, and not only that, but also entirely contrary to how Homo sapiens behave when exposed to gross injustice.
edit: To explain my use of language. We are not talking about rational agents and what they ought to decide. We are talking of irrational agents that are supposedly (premise of the story) made better behaved by participation in a pointless and evil ritual, which is the opposite of the known effect of direct participation in that sort of ritual on a populace. That’s why the story makes a poor case against utilitarianism. Because the consequence is grossly invalid.
Whatever. The reason why I don’t like that story too much is that I do not believe that, given the way Homo sapiens are, showing them that child in Omelas would have the consequence stated in the story, even if they are instructed that this is the consequence. It’s too much of a stretch. The effect of such a thing on H. sapiens, I would forecast, would be entirely the opposite. Omelas is doing something more similar to how you break in soldiers for an effective Holocaust death squad, the soldiers that later kill others or themselves outside of orders. You make the soldiers participate all together in something like that. That’s why I don’t like this as an example. I’m arguing against my own point of bringing it up as an example. Because the reason we don’t like Omelas is that keeping a child like this won’t have a positive consequence. (and for it to have the stated positive consequence, the people already have to have a grossly irrational reaction to exposure to that child)
the thing that is identical is that you are trading utilities across people,
This is either wrong (the utility functions of the people involved aren’t queried in the dust speck problem) or so generic as to be encompassed in the concept of “utility calculation”.
Aggregating utility functions across different people is an unsolved problem, but not necessarily an unsolvable one. One way of avoiding utility monsters would be to normalize utility functions. The obvious way to do that leads to problems such as arachnophobes getting less cake even if they like cake equally much, but IMO that’s better than utility monsters.
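For concreteness, a minimal sketch of that normalization idea and why it can short-change the arachnophobe (all the utility numbers here are invented):

```python
def normalize(utilities):
    # Rescale one person's utilities over the outcomes to the [0, 1] range.
    lo, hi = min(utilities.values()), max(utilities.values())
    return {k: (v - lo) / (hi - lo) for k, v in utilities.items()}

# Hypothetical raw utilities over three outcomes.
alice = {"cake": 10, "no_cake": 0, "spider": -5}
bob   = {"cake": 10, "no_cake": 0, "spider": -1000}   # likes cake just as much, but arachnophobic

alice_n, bob_n = normalize(alice), normalize(bob)

# Normalized gain from giving each of them the cake:
print(alice_n["cake"] - alice_n["no_cake"])   # ~0.667
print(bob_n["cake"] - bob_n["no_cake"])       # ~0.010
# An aggregator summing normalized utilities gives the scarce cake to Alice,
# since Bob's range is dominated by spider-fear; but no one can become a utility monster.
```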
This is either wrong (the utility functions of the people involved aren’t queried in the dust speck problem) or so generic as to be encompassed in the concept of “utility calculation”.
The utilities of many people are a vector; you have to map it to a scalar value, which loses a lot of information in the process, and it seems to me that however you do it, it leads to some sort of objectionable outcome. edit: I have a feeling one could define it reasonably with some sort of Kolmogorov-complexity-like metric that would grow incredibly slowly for the dust specks and would never equal whatever hideously clever thing our brain does to most of its neurons when we suffer; the suffering beating the dust specks on complexity (you’d have to write down the largest number you can write down in as many bits as the bits being tortured in the brain; then that number of dust specks starts getting to the torture level). We need to understand how pain works before we can start comparing pain vs dust specks.
Really? Every use of utilities I have seen either uses a real world measure (such as money) with a notation that it isn’t really utilities or they go directly for the unfalsifiable handwaving. So far I haven’t seen anything to suggest “aggregating utility functions” is even theoretically possible. For that matter most of what I have read suggests that even an individual’s “utility function” is usually unmanageably fuzzy, or even unfalsifiable, itself.
Felix is essentially a Utility Monster: a thought experiment that’s been addressed here before. As that family of examples shows, happiness-maximization breaks down rather spectacularly when you start considering self- or other-modification or any seriously unusual agents. You can bite that bullet, if you want, but not many people here do; fortunately, there are a few other ways you can tackle this if you’re interested in a formalization of humanlike ethics. The “Value Stability and Aggregation” post linked above touches on the problem, for example, as does Eliezer’s Fun Theory sequence.
You don’t need any self-modifying or non-humanlike agents to run into problems related to “Torture vs. Dust Specks”, though; all you need is to be maximizing over the welfare of a lot of ordinary agents. 3^^^3 is an absurdly huge number and leads you to a correspondingly counterintuitive conclusion (one which, incidentally, I’d estimate has led to more angry debate than anything else on this site), but lesser versions of the same tradeoff are quite realistic; unless you start invoking sacred vs. profane values or otherwise define the problem away, it differs only in scale from the same utilitarian calculations you make when, say, assigning chores.
In one case (torture to avoid the specks), the larger portion of people is better off if you pick the single person. In the other case (build pyramids to please Felix), the larger portion of people is worse off if you pick the single person.
So if my position were “the majority should win”, it would be right to torture the person and it would be right to depose Felix.
I’m not sure if it’s a fundamental difference or a good difference, but I think that means I can lay out the following 4 distinct answer pairs:
Depose Felix, Torture Man: Majority wins.
Adore Felix, Speck people: Minority wins.
Adore Felix, Torture Man: Mean Happiness wins.
Depose Felix, Speck People: Minimum happiness wins. (Assuming Felix is either happier about being deposed than an average person with a dust speck in their eye, or is dead and no longer counted for minimum happiness.)
So I think I can see all 4 distinct positions, if I’m not missing something.
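A toy sketch of those four positions as decision rules (all utility numbers invented for illustration; “minority wins” is just the reverse of the majority rule):

```python
N = 10_000   # bystanders: dust-speck victims in one scenario, pyramid slaves in the other

# Each option assigns a utility to every person; index 0 is the "special" person.
torture_vs_specks = (
    [-1000] + [0] * N,        # option 0: torture the one person
    [0] + [-1] * N,           # option 1: dust-speck everyone else
)
felix = (
    [0] + [0] * N,            # option 0: depose Felix (his utility set to 0, per the assumption above)
    [10**7] + [-100] * N,     # option 1: adore Felix, enslave everyone for pyramids
)

def majority(a, b):           # whichever option more individuals personally prefer
    prefer_a = sum(ua > ub for ua, ub in zip(a, b))
    prefer_b = sum(ub > ua for ua, ub in zip(a, b))
    return 0 if prefer_a >= prefer_b else 1

def mean(a, b):               # highest average happiness
    return 0 if sum(a) / len(a) >= sum(b) / len(b) else 1

def minimum(a, b):            # highest minimum happiness
    return 0 if min(a) >= min(b) else 1

for name, rule in [("majority", majority), ("mean", mean), ("minimum", minimum)]:
    print(name, rule(*torture_vs_specks), rule(*felix))
# majority -> torture the man, depose Felix
# mean     -> torture the man, adore Felix
# minimum  -> speck the people, depose Felix
# minority (reverse of majority) -> speck the people, adore Felix
```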
In one case (torture to avoid the specks), the larger portion of people is better off if you pick the single person. In the other case (build pyramids to please Felix), the larger portion of people is worse off if you pick the single person.
Imagine that there is one person tortured for 50 years and then free of any dust specks for the next 3^^^3 years.
Then we don’t have “the larger portion of people” anymore. Is anything different in such a case?
Imagine that there is one person tortured for 50 years and then free of any dust specks for the next 3^^^3 years.
If I understand the dilemma, in your most recent phrasing it’s this: a person who lives 3^^^3 years either: a) has to suffer a dust speck per year, or b) has to suffer 50 years of torture at some point in that time, then, I assume, gets the memory of that torture deleted from his mind and his mind’s state restored to what it was before the torture (so that he doesn’t suffer further disutility from that memory or the broken mind-state, he only has to suffer the torture itself). He lives the remaining 3^^^3 years dust-speck-free.
If we don’t know what his own preferences are, and have no way of asking him, what should we choose on his behalf?
Can we have one dilemma at a time, please, Thomas? You said something about 3^^^3 years—therefore you’re not talking about the dilemma as stated in the original sequence, as that dilemma doesn’t say anything about 3^^^3 years.
Which preferences are in question now?
The preferences relating to the original dilemma are the preferences of the person who presumably prefers not to get tortured, vs the preferences of the 3^^^3 people who presumably prefer not to get a dust speck in the eye.
Well, first of all, I’m assuming that you’re doing that to both groupings (since otherwise I could say “Well, one has only one person and one has a massive number of people, which is a difference.” but that seems like a trivial point)
So if you apply it to both, then it’s just one person considering tradeoff A (pay torture to go speck-free for eons),
And another person considering tradeoff B (personally build pyramids for eons to get to live in your own collection of pyramids for some years).
I could say that in one case the pain is relatively dense (torture, condensed to 50 years) and the pleasure is relatively sparse (speck-free, over 3^^^3 years), and that in the other case the pain is relatively sparse (slave labor, spread out over a long time) and the pleasure is relatively dense (incomprehensible pyramidgasm).
I’m not sure if that matters or in what ways that difference matters. I’m really not up to date on how your brain handles that specifically and would probably need to look it up further.
personally build pyramids for eons to get to live in your own collection of pyramids for some years
No. The pyramids are built by humans, and then enjoyed much, much longer as they stand there. Enjoyed by Felix.
Maybe the amount of our pleasure from the Giza pyramids has already exceeded the pain invested in building them. I don’t know.
Can all the pains of a slave be justified by all the pleasures of the tourists visiting the hole in the rock he was forced to carve for 50 years?
Or is a large group of sick sadists entitled to slowly torture someone, since the sum of their pleasure will be greater than the pain of the unlucky one?
Maybe the amount of our pleasure from the Giza pyramids has already exceeded the pain invested in building them. I don’t know. Can all the pains of a slave be justified by all the pleasures of the tourists visiting the hole in the rock he was forced to carve for 50 years?
Was it that much pain? I read in National Geographic, IIRC, that the modern archaeological conception was that the pyramids were mostly or entirely built by paid labor—Nile farmers killing time during the dry season. This may even be a good thing, depending on whether it diverted imperial tax revenue from foreign adventurism into monument/tomb-building.
Well, it’s still a fun Fermi calculation problem, anyway.
Let’s see, the Pyramids have been the targets of tourism since at least the original catalogue of wonders of the ancient world, Antipater of Sidon ~140 BC which includes “the great man-made mountains of the lofty pyramids”. So that’s ~2150 years of tourism (2012+140). Quickly checking, Wikipedia says 12.8 million people visited Egypt for tourism in 2008, but surely not all of them visited the pyramids? Let’s halve it to 6 million.
Let’s pretend Egyptian tourism followed a linear growth between 140 BC with one visitor (Antipater) and 6 million in 2012 (yes, world population & wealth has grown and so you’d expect tourism to grow a lot, but Egypt has been pretty chaotic recently), over 2150 years. We can just average that to 3 million a year, which gives us a silly total number of tourists of 2150 * 3 million or 6.45 billion visitors.
There are 138 pyramids, WP says, with the Great Pyramid estimated at 100,000 workers. Let’s halve it (again with the assumptions!) at 50k workers a pyramid, 50,000 * 138 = 6.9m workers total.
This gives us the visitor:worker ratio of 6.45b:6.9m, or 21,500:23, or 934.8:1.
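The same back-of-the-envelope, spelled out in a few lines (same guesses as above, nothing more rigorous):

```python
years_of_tourism    = 2150               # ~140 BC (Antipater of Sidon) to 2012
visitors_per_year   = 6_000_000 / 2      # linear average between ~0/yr then and ~6M/yr now -> 3M/yr
total_visitors      = years_of_tourism * visitors_per_year      # ~6.45 billion

pyramids            = 138
workers_per_pyramid = 100_000 / 2        # halve the Great Pyramid estimate -> 50k
total_workers       = pyramids * workers_per_pyramid            # ~6.9 million

print(total_visitors / total_workers)    # ~935 visitors per worker, the ~934.8:1 above
```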
And of course the pyramids are still there, so whatever the real ratio, it’s getting better (modulo issues of maintenance and restoration).
You’d need a heck of a lot more tourism than for Egypt… although apparently there’s quite a range of estimates of deaths, from less than 20,000 a year to more than 200,000 a year. Given the substantially less tourism to the Aztec pyramids (inasmuch as apparently only 2 small unimpressive Aztec pyramids survive, with all the impressive ones like Tenochtitlan destroyed), it’s safe to say that the utilitarian calculus will never work out for them.
It seems to me that any historical event that was both painful to the participants, and interesting to read and learn about after the fact, creates the same dilemma that’s been discussed here. Will World War Two have been a net good if 10,000 years from now trillions of people have gotten incredible enjoyment from watching movies, reading books, and playing videogames that involve WWII as a setting in some way?
The first solution to this dilemma that comes to mind is that ready substitutes exist for most of the entertainments associated with these unpleasant events. If the Aztecs had built their pyramids and then never sacrificed anyone on them it probably wouldn’t hurt the modern tourist trade that much. And if WWII had never happened and thus caused the Call of Duty videogame franchise to never exist, it wouldn’t have a big impact on utility because some cognates of the Doom, Unreal, and similar franchises would still exist (those franchises are based on fictional events, so no one got hurt inspiring them).
In fact, if I was to imagine an alternate human history where no war, slavery, or similar conflict had ever happened, and the inhabitants got all their enjoyment from entertainment media based on fictional conflicts, I think such a world would have a much higher net utility than our own.
Sure—but can you offhand fit an exponential curve and calculate its summation? I’m sure it’s doable with the specified endpoints and # of periods (just steal a simple interest formula), but it’s more work than halving and multiplying.
Well… integral from t0 to t1 of exp(a*t+b) dt = (exp(a*t1+b) - exp(a*t0+b))/a i.e. the difference between the endpoints times the time needed to increase by a factor of e… a 6-million-fold increase is about 22.5 doublings (knowing 2^20 = 1 million), hence about 15 factors of e (knowing that ln 2 = 0.7) i.e. about one in 150… hence the total number of tourists is about 1 billion (about six times less than Rhwawn’s estimate—my eyeballs had told me it would be about one third… close enough!)
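A quick numeric check of that estimate under the same assumptions (exponential growth from 1 visitor per year in 140 BC to 6 million per year in 2012):

```python
import math

T = 2150                        # years of exponential growth
a = math.log(6_000_000) / T     # per-year rate; ln(6e6) is ~15.6, so a is ~1/138

# Integral of the visitor rate r(t) = exp(a*t) from t = 0 to T is (exp(a*T) - 1) / a.
total_visitors = (6_000_000 - 1) / a

print(round(a, 5), f"{total_visitors:.2e}")   # ~0.00726/yr and ~8.3e8, i.e. about a billion
```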
Being very very outraged isn’t really an argument.
Give us your own (non-utilitarian, I assume) decision theory that you consider to encapsulate all that is good and moral, if you please.
If you can’t, please stop being outraged at those of us who try to solve the problem, even if you feel we’ve taken wrong turns on the path towards the solution.
I don’t know, 3^^^3 is a pretty long time to fix brain trauma. Or are you offering complete restoration after the torture? In that case, I might just take it.
I am not offering anything at all. I strongly advise you NOT to substitute the slight discomfort over a long time period with a horrible torture for a shorter period.
What’s the fundamental difference between those two cases? I don’t see it, do you?
One fundamental difference is that I don’t care about Felix’s further happiness. After some point, I may even resent it, which would make his additional happiness of negative utility to me.
Another difference is that happiness may be best represented as a percentage with an upper bound of e.g. 100% happy, rather than be an integer you can keep adding to without end.
I think Felix’s case may be an interesting additional scenario to consider, in order to be sure that AIs don’t fall victim to it (e.g. by creating a superintelligence and making it super-happy, at the expense of normal human happiness). But it’s not the same scenario as the specks.
The FAI should make a drug which will make you happy for Felix. edit: to clarify. The two choices here are not happy naturally vs happy via wireheading. The two choices are intense AI-induced ‘natural’ unhappiness vs drug-induced happiness. It’s similar to having your hand amputated, with or without ‘wireheading’, err, painkillers. I think it is pretty clear that if you have someone’s hand amputated, it is better if they can’t feel or see it. Be careful with non-wireheading FAIs, lest all surgery be without anaesthesia (perhaps with only the muscle relaxant).
Well, in some sense, achieving happiness by anything other than reproduction is already wireheading. It doesn’t need to be with a wire; what if I make a video which evokes an intense feeling of pleasure? How far can you go before it is a mind hack?
edit: actually, I think the AI could raise people to be very empathetic towards Felix, and very happy for him. Is it not good to raise your kids so that they can be happy in the world the way it is (when they can’t change anything anyway)?
“achieving happiness by anything other than [subgoals of] reproduction” is wireheading from the perspective of my genes, and if they want to object I’m not stopping them. Happiness via drugs is wireheading from the perspective of me, and I object myself.
What if there’s a double rainbow? What if you have a lower than ‘normal’ level of some neurotransmitter and under-appreciate the double rainbow without drugs? What if it’s higher than ‘normal’?
I’m not advocating drugs, by the way, just pointing out the difficulty in making any binary distinction here. The natural happy should be preferred to wire-headed happy, but society does think that some people should take anti-depressants. If you are to labour in the name of the utility monster anyway, you might as well be happy. You object to happiness via drugs as a substitute for happiness without drugs, but if the happiness without drugs is not going to happen—then what?
Well, in some sense, achieving happiness by anything other than reproduction is already wireheading.
No. This reduces the words to the point of meaninglessness. Human beings have values other than reproduction, values that make them happy when satisfied—art, pride, personal achievement, understanding, etc. Wireheading is about being made happy directly, regardless of the satisfaction of the various values.
The scenario previously discussed about Felix is that he was happy and everyone else suffered.
Now you’re posing a scenario where everyone is happy, but they’re made happy by having their values rewritten to place extreme value on Felix’s happiness instead.
At this point, I hope we’re not pretending it’s the same scenario with only minor modifications, right? Your scenario is about the AI rewriting our values, it’s not about trading our collective suffering for Felix’s happiness.
Your scenario can effectively remove the person of Felix from the situation altogether, and the AI could just make us all very happy that the laws of physics keep on working.
You say art… what if I am a musician and I am making a song? That’s good, right? What if I get 100 experimental subjects to sit in an MRI as they listen to test music, and, using my intelligence and some software tools, make a very pleasurable song? What if I know that it works by activating such and such connections here and there which end up activating the reward system? What if I don’t use an MRI, but use internal data available in my own brain, to achieve the same result?
I know that this is arriving at meaninglessness, I just don’t see it as reducing the words anywhere; the words already only seem meaningful in the context of a limited depth of inference, but it all falls apart if you are to make more steps (like an axiomatic system that leads to self-contradiction). Making people happy [as a terminal goal], this way or that, just leads to some form of really objectionable behaviour if done by something more intelligent than a human.
You say art… what if I am a musician and I am making a song? That’s good, right? What if I get 100 experimental subjects to sit in an MRI as they listen to test music, and, using my intelligence and some software tools, make a very pleasurable song? What if I know that it works by activating such and such connections here and there which end up activating the reward system? What if I don’t use an MRI, but use internal data available in my own brain, to achieve the same result?
Be specific about what you are asking, please. What does the “what if” mean here? Whether these thing should be considered good? Whether such things should be considered “wireheading”? Whether we want an AI to do such things? What?
Making people happy, this way or that, just leads to some form of really objectionable behaviour if done by something more intelligent than a human.
This claim doesn’t seem to make much sense to me. I’ve already been made non-objectionably happy by people more intelligent than me from time to time. My parents, when I was a child. Good writers and funny entertainers, as an adult. How does it become automatically “really objectionable” if it’s “something more intelligent than human” as opposed to “something more intelligent than you, personally”?
Be specific about what you are asking, please. What does the “what if” mean here? Whether these thing should be considered good? Whether such things should be considered “wireheading”? Whether we want an AI to do such things? What?
I’m trying to make you think a little deeper about your distinction between wireheading and non-wireheading. The point is that your choice of the dividing line is entirely arbitrary (and most people don’t agree on where to put the dividing line). I don’t know where you put the dividing line, and frankly I don’t care; I just want you to realize that you’re drawing an arbitrary line on the beach: to the left of it is the land, to the right is the ocean. edit: That’s how maps work, not how territory works, btw.
This claim doesn’t seem to make much sense to me. I’ve already been made non-objectionably happy by people more intelligent than me from time to time. My parents, when I was a child. Good writers and funny entertainers, as an adult. How does it become automatically “really objectionable” if it’s “something more intelligent than human” as opposed to “something more intelligent than you, personally”?
I’d say they had a goal to achieve something other than happiness, and the happiness was incidental.
I’m trying to make you think a little deeper about your distinction between wireheading and non-wireheading.
Don’t assume you know how deeply I think about it. The only thing I’ve effectively communicated to you so far is that I consider it ludicrous to say that “achieving happiness by anything other than reproduction is already wireheading”.
We can agree Yes/No, that this discussion doesn’t have much of anything to do with the Felix scenario, right? Please answer this question.
The point is that your choice of the dividing line is entirely arbitrary (and most people don’t agree on where to put the dividing line).
Perhaps people don’t have to agree, and the people whose coherent extrapolated volition allows a situation “W” to be done to them, should so have it done to them, regardless of whether you label W to be ‘wireheading’ or ‘wellbeing’.
Or perhaps not. After all, it’s not as if I ever declared Friendliness to be a solved problem, so I don’t know why you keep talking to me as if I claimed it’s easy to arrive at a conclusion.
“Whether such things should be considered “wireheading”?” is what i want you to consider, yes.
I don’t have a binary classifier, absolute wireheading vs non-wireheading. I have a wireheadedness quantity. Connecting a wire straight into your pleasure centre will have a wireheadedness of (very close to) 1, reproduction (maximization of the expected number of each gene) will have a wireheadedness of 0, taking heroin will be close to 1, taking LSD will be lower, and the wireheadedness of art varies depending on how much of your brain is involved in making pleasure out of the art (how involved the art is), and perhaps on how much of a hack the art is, though ultimately all art is to a greater or lesser extent a hack. edit: and I am actually earning my living sort of making art (I make CGI software, but also do CGI myself).
I don’t consider low wireheadedness to be necessarily good. Those are the Christian moral connotations, which I do not share, as an atheist who grew up in a non-religious family.
Happiness, as a state of mind in humans, seems less to me about how strong the “orgasms” are than how frequently they occur without lessening the probability they will continue to occur. So what problems might there be with maximizing total future happy seconds experienced in humans, including emulations thereof (other than describing with sufficient accuracy the concepts of ‘human’ and ‘happiness’ to a computer)?
I think doing so would extrapolate to increasing population and longevity to within resource constraints and diminishing returns on improving average happiness uptime and existential risk mitigation, which seem to me to be the crux of people’s intuitions about the Felix and Wireheading problems.
What, then, is the torture of all of humanity against the super-happy Felix with 3^^^3 pyramids? Nothing. By the same line of reasoning.
It’s hedonistic total-utilitarianism vs preference based consequentialism. That’s a big difference. Not only would the ‘sequence’ you reject not advocate preferring to torture humanity for the sake of making Felix superhappy, even in the absence of negative externalities it would still consider that sort of ‘happiness’ production a bad thing even for Felix.
Seriously though, either Moral Universalism (and absolutism) is correct, in which case we could make an AI that would by itself develop a very agreeable universal moral code, similar to how you can do it for mathematics or the laws of physics (instead of us trying to implement our customs in the AI), or it is incorrect, there is no path to an absolute moral code, and any FAI is going to be a straitjacket on humanity, at best implementing (some of) our customs and locking those in, and at worst implementing and enforcing something else, like in that comic.
Saying no FAI exists in design space that could satisfy us is equivalent to saying nothing can satisfy us. In other words, if you are correct then the AI isn’t the problem and humanity would be “straitjacketed” anyway.
Saying we could never build an AI that would satisfy us because of the technical difficulty is plausible, but I don’t think that’s what you are saying.
Saying no FAI exists in design space that could satisfy us is equivalent to saying nothing can satisfy us. In other words, if you are correct then the AI isn’t the problem and humanity would be “straitjacketed” anyway.
I don’t see how not being fully satisfied is a straitjacket. I’m saying that our (mankind’s) maximum satisfaction may be when straitjacketed, because mankind isn’t sane (and if there isn’t any truly sane morality system… edit: to clarify, if there is a truly sane morality system, then mankind can be cured of its insanity).
I was using the term “satisfied” to include all human preferences, including the desire to not be “straitjacketed”.
If human preferences are inconsistent, then humans still can’t do any better than an AI, for there is an AI in design space that does nothing in our world but would make similar worlds look exactly like ours.
You assume that the utility of two different worlds cannot be exactly equal. edit: or maybe you don’t. In any case, this AI which does absolutely nothing in our world is no more useful than an AI that does nothing in all possible worlds, or just a brick.
Also, the desire for mankind (and life) not to be straitjacketed is my view; I’m not sure it is coherently shared by mankind, and in fact I’m not even sure I like the way it is going if it is not straitjacketed in some way. edit: to clarify. I like the heuristic of maximizing future choices for me. It is a part of my values that I don’t want removed. I don’t like [the consequences of] this heuristic for mankind. Mankind is a meta-organism that is dumb and potentially self-destructive.
edit: To clarify. What I am saying is that there’s a conflict between two values whose product matters: survival vs freedom. Survival without freedom is bad. Freedom without survival is nonsense.
this AI which does absolutely nothing in our world is no more useful than an AI that does nothing in all possible worlds, or just a brick.
Sorry, I wasn’t being clear. The point was: saying that no AI can do better than humanity implies that our world is optimal out of all similar worlds. (I believe there are much stronger arguments than this against what you are saying, but this one should suffice.)
It only implies so if your AI is totally omniscient.
edit: Anyhow, I can of course think of an AI that can do better than humanity: the AI sits inside Jupiter and nudges away any incoming comets and asteroids, and that’s it (then, as the sun burns up and then burns out, it moves Earth around). The problem starts when you make the AI discriminate between very similar worlds. edit: and even that asteroid-stopping AI may be a straitjacket on intelligent life, as it may be that mankind is a wrong thing entirely, and should be permitted to kill itself, and then the meteorite impacts should be allowed so that ants get a chance.
as it may be that mankind is a wrong thing entirely, and should be permitted to kill itself, and then the meteorite impacts should be allowed so that ants get a chance.
I don’t know much about my own extrapolated preferences but I can reason that as my preferences are the product of noise in the evolutionary process, reality is unlikely to align with them naturally. It’s possible that my preferences consider “mankind a wrong thing entirely”; but that they would align with whatever the universe happens to produce next on earth (assuming the rise of another dominant species is even plausible) is incredibly unlikely. Anything that happens without a causal line of descent from human values is unlikely to align with human values.
Anything that happens without a causal line of descent from human values is unlikely to align with human values.
Unlikely to align how, exactly? There are also common causes, you know; A and B can be correlated when A causes B, when B causes A, or when C causes both A and B.
It seems to me that you can require an arbitrary degree of alignment to arrive at an arbitrary unlikelihood, but some alignment via a common cause is nonetheless probable.
There’s such a thing as over-fitting… if you have some noisy data, the theory that fits the data ideally is just the table of the data (e.g. heights and falling times); the useful theory doesn’t fit the data exactly in practice. If we make the AI fit perfectly to what mankind does, we could just as well make a brick and proclaim it an omnipotent, omniscient, mankind-friendly AI that will never stop mankind from doing something that mankind wants (including taking the extinction risks).
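A minimal sketch of that over-fitting analogy, with made-up heights and fall times:

```python
import numpy as np

rng = np.random.default_rng(0)
heights = np.linspace(1.0, 20.0, 10)                                      # metres
times = np.sqrt(2 * heights / 9.81) + rng.normal(0, 0.05, heights.size)   # noisy measurements

simple_theory = np.polyfit(heights, times, 2)     # low-degree fit: does not match the data exactly
table_of_data = dict(zip(heights, times))         # "fits" the observed data perfectly

new_height = 12.345
print(np.polyval(simple_theory, new_height))      # the simple theory still gives an answer here
print(table_of_data.get(new_height))              # the table gives None: it only memorized the data
```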
Moral Universalism could be true in some sense, but not automatically compelling, and the AI would need to be programmed to find and/or follow it.
My original post had this possibility, where you make an AI that develops much of the morality (which it would really have to). edit: note that the AI in question may be just a theorem prover which tries to find some universal moral axioms, but is not itself moral or compelled to implement anything in the real world.
There could be a uniquely specified human morality that fulfills much of the same purpose Moral Universalism does for humans.
What about in 10 million years? 100 million? A straitjacket for intelligent life.
It might be possible to specify what we want in a more dynamic way than freezing in current customs.
We would still want some limits from our values right now, e.g. so that society wouldn’t steer itself to suicide somehow. Even rules like ‘it is good if 99% of people agree with it’ can steer us into some really nasty futures over time. Another issue is the possibility of de-evolution of human intelligence. We would not want to lock in all the customs, but some of the values of today would get frozen in.
edit: and it’s not even a dichotomy. There are the hypothetical AIs which implement some moral absolute that is good for all cultures, all possible cultures, and everyone; which we would invent, aliens would invent, whatever we evolve into could invent, etc. If those do not exist, then what exists that isn’t to some extent culturally specific to H. sapiens circa today?
The Unobtrusive Guardian. An FAI that concludes that humanity’s aversion to being ‘straitjacketed’ is such that it is never ok for it to interfere with what humans do themselves. It proceeds to navigate itself out of the way and wait until it spots an external threat like a comet or hostile aliens. It then destroys those threats.
(The above is not a recommended FAI design. It is a refutation by example of an absolute claim that would exclude the above.)
Didn’t I myself describe it and outline how this one also limits opportunities normally available to evolution, for instance? It’s to a very small extent a straitjacket on life, as it does very little.
In case anyone is unfamiliar with the concept: Utility Monster.
It’s a total-utility maximising AI.
If it were a total-utility-maximizing AI, it would clone the utility monster (or start cloning everyone else if the utility monster is superlinear). edit: on the other hand, if it were an average-utility-maximizing AI, it would kill everyone else, leaving just the utility monster. In either case there’d be some serious population ‘adjustment’.
Not if that made the utility monster unhappy.
It doesn’t have to tell the monster. (this btw is one wireheading-related issue; I do quite hate the lingo here though; calling it wireheaded makes it sound like there isn’t a couple thousand years of moral philosophy about the issue and related issues)
I’m not aware of an alternative to “wireheading” with the same meaning.
Go classical - ‘lotus-eating’.
Good one.
http://en.wikipedia.org/wiki/Lotus-eaters
That’s the ancient Greeks writing about hypothetical wireheads. (‘moral philosophy’ is perhaps a bad choice of words for searching for Greek stuff; ethics is the Greek word)
A bit of searching around that showed nearly no references to lotus eating/lotus eaters in moral philosophy.
Something much closer to “wireheading” would be hedonism, and more specifically Nozick’s Experience Machine, which is pretty much wireheading, but isn’t thousands of years old, and has been referenced here.
(And the term “wirehead” as used here probably comes from the Known Space stories, so probably predates Nozick’s 1974 book)
Well, for one thing, it ought to be obvious that Mohammed would have banned a wire into the pleasure centre, but lacking the wires, he just banned alcohol and other intoxicants. The concept of ‘wrong’ ways of seeking pleasure is very, very old.
I don’t think you looked very hard—I turned up a few books apparently on moral philosophy by searching in Google Books for ‘moral (“lotus eating” OR “lotus-eating” OR “lotus eater” OR “lotus-eater”)’.
And yes, I’m pretty sure the wirehead term comes from Niven’s Known Space. I’ve never seen any other origin discussed.
It would be awfully hard to hide.
Sure, it could lock the monster in an illusory world of optimal happiness, or just stimulate his pleasure centers directly, etc. But unless we assume that the AI is working under constraints that prevent that sort of thing, the comic doesn’t make much sense.
There’s no clear line between ‘hiding’ and ‘not showing’. You can leave just a million people or so, to be put around the monster, and simply not show him the rest. It is not like the AI is turning every wall into a screen displaying the suffering on the pyramid construction sites. Or you can kill those people and show it in such a way that the monster derives pleasure from it. At any rate, anyone whose death would go unnoticed by the monster, or whose death does not sufficiently distress the monster, would die, if the AI is to focus on average pleasure.
edit: I think those solutions come to mind really easily when you know what a Soviet factory would do to exceed the five-year plan.
The AI explicitly wasn’t focused on average pleasure, but on total pleasure, as measured by average pleasure times the population.
Yep. I was just posting on what an average-pleasure-maximizing AI would do; that isn’t part of the story.
You’re all wrong — if the happiness of the utility monster compounds as the comic says, then you get greater happiness out of lumping it all into one monster rather than cloning.
Whoops. Panel 3 (y axis caption) and 6 (suicide not allowed) indeed make that clear.
Why are you wasting your time on-line? Felix wants more pyramids.
Chain gangs strike me as sub-optimal for building pyramids or total happiness.
Clearly, Felix prefers pyramids built by chain-gangs.
It’s a GOOD life.
It is a good thing that you are thinking good things about Felix. This means he is happier if you aren’t in the cornfield, since you are a good person with no bad thoughts.
I’m not sure why the down vote.
If it helps, Konkavistador and I are referring to a classic horror story called “It’s a Good Life”.
Felix means happy (or lucky), and is the origin of the word felicity. It took me a while to realize this, so I thought I would note it. Is it obvious for all native English speakers?
Not obvious to me. I did know the meaning of Felix, but it’s deep enough in the unused drawers of my memory that I might never have made the connection without someone pointing it out.
It was obvious to me, I’m not a native English speaker. Anyone knowing a bit of Latin is probably going to catch it.
I am a native English speaker, but, yeah, without the Latin I probably wouldn’t have noticed.
Not obvious to me. I honestly would never have made the connection.
Obvious to me. Native speaker.
The latest SMBC comic is now an illustrated children’s story which more or less brings up parallel thoughts to Cynical about Cynicism.
Everyone’s talking about this as if it were a hypothetical, but as far as I can tell it describes pretty accurately how hierarchical human civilizations tend to organize themselves once they hit a certain size. Isn’t a divine ruler precisely someone who is more deserving and more able to absorb resources? Aren’t the lower orders people who would not appreciate luxuries and indeed have fully internalized such a fact (“Not for the likes of me”)?
If you skip the equality requirement, it seems history is full of utilitarian societies.
Another good one on Ethics
Felix is 3^^^3 units happy. And no dust speck in his eyes. What is torturing millions for this noble goal?
I, of course, reject that “sequence” which preaches exactly this.
That’s because your brain doesn’t have the ability to imagine just how happy Felix is and fails to weigh his actual happiness against humanity’s.
And since I don’t want that ability I think we are still fine. At the end of the day I’m perfectly ok with not caring about Felix that much.
BTW, would anyone have a one on one chat with me about the dust speck argument?
Yeah, yeah.
I am very glad that the people who advocate the Felix morality with the “dust speck” sophism have virtually no chance of really accomplishing something in the AI field.
Downvoted: You should let someone actually advocate the Felix argument, before bashing them for supposedly advocating it.
So far, generalizing from my own example, there’s at least one person who agrees with the dust speck argument but opposes the Felix argument. I know of no person yet who agrees with the Felix argument. So, I find it obnoxious that you effectively pretend I advocate the Felix argument when I don’t.
You may think I’m inconsistent in supporting the one but not the other, but don’t pretend I support both, okay?
Make that two.
Thomas, your treatment of this is a reductio ad absurdum of what I feel at least 33% of LW believes. Worse, when we’re (and by we, I mean everyone else, since I’m not going to bother getting involved in this further) calling you on it and actually trying to have a dialogue, you’re dismissing us and insulting us.
To be fair, one could argue for dust specks without Felix morality by weighing increases in an individual’s happiness with diminishing returns, such that they asymptotically approach some limit. (But then you would sacrifice one individual’s arbitrarily unimaginable happiness just to bring someone else an arbitrarily small sliver towards the baseline.)
Since it’s meaningless to call dust specks “right”, just consider it true if you want to. I don’t want to so I don’t.
Which sequence is that?
This one.
I am not sure if it counts into “The Sequence”, I guess it does.
The problem is the line of reasoning where “50 years of torture” is better than 3^^^3 years with a dust speck in the eye every so often.
What, then, is the torture of all of humanity against the super-happy Felix with 3^^^3 pyramids? Nothing. By the same line of reasoning.
That’s not even the dilemma you linked to. The dilemma you linked to is: “one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes”.
It’s probably bad practice to say two lines of reasoning are the same line of reasoning, if you don’t believe in either of them.
For starters, I don’t need to have a positive factor for Felix’s further happiness in my utility function. That alone is a significant difference.
Look. You have one person under terrible torture for 50 years on one side, and a gazillion people with a slight discomfort every year or so on the other side.
It is claimed that the first is better.
Now, you have a small humanity as is, only enslaved for pyramid building for Felix. He has eons of subjective time to enjoy these pyramids and he is unbelievably happy. Happier than any man, woman or child could ever be. The happiness of Felix outweighs the misery of billions of people by a factor of a million.
What’s the fundamental difference between those two cases? I don’t see it, do you?
The only similarity between those cases is that they involve utility calculations you disagree with. Otherwise every single detail is completely different. (e. g. the sort of utility considered, two negative utilities being traded against each other vs. trading utility elsewhere (positive and negative) for positive utility, which side of the trade the single person with the large individual utility difference is on, the presence of perverse incentives, etc, etc).
If anything it would be more logical to equate Felix with the tortured person and treat this as a reductio ad absurdum of your position on the dust speck problem. (But that would be wrong too, since the numbers aren’t actually the problem with Felix, the fact that there’s an incentive to manipulate your own utility function that way is (among other things).)
You aren’t seeing the forest for the trees… the thing that is identical is that you are trading utilities across people, which is fundamentally problematic and leads to either a tortured child or a utility monster, or both.
Omelas is a goddamned paradise. Omelas without the tortured child would be better, yeah, but Omelas as described is still better than any human civilization that has ever existed. (For one thing, it only contains one miserable child.)
Well, it seems to me they are trading N dust specks vs torture in Omelas. edit: Actually, I don’t like Omelas [as an example]. I think that miserable child would only make the society way worse, with the people just opting to e.g. kill someone whenever it ever so slightly increases their personal expected utility. The child in Omelas puts them straight on the slippery slope, and making everyone aware of the slippage makes people slide down for fun and profit.
Our ‘civilization’ though, of course, is a goddamn jungle, and so it’s pretty damn bad. It’s pretty hard to beat on the moral wrongness scale from first principles; you have to take our current status quo and modify it to get to something worse (or take our earlier status quo).
Your edit demonstrates that you really don’t get consequentialism at all. Why would making a good tradeoff (one miserable child in exchange for paradise for everyone else) lead to making a terrible one (a tiny bit of happiness for one person in exchange for death for someone else)?
People are individual survival machines, that’s why. Each bastard in Omelas knows at the gut level (not in some abstract way) that there’s a child being miserable specifically for a tiny bit of his happiness. His, personally. He will then kill for a larger bit of his happiness. He isn’t society. He’s an individual. It is all between him and that child. At the very best, between him & his family, and that child. The society ain’t part of the equation. (And if it is, communism should have worked perfectly in that universe.) [assuming that the individual believes he won’t be caught]
edit: also I think you don’t understand the story. They didn’t take the child apart for much-needed organs to save other folks in Omelas. The child is miserable for the purpose of bringing a sense of unity into the commune, for the purpose of making them value their happiness. That is already very irrational, and not only that, but also entirely contrary to how Homo sapiens behave when exposed to gross injustice.
edit: To explain my use of language. We are not talking about rational agents and what they ought to decide. We are talking of irrational agents that are supposedly (premise of the story) made better behaved by participation in a pointless and evil ritual, which is the opposite of the known effect of direct participation in that sort of ritual on a populace. That’s why the story makes a poor case against utilitarianism. Because the consequence is grossly invalid.
Tapping out, inferential distance too wide.
Whatever. The reason why I don’t like that story too much is that I do not believe that, given the way Homo sapiens are, showing them that child in Omelas would have the consequence stated in the story, even if they are instructed that this is the consequence. It’s too much of a stretch. The effect of such a thing on H. sapiens, I would forecast, would be entirely the opposite. Omelas is doing something more similar to how you break in soldiers for an effective Holocaust death squad, the soldiers that later kill others or themselves outside of orders. You make the soldiers participate all together in something like that. That’s why I don’t like this as an example. I’m arguing against my own point of bringing it up as an example. Because the reason we don’t like Omelas is that keeping a child like this won’t have a positive consequence. (and for it to have the stated positive consequence, the people already have to have a grossly irrational reaction to exposure to that child)
This is either wrong (the utility functions of the people involved aren’t queried in the dust speck problem) or so generic as to be encompassed in the concept of “utility calculation”.
Aggregating utility functions across different people is an unsolved problem, but not necessarily an unsolvable one. One way of avoiding utility monsters would be to normalize utility functions. The obvious way to do that leads to problems such as arachnophobes getting less cake even if they like cake equally much, but IMO that’s better than utility monsters.
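To make the normalization idea concrete, here is a minimal sketch (a toy illustration with made-up numbers, not a proposed solution to preference aggregation): each person's utility function is affinely rescaled to [0, 1] over the outcomes on the table, then summed. It shows both effects mentioned above: the utility monster is neutralized, and the arachnophobe ends up getting less cake.

```python
def normalize(utilities):
    """Affinely rescale a dict of outcome -> raw utility into [0, 1]."""
    lo, hi = min(utilities.values()), max(utilities.values())
    if hi == lo:
        return {o: 0.0 for o in utilities}
    return {o: (u - lo) / (hi - lo) for o, u in utilities.items()}

def aggregate(people, outcomes):
    """Sum of normalized utilities for each outcome."""
    normed = [normalize(p) for p in people]
    return {o: sum(n[o] for n in normed) for o in outcomes}

# Utility monster: Felix's raw numbers are astronomically larger,
# but after normalization he counts no more than anyone else.
outcomes = ["build_pyramids", "no_pyramids"]
felix  = {"build_pyramids": 1e12,  "no_pyramids": 0.0}
worker = {"build_pyramids": -50.0, "no_pyramids": 0.0}
print(aggregate([felix, worker, worker, worker], outcomes))
# -> no_pyramids wins 3.0 to 1.0

# The side effect mentioned above: the arachnophobe's scale is stretched by
# spider-fear, so their equal liking of cake shrinks to almost nothing.
outcomes2    = ["cake_to_A", "cake_to_B", "spiders_on_A"]
arachnophobe = {"cake_to_A": 10.0, "cake_to_B": 0.0,  "spiders_on_A": -1000.0}
plain_person = {"cake_to_A": 0.0,  "cake_to_B": 10.0, "spiders_on_A": 0.0}
print(aggregate([arachnophobe, plain_person], outcomes2))
# -> cake_to_B beats cake_to_A, so A (the arachnophobe) gets less cake
```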
The utilities of many people form a vector; you have to map it to a scalar value, which loses a lot of information in the process, and it seems to me that however you do it, you end up with some sort of objectionable outcome. edit: I have a feeling one could define it reasonably with some sort of Kolmogorov-complexity-like metric that would grow incredibly slowly for the dust specks and would never equal whatever hideously clever thing our brain does to most of its neurons when we suffer; the suffering would beat the dust specks on complexity (you'd have to write down the largest number you can express in as many bits as are being 'tortured' in the brain; only around that number of dust specks does the sum start getting to the torture level). We need to understand how pain works before we can start comparing pain vs dust specks.
Really? Every use of utilities I have seen either uses a real-world measure (such as money) with a note that it isn't really utilities, or goes directly for unfalsifiable handwaving. So far I haven't seen anything to suggest that "aggregating utility functions" is even theoretically possible. For that matter, most of what I have read suggests that even an individual's "utility function" is usually unmanageably fuzzy, or even unfalsifiable, itself.
You must know that every pyramid Felix does not get causes him a dust-speck-sized pain.
Humanity's suffering is only several billion times as much as a single 50-year torture of one individual.
Nothing compared to 3^^^3 “dust specks”??!
Felix is essentially a Utility Monster: a thought experiment that’s been addressed here before. As that family of examples shows, happiness-maximization breaks down rather spectacularly when you start considering self- or other-modification or any seriously unusual agents. You can bite that bullet, if you want, but not many people here do; fortunately, there are a few other ways you can tackle this if you’re interested in a formalization of humanlike ethics. The “Value Stability and Aggregation” post linked above touches on the problem, for example, as does Eliezer’s Fun Theory sequence.
You don’t need any self-modifying or non-humanlike agents to run into problems related to “Torture vs. Dust Specks”, though; all you need is to be maximizing over the welfare of a lot of ordinary agents. 3^^^3 is an absurdly huge number and leads you to a correspondingly counterintuitive conclusion (one which, incidentally, I’d estimate has led to more angry debate than anything else on this site), but lesser versions of the same tradeoff are quite realistic; unless you start invoking sacred vs. profane values or otherwise define the problem away, it differs only in scale from the same utilitarian calculations you make when, say, assigning chores.
In one case (torture to avoid the specks), the larger portion of people is better off if you pick the single person. In the other case (build pyramids to please Felix), the larger portion of people is worse off if you pick the single person.
So if my position were "The majority should win", it would be right to torture the person and it would be right to depose Felix.
I’m not sure if it’s a fundamental difference or a good difference, but I think that means I can lay out the following 4 distinct answer pairs:
Depose Felix, Torture Man: Majority wins.
Adore Felix, Speck people: Minority wins.
Adore Felix, Torture Man: Mean Happiness wins.
Depose Felix, Speck People: Minimum happiness wins. (Assuming Felix is either happier about being deposed than an average person with a dust speck in their eye, or dead and no longer counted toward minimum happiness.)
So I think I can see all 4 distinct positions, if I’m not missing something.
Edit: Fixed spacing.
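For concreteness, here's a toy numerical sketch of the four positions listed above (a rough illustration, not anyone's actual proposal; the utility numbers are arbitrary stand-ins chosen only so the orderings come out as described: a speck costs 1, the torture costs 5, Felix's pyramid joy is +1000, the builders' misery is -10):

```python
from statistics import mean

# Each scenario maps an option to a list of per-person utilities.
specks = {"torture_man": [-5] + [0] * 9,     "speck_people": [-1] * 10}
felix  = {"adore_felix": [1000] + [-10] * 9, "depose_felix": [-1] + [0] * 9}

def by_aggregate(scenario, agg):
    """Pick the option whose aggregate (mean, min, ...) utility is highest."""
    return max(scenario, key=lambda opt: agg(scenario[opt]))

def by_majority(scenario, invert=False):
    """Pick the option preferred by more people (or fewer, if invert=True)."""
    (a, ua), (b, ub) = scenario.items()
    votes_a = sum(x > y for x, y in zip(ua, ub))
    votes_b = sum(y > x for x, y in zip(ua, ub))
    if invert:                      # "minority wins"
        votes_a, votes_b = votes_b, votes_a
    return a if votes_a >= votes_b else b

print("majority:", by_majority(felix), "/", by_majority(specks))
print("minority:", by_majority(felix, True), "/", by_majority(specks, True))
print("mean:    ", by_aggregate(felix, mean), "/", by_aggregate(specks, mean))
print("minimum: ", by_aggregate(felix, min), "/", by_aggregate(specks, min))
# majority: depose_felix / torture_man   ("Depose Felix, Torture Man")
# minority: adore_felix  / speck_people  ("Adore Felix, Speck people")
# mean:     adore_felix  / torture_man   ("Adore Felix, Torture Man")
# minimum:  depose_felix / speck_people  ("Depose Felix, Speck People")
```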
Imagine that there is one person tortured for 50 years and then free of any dust specks for the next 3^^^3 years.
Then we don’t have “the larger portion of people” anymore. Is anything different in such a case?
If I understand the dilemma, in your most recent phrasing, it’s this: A person who lives 3^^^3 years either:
a) has to suffer a dustspeck per year
b) has to suffer 50 years of torture at some point in that time, then I assume gets the memory of that torture deleted from his mind and his mind's state restored to what it was before the torture (so that he doesn't suffer further disutility from that memory or the broken mind-state; he only has to suffer the torture itself). He lives the remaining 3^^^3 years dustspeck-free.
If we don’t know what his own preferences are, and have no way of asking him, what should we choose on his behalf?
But what does this have to do with Felix?
It is argued in the said sequence how much better it is to have one person tortured for 50 years than 3^^^3 people having a slight discomfort.
Which preferences are in question now?
Can we have one dilemma at a time, please, Thomas? You said something about 3^^^3 years—therefore you’re not talking about the dilemma as stated in the original sequence, as that dilemma doesn’t say anything about 3^^^3 years.
The preferences relating to the original dilemma, are the preferences of the person who presumably prefers not to get tortured, vs the preferences of 3^^^3 people who presumably prefer not to get a dust speck in the eye.
Well, first of all, I’m assuming that you’re doing that to both groupings (since otherwise I could say “Well, one has only one person and one has a massive number of people, which is a difference.” but that seems like a trivial point)
So if you apply it to both, then it's just one person considering tradeoff A (pay torture to go speck-free for eons),
And another person considering tradeoff B (personally build pyramids for eons to get to live in your own collection of pyramids for some years).
I could say that in one case the pain is relatively dense (torture, condensed to 50 years) and the pleasure is relatively sparse (speck-free, over 3^^^3 years), and that in the other case the pain is relatively sparse (slave labor, spread out over a long time) and the pleasure is relatively dense (incomprehensible pyramidgasm).
I’m not sure if that matters or in what ways that difference matters. I’m really not up to date on how your brain handles that specifically and would probably need to look it up further.
No. Building the pyramids is done by humans. Enjoying them, for much, much longer as they stand there, is done by Felix.
Maybe the amount of our pleasure from the Giza pyramids has already exceeded the pain invested in building them. I don't know.
Can all the pains of a slave be justified by all the pleasures of the tourists visiting the hole in the rock he was forced to carve for 50 years?
Or is a large group of sick sadists entitled to slowly torture someone, since the sum of their pleasure will be greater than the pain of the unlucky one?
I don’t think so.
I’ve heard that the labourers who made the pyramids were actually quite well paid.
Was it that much pain? I read in National Geographic, IIRC, that the modern archaeological conception was that the pyramids were mostly or entirely built by paid labor—Nile farmers killing time during the dry season. This may even be a good thing, depending on whether it diverted imperial tax revenue from foreign adventurism into monument/tomb-building.
I saw it, too. I should have used another example. Mayan or Aztec pyramids, maybe.
Well, it’s still a fun Fermi calculation problem, anyway.
Let's see, the Pyramids have been the target of tourism since at least the original catalogue of wonders of the ancient world, by Antipater of Sidon ~140 BC, which includes "the great man-made mountains of the lofty pyramids". So that's ~2150 years of tourism (2012+140). Quickly checking, Wikipedia says 12.8 million people visited Egypt for tourism in 2008, but surely not all of them visited the pyramids? Let's halve it to 6 million.
Let’s pretend Egyptian tourism followed a linear growth between 140 BC with one visitor (Antipater) and 6 million in 2012 (yes, world population & wealth has grown and so you’d expect tourism to grow a lot, but Egypt has been pretty chaotic recently), over 2150 years. We can just average that to 3 million a year, which gives us a silly total number of tourists of 2150 * 3 million or 6.45 billion visitors.
There are 138 pyramids, WP says, with the Great Pyramid estimated at 100,000 workers. Let's halve it (again with the assumptions!) to 50k workers a pyramid: 50,000 * 138 = 6.9m workers total.
This gives us the visitor:worker ratio of 6.45b:6.9m, or 21,500:23, or 934.8:1.
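For anyone who wants to fiddle with the assumptions, the whole back-of-the-envelope calculation fits in a few lines (the inputs below are just the guesses from the comment above, nothing more):

```python
years = 2150                    # ~140 BC (Antipater of Sidon) to 2012
avg_visitors_per_year = 3e6     # half of ~6 million/yr, i.e. the linear-growth average
total_visitors = years * avg_visitors_per_year      # 6.45e9

pyramids = 138
workers_per_pyramid = 50_000    # half of the 100k estimate for the Great Pyramid
total_workers = pyramids * workers_per_pyramid      # 6.9e6

print(total_visitors / total_workers)               # ~934.8 visitors per worker
```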
And of course the pyramids are still there, so whatever the real ratio, it’s getting better (modulo issues of maintenance and restoration).
Maybe those pyramids in Egypt are not so bad, after all.
But how much tourism would it take to justify the Aztec pyramids, where people were slaughtered?
How many billions of tourists would have to come for it to be worth starting them all over again?
You'd need a heck of a lot more tourism than for Egypt… although apparently there's quite a range of estimates of the deaths, from fewer than 20,000 a year to more than 200,000 a year. Given how much less tourism the Aztec pyramids get (inasmuch as apparently only 2 small, unimpressive Aztec pyramids survive, with all the impressive ones like Tenochtitlan destroyed), it's safe to say that the utilitarian calculus will never work out for them.
It seems to me that any historical event that was both painful to the participants, and interesting to read and learn about after the fact, creates the same dilemma that’s been discussed here. Will World War Two have been a net good if 10,000 years from now trillions of people have gotten incredible enjoyment from watching movies, reading books, and playing videogames that involve WWII as a setting in some way?
The first solution to this dilemma that comes to mind is that ready substitutes exist for most of the entertainments associated with these unpleasant events. If the Aztecs had built their pyramids and then never sacrificed anyone on them it probably wouldn’t hurt the modern tourist trade that much. And if WWII had never happened and thus caused the Call of Duty videogame franchise to never exist, it wouldn’t have a big impact on utility because some cognates of the Doom, Unreal, and similar franchises would still exist (those franchises are based on fictional events, so no one got hurt inspiring them).
In fact, if I was to imagine an alternate human history where no war, slavery, or similar conflict had ever happened, and the inhabitants got all their enjoyment from entertainment media based on fictional conflicts, I think such a world would have a much higher net utility than our own.
Big romances have been inspired by much smaller events, it should be noted.
The first approximation that springs to my mind would be exponential growth rather than linear.
Sure—but can you offhand fit an exponential curve and calculate its summation? I'm sure it's doable with the specified endpoints and # of periods (just steal a compound-interest formula), but it's more work than halving and multiplying.
Well… the integral from t0 to t1 of exp(a*t + b) dt = (exp(a*t1 + b) − exp(a*t0 + b))/a, i.e. the difference between the endpoint values times the time needed to increase by a factor of e. A 6-million-fold increase is about 22.5 doublings (knowing 2^20 ≈ 1 million), hence about 15 factors of e (knowing that ln 2 ≈ 0.7), which over ~2150 years gives a growth rate of roughly one part in 150 per year. Multiplying the ~6 million/year endpoint by ~150 gives a total number of tourists of about 1 billion (about six times less than Rhwawn's estimate—my eyeballs had told me it would be about one third… close enough!)
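The same estimate is easy to check numerically (same made-up endpoints as above: one visitor around 140 BC, ~6 million a year in 2012):

```python
import math

years = 2150
start, end = 1.0, 6e6
a = math.log(end / start) / years   # growth rate, ~0.0073/yr, i.e. roughly "one in 140"
total = (end - start) / a           # integral of start*exp(a*t) from t=0 to t=years
print(f"rate ~1/{1/a:.0f} per year, total ~{total/1e9:.2f} billion visitors")
# -> roughly 0.8 billion, matching the ~1 billion eyeball estimate above
```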
I'm actually a little surprised that such a gross approximation is off by only 6x. For a Fermi estimate that's perfectly acceptable.
Being very very outraged isn’t really an argument.
Give us your own (non-utilitarian, I assume) decision theory that you consider to encapsulate all that is good and moral, if you please.
If you can't, please stop being outraged at those of us who try to solve the problem, even if you feel we've taken wrong turns on the path towards the solution.
Found this by random clicking around, I expect no one’s still reading this, but maybe we’ll catch each other via Inbox:
How about "optimize the worst case" from game theory? It settles both the dust-speck-vs.-torture and the Utility Monster Felix problems neatly.
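A minimal sketch of what that rule (maximin) would do on the two dilemmas; the utility numbers are arbitrary stand-ins, only their ordering matters:

```python
def maximin_choice(options):
    """Pick the option whose worst-off individual is best off."""
    return max(options, key=lambda name: min(options[name]))

# Torture vs. dust specks: per-person utilities under each option.
specks_case = {
    "torture_one":    [-1000] + [0] * 5,      # one person tortured, rest untouched
    "speck_everyone": [-1] * 6,               # everyone gets a dust speck
}
# Felix: everyone labours on pyramids vs. Felix goes without.
felix_case = {
    "build_pyramids": [10**9] + [-500] * 5,   # Felix ecstatic, everyone else suffers
    "depose_felix":   [-1] + [0] * 5,         # Felix mildly unhappy, rest fine
}

print(maximin_choice(specks_case))  # -> speck_everyone (worst case -1 beats -1000)
print(maximin_choice(felix_case))   # -> depose_felix   (worst case -1 beats -500)
```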
I don’t know, 3^^^3 is a pretty long time to fix brain trauma. Or are you offering complete restoration after the torture? In that case, I might just take it.
I am not offering anything at all. I strongly advise you NOT to trade the slight discomfort over a long time period for a horrible torture over a shorter period.
One fundamental difference is that I don't care about Felix's further happiness. After some point, I may even resent it, which would make his additional happiness have negative utility to me.
Another difference is that happiness may be best represented as a percentage with an upper bound of e.g. 100% happy, rather than as an integer you can keep adding to without end.
I think Felix's case may be an interesting additional scenario to consider, in order to be sure that AIs don't fall victim to it (e.g. by creating a superintelligence and making it super-happy, at the expense of normal human happiness). But it's not the same scenario as the specks.
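As a minimal illustration of the bounded-vs-unbounded point above (toy numbers, nothing more): with an unbounded happiness scale, a single superhappy agent can dominate a straight sum; with happiness capped at 1.0 per person, it can't.

```python
def total_happiness(utilities, cap=None):
    """Straight sum, optionally capping each person's happiness at `cap`."""
    if cap is not None:
        utilities = [min(u, cap) for u in utilities]
    return sum(utilities)

crowd       = [0.6] * 1000           # a thousand moderately happy people
felix_world = [1e9] + [0.1] * 999    # Felix superhappy, everyone else miserable

print(total_happiness(crowd), total_happiness(felix_world))                    # unbounded: Felix's world "wins"
print(total_happiness(crowd, cap=1.0), total_happiness(felix_world, cap=1.0))  # bounded: it doesn't
```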
The FAI should make a drug which will make you happy for Felix. edit: to clarify. The two choices here are not happiness achieved naturally vs happiness via wireheading. The two choices are intense AI-induced 'natural' unhappiness vs drug-induced happiness. It's similar to having your hand amputated, with or without 'wireheading', err, painkillers. I think it is pretty clear that if you have someone's hand amputated, it is better if they can't feel it and see it. Be careful with non-wireheading FAIs, lest all surgery be without anaesthesia (perhaps with only the muscle relaxant).
Cute, but that’s effectively the well-known scenario of Wireheading where the complexity of human value is replaced by mere ‘happiness’.
Well, in some sense, achieving happiness by anything other than reproduction is already wireheading. It doesn't need to be with a wire; what if I make a video which evokes an intense feeling of pleasure? How far can you go before it is a mind hack?
edit: actually, I think the AI could raise people to be very empathetic towards Felix, and very happy for him. Is it not good to raise your kids so that they can be happy in the world the way it is (when they can't change anything anyway)?
“achieving happiness by anything other than [subgoals of] reproduction” is wireheading from the perspective of my genes, and if they want to object I’m not stopping them. Happiness via drugs is wireheading from the perspective of me, and I object myself.
What if there's a double rainbow? What if you have a lower-than-'normal' level of some neurotransmitter and under-appreciate the double rainbow without drugs? What if it's higher than 'normal'?
I'm not advocating drugs, by the way, just pointing out the difficulty in making any binary distinction here. Natural happiness should be preferred to wireheaded happiness, but society does think that some people should take anti-depressants. If you are to labour in the name of the utility monster anyway, you might as well be happy. You object to happiness via drugs as a substitute for happiness without drugs, but if the happiness without drugs is not going to happen—then what?
No. This reduces the words to the point of meaninglessness. Human beings have values other than reproduction, values that make them happy when satisfied—art, pride, personal achievement, understanding, etc. Wireheading is about being made happy directly, regardless of the satisfaction of the various values.
The scenario previously discussed about Felix is that he was happy and everyone else suffered. Now you're posing a scenario where everyone is happy, but they're made happy by having their values rewritten to place extreme value on Felix's happiness instead.
At this point, I hope we’re not pretending it’s the same scenario with only minor modifications, right? Your scenario is about the AI rewriting our values, it’s not about trading our collective suffering for Felix’s happiness.
Your scenario can effectively remove the person of Felix from the situation altogether, and the AI could just make us all very happy that the laws of physics keep on working.
You say art… what if I am a musician and I am making a song? That's good, right? What if I get 100 experimental subjects to sit in an MRI as they listen to test music, and, using my intelligence and some software tools, make a very pleasurable song? What if I know that it works by activating such-and-such connections here and there which end up activating the reward system? What if I don't use MRI, but use the internal data available in my own brain, to achieve the same result?
I know that this is arriving at meaninglessness, I just don't see it as reducing the words anywhere; the words already only seem meaningful in the context of a limited depth of inference, but it all falls apart if you take more steps (like an axiomatic system that leads to self-contradiction). Making people happy [as a terminal goal], this way or that, just leads to some form of really objectionable behaviour if done by something more intelligent than a human.
Be specific about what you are asking, please. What does the “what if” mean here? Whether these thing should be considered good? Whether such things should be considered “wireheading”? Whether we want an AI to do such things? What?
This claim doesn't seem to make much sense to me. I've already been made non-objectionably happy by people more intelligent than me from time to time. My parents, when I was a child. Good writers and funny entertainers, as an adult. How does it become automatically "really objectionable" if it's "something more intelligent than human" as opposed to "something more intelligent than you, personally"?
I'm trying to make you think a little deeper about your distinction between wireheading and non-wireheading. The point is that your choice of the dividing line is entirely arbitrary (and most people don't agree where to put the dividing line). I don't know where you put the dividing line, and frankly I don't care; I just want you to realize that you're drawing an arbitrary line on the beach: to the left of it is the land, to the right is the ocean. edit: That's how maps work, not how the territory works, btw.
I'd say they had a goal of achieving something other than happiness, and the happiness was incidental.
Don't assume you know how deeply I think about it. The only thing I've effectively communicated to you so far is that I consider it ludicrous to say that "achieving happiness by anything other than reproduction, is already wireheading".
We can agree Yes/No, that this discussion doesn’t have much of anything to do with the Felix scenario, right? Please answer this question.
Perhaps people don’t have to agree, and the people whose coherent extrapolated volition allows a situation “W” to be done to them, should so have it done to them, regardless of whether you label W to be ‘wireheading’ or ‘wellbeing’.
Or perhaps not. After all, it’s not as if I ever declared Friendliness to be a solved problem, so I don’t know why you keep talking to me as if I claimed it’s easy to arrive at a conclusion.
"Whether such things should be considered 'wireheading'" is what I want you to consider, yes.
I don't have a binary classifier, absolute wireheading vs non-wireheading; I have a wireheadedness quantity. Connecting a wire straight into your pleasure centre has a wireheadedness of (very close to) 1, reproduction (maximizing the expected number of copies of each gene) has a wireheadedness of 0, taking heroin is close to 1, taking LSD is lower, and the wireheadedness of art varies depending on how much of your brain is involved in making pleasure out of the art (how involved the art is), and perhaps on how much of a hack the art is, though ultimately all art is to a greater or lesser extent a hack. edit: and I actually earn my living sort of making art (I make CGI software, but also do CGI myself).
I don't consider low wireheadedness to be necessarily good. Those are Christian moral connotations, which I do not share, as an atheist raised in a non-religious family.
Happiness, as a state of mind in humans, seems to me to be less about how strong the "orgasms" are than about how frequently they occur without lessening the probability that they will continue to occur. So what problems might there be with maximizing the total future happy seconds experienced by humans, including emulations thereof (other than describing the concepts of 'human' and 'happiness' to a computer with sufficient accuracy)?
I think doing so would extrapolate to increasing population and longevity up to resource constraints and the point of diminishing returns on improving average happiness uptime and on existential-risk mitigation, which seem to me to be the crux of people's intuitions about the Felix and wireheading problems.
It’s hedonistic total-utilitarianism vs preference based consequentialism. That’s a big difference. Not only would the ‘sequence’ you reject not advocate preferring to torture humanity for the sake of making Felix superhappy, even in the absence of negative externalities it would still consider that sort of ‘happiness’ production a bad thing even for Felix.
Hahaha.
Seriously though: either Moral Universalism (and absolutism) is correct, in which case we could make an AI that would by itself develop a very agreeable universal moral code, similar to how you can do it for mathematics or the laws of physics (instead of us trying to build our customs into the AI); or it is incorrect, there's no path to an absolute moral code, and any FAI is going to be a straitjacket on humanity, at best implementing (some of) our customs and locking those in, and at worst implementing and enforcing something else, like in that comic.
Saying no FAI exists in design space that could satisfy us is equivalent to saying nothing can satisfy us. In other words, if you are correct then the AI isn’t the problem and humanity would be “straitjacketed” anyway.
Saying we could never build an AI that would satisfy us because of the technical difficulty is plausible, but I don’t think that’s what you are saying.
I don't see how not being fully satisfied is a straitjacket. I'm saying that our (mankind's) maximum satisfaction may come when straitjacketed, because mankind isn't sane (and there may not be any truly sane morality system; edit: to clarify, if there is a truly sane morality system, then mankind can be cured of its insanity).
I was using the term “satisfied” to include all human preferences, including the desire to not be “straitjacketed”.
If human preferences are inconsistent, then humans still can't do any better than an AI, for there is an AI in design space that does nothing in our world but would make similar worlds look exactly like ours.
You assume that the utility of two different worlds cannot be exactly equal. edit: or maybe you don't. In any case, this AI which does absolutely nothing in our world is no more useful than an AI that does nothing in all possible worlds, or just a brick.
Also, the desire for mankind (and life) not to be straitjacketed is my view; I'm not sure it is coherently shared by mankind, and in fact I'm not even sure I like where things are going if mankind is not straitjacketed in some way. edit: to clarify. I like the heuristic of maximizing future choices for myself. It is part of my values, one that I don't want removed. I don't like the [consequences of this] heuristic for mankind. Mankind is a meta-organism that is dumb and potentially self-destructive.
edit: To clarify. What I am saying is that there's a conflict between two values whose product matters: survival vs freedom. Survival without freedom is bad; freedom without survival is nonsense.
Sorry, I wasn't being clear. The point was that saying no AI can do better than humanity implies that our world is optimal out of all similar worlds. (I believe there are much stronger arguments than this against what you are saying, but this one should suffice.)
It only implies so if your AI is totally omniscient.
edit: Anyhow, I can of course think of an AI that can do better than humanity: the AI sits inside Jupiter and nudges away any incoming comets and asteroids, and that's it (then, as the sun burns up and then burns out, it moves Earth around). The problem starts when you make the AI discriminate between very similar worlds. edit: and even that asteroid-stopping AI may be a straitjacket on intelligent life, as it may be that mankind is the wrong thing entirely and should be permitted to kill itself, and then the meteorite impacts should be allowed so that the ants get a chance.
I don’t know much about my own extrapolated preferences but I can reason that as my preferences are the product of noise in the evolutionary process, reality is unlikely to align with them naturally. It’s possible that my preferences consider “mankind a wrong thing entirely”; but that they would align with whatever the universe happens to produce next on earth (assuming the rise of another dominant species is even plausible) is incredibly unlikely. Anything that happens without a causal line of descent from human values is unlikely to align with human values.
Unlikely to align how, exactly? There are also common causes, you know; A and B can be correlated when A causes B, when B causes A, or when C causes both A and B.
It seems to me that you can require an arbitrary degree of alignment to arrive at an arbitrary unlikelihood, but some alignment via a common cause is nonetheless probable.
Well yes, but I would assume you would want more alignment, not less.
There's such a thing as over-fitting… if you have some noisy data, the theory that fits the data perfectly is just the table of the data (e.g. heights and falling times); a useful theory doesn't fit the data exactly in practice. If we make the AI fit perfectly to what mankind does, we could just as well make a brick and proclaim it an omnipotent, omniscient, mankind-friendly AI that will never stop mankind from doing something that mankind wants (including taking extinction risks).
False dichotomy.
Name 3 things in the middle.
(examples chosen for being at different points in the spectrum between the two options, not for being likely)
Moral Universalism could be true in some sense, but not automatically compelling, and the AI would need to be programmed to find and/or follow it.
There could be a uniquely specified human morality that fulfills much of the same purpose Moral Universalism does for humans.
It might be possible to specify what we want in a more dynamic way than freezing in current customs.
My original post had this possibility: you make an AI that develops much of the morality itself (which it would really have to). edit: note that the AI in question may be just a theorem prover that tries to find some universal moral axioms, but is not itself moral or compelled to implement anything in the real world.
What about in 10 million years? 100 million? A straitjacket for intelligent life.
We would still want some limits from our values right now, e.g. so that society wouldn't somehow steer itself into suicide. Even rules like "it is good if 99% of people agree with it" can steer us into some really nasty futures over time. Another issue is the possibility of de-evolution of human intelligence. We would not want to lock in all the customs, but some of today's values would get frozen in.
The second two exceptions would clearly not be required for the purpose of rejecting a dichotomy.
Name 1 then.
edit: and it's not even a dichotomy. There are the hypothetical AIs which implement some moral absolute that is good for all cultures, all possible cultures, and everyone; which we would invent, aliens would invent, whatever we evolve into could invent, etc. If those do not exist, then what exists that isn't to some extent culturally specific to H. sapiens circa today?
The Unobtrusive Guardian. An FAI that concludes that humanity's aversion to being 'straitjacketed' is such that it is never ok for it to interfere with what humans do themselves. It proceeds to navigate itself out of the way and waits until it spots an external threat like a comet or hostile aliens. It then destroys those threats.
(The above is not a recommended FAI design. It is a refutation by example of an absolute claim that would exclude the above.)
Didn't I myself describe it, and outline how this one also limits opportunities normally available to evolution, for instance? It's a straitjacket on life only to a very small extent, as it does very little.