“Living” the way I used it means “living to the fullest” or, a little more specifically, “feeling really engaged in life” or “feeling fulfilled”.
I understand that you find your personal creative projects highly enjoyable, and that’s great. But you aren’t merely saying, “I enjoy X”, you’re saying, “enjoying Y instead of X is objectively wrong” (if I understand you correctly).
I used “living” to refer to a subjective state. There’s nothing objective about it, and IMO, there’s nothing objectively right or wrong about having a subjective state that is (even in your own opinion) not as good as the ideal.
I feel like your real challenge here is more similar to Kawoomba’s concern. Am I right?
They consume entertainment because it is enjoyable,
Do you find it more enjoyable to passively watch entertainment than to do your own projects? Do you think most people do? If so, might that be because the fun was taken out of learning, or people’s creativity was reduced to the point where doing your own project is too challenging, or people’s self-confidence was made too dependent on others such that they don’t feel comfortable pursuing that fulfilling sense of having done something on their own?
or because it facilitates social contact (which they in turn find enjoyable), not because they believe it will make them more efficient (though see below).
I puzzle at how you classify watching something together as “social contact”. To me, being in the same room is not a social life. Watching the same entertainment is not quality time. The social contact I yearn for involves emotional intimacy—contact with the actual person inside, not just a sense of being in the same room watching the same thing. I don’t understand how that can be called social contact.
Many people—yourself not among them, admittedly—find that they are able to internalize new ideas much more thoroughly if these ideas are tied into a narrative.
I’ve been thinking about this and I think what might be happening is that I make my own narratives.
Similarly, other people find it easier to communicate their ideas in the form of narratives
This, I can believe about Eliezer. There are places where he could have been more incisive but instead gets wordy to compensate. That’s an interesting point.
I am just not convinced that this statement applies to anything like a majority of “person+idea” combinations.
Okay, so to clarify, your position is that entertainment is a more efficient way to learn?
“Living” the way I used it means “living to the fullest” or, a little more specifically, “feeling really engaged in life” or “feeling fulfilled”.
I understand that you do not feel fulfilled when watching TV, but other people might. I would agree with your reply on Kawoomba’s sub-thread:
Now, if you want to disagree with me on whether they think they are “really living”, that might be really interesting. I acknowledge that mind projection fallacy might be causing me to think they want what I want.
For better or for worse, passive entertainment such as movies, books, TV shows, music, etc., is a large part of our popular culture. You say:
I puzzle at how you classify watching something together as “social contact”. To me, being in the same room is not a social life.
Strictly speaking this is true, but people usually discuss the things they watch (or read, or listen to, etc.) with their friends or, with the advent of the Internet, even with random strangers. The shared narratives thus facilitate the “emotional intimacy” you speak about. Furthermore, some specific works of passive entertainment, as well as generalized common tropes, make up a huge chunk of the cultural context without which it would be difficult to communicate with anyone in our culture on an emotional level (as opposed to, say, presenting mathematical proofs or engineering schematics to each other).
For example, if you take a close look at various posts on this very site, you will find references to the genres of science fiction and fantasy, as well as media such as movies or anime, which the posters simply take for granted (sometimes too much so, IMO; f.ex., not everyone knows what “tsuyoku naritai” means right off the bat). A person who did not share this common social context would find it difficult to communicate with anyone here.
Note, though, that once again I am describing a situation that exists, not prescribing a behavior. In terms of raw productivity per unit of time, I cannot justify any kind of entertainment at all. While it is true that entertainment has been with us since the dawn of civilization, so has cancer; just because something is old, doesn’t mean that it’s good.
Okay, so to clarify, your position is that entertainment is a more efficient way to learn?
No, this phrasing is too strong. I meant what I said before: many people find it easier to internalize new ideas when they are presented as part of a narrative. This does not mean that entertainment is a more efficient way to learn all things for all people, or that it is objectively the best technique for learning things, or anything of the sort.
Note, though, that once again I am describing a situation that exists, not prescribing a behavior. In terms of raw productivity per unit of time, I cannot justify any kind of entertainment at all. While it is true that entertainment has been with us since the dawn of civilization, so has cancer; just because something is old, doesn’t mean that it’s good.
Why try to justify entertainment in terms of productivity per time? Is there any reason this makes more sense than, say, justifying productivity in terms of how much entertainment it allows for?
Presumably, if your goal is to optimize the world, or to affect any part of it besides yourself in a non-trivial way, you should strive to do so as efficiently as possible. This means that spending time on any activities that do not contribute to this goal is irrational. A paperclip maximizer, for example, wouldn’t spend any time on watching soap operas or reading romance novels—unless doing so would lead to more paperclips (which is unlikely).
Of course, one could argue that consumption of passive entertainment does contribute to the average human’s goals, since humans are unable to function properly without some downtime. But I don’t know if I’d go so far as to claim that this is a feature, and not a bug, just like cancer or aging or whatever else evolution had saddled us with.
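To make the “downtime contributes to the goals” possibility concrete, here is a toy model; I should stress that every number in it is invented for illustration, and the model simply assumes that working depletes some capacity which rest restores:

```python
# Toy model with made-up numbers: working depletes a capacity that rest
# restores. Under that assumption, some nonzero rest fraction wins.

def total_output(rest_fraction: float, hours: int = 1000) -> float:
    """Total output over `hours`, resting for rest_fraction of every 10-hour block."""
    energy, produced = 1.0, 0.0
    for h in range(hours):
        if h % 10 < rest_fraction * 10:       # the rest hours of each block
            energy = min(1.0, energy + 0.2)   # rest restores capacity
        else:
            produced += energy                # output equals current capacity
            energy = max(0.0, energy - 0.05)  # work depletes capacity
    return produced

for r in (0.0, 0.1, 0.2, 0.3):
    print(f"rest fraction {r:.1f}: total output {total_output(r):.0f}")
# The never-resting agent burns out almost immediately (~10 units total);
# resting 20% of the time sustains far more output over the long run.
```

Whether real humans actually work like this, rather than downtime being pure deadweight, is exactly the bug-or-feature question.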
Presumably, if your goal is to optimize the world, or to affect any part of it besides yourself in a non-trivial way, you should strive to do so as efficiently as possible.
A decision theory that leads to the conclusion that we should all work like slaves for a future paradise, the slightest lapse incurring a cost equivalent to untold numbers of dead babies, and the enormity of the task meaning that we shall never experience it ourselves, is prima facie a broken decision theory. I’d even call it the sort of toxic mindwaste that RationalWiki loves to mock.
Once you’ve built that optimised world, who gets to slack off and just live in it, and how will they spend their time?
A decision theory that leads to the conclusion that we should all work like slaves for a future paradise, the slightest lapse incurring a cost equivalent to untold numbers of dead babies, and the enormity of the task meaning that we shall never experience it ourselves, is prima facie a broken decision theory.
Why exactly? I mean, my intuition also tells me it’s wrong… but my intuition has a few assumptions that disagree with the proposed scenario. Let’s make sure the intuition does not react to a strawman.
For example, when in real life people “work like slaves for a future paradise”, the paradise often does not happen. Typically, the people have a wrong model of the world. (The wrong model is often provided by their leader, and their work in fact results in building their leader’s personal paradise, nothing more.) And even if their model is right, their actions are more optimized for signalling effort than for real efficiency. (Working very hard signals more virtue than thinking and coming up with a smart plan to make a lot of money and pay someone else to do more work than they could do themselves.) Even with smart and honest people, there will typically be something they ignored or could not influence, such as someone powerful coming and taking the results of their work, or a conflict starting and destroying their seeds of the paradise. Or simply their internal conflicts, or lack of willpower to finish what they started.
The lesson we should take from this is that even if we have a plan to work like slaves for a future paradise, there is a very high prior probability that we missed something important. Which means that in fact we do not work for a future paradise, we only mistakenly think so. I agree that the prior probability is so high that even the most convincing reasoning and plans are unlikely to outweigh it.
However, for the sake of experiment, imagine that Omega comes and tells you that if you work like a slave for the next 20 or 50 years, the future paradise will happen with probability almost 1. You don’t have to worry about mistakes in your plans, because either Omega verified their correctness, or is going to provide you corrections when needed and predicts that you will be able to follow those corrections successfully. Omega also predicts that if you commit to the task, you will have enough willpower, health, and other necessary resources to complete it successfully. In this scenario, is committing to the slave work a bad decision?
In other words, is your objection “in situation X the decision D is wrong”, or is it “the situation X is so unlikely that any decision D based on assumption of X will in real life be wrong”?
When Omega enters a discussion, my interest in it leaves.
To the extent that someone is unable to use established tools of thought to focus attention on the important aspects of the problem, their contribution to a conversation is likely to be negative. This is particularly the case when it comes to decision theory, where it correlates strongly with pointless fighting of the counterfactual and muddled thinking.
Omega has its uses and its misuses. I observe the latter on LW more often than the former. The present example is one such.

And in future, if you wish to address a comment to me, I would appreciate being addressed directly, rather than with this pseudo-impersonal pomposity.
I intended the general claim as stated. I don’t know you well enough for it to be personal. I will continue to support the use of Omega (and simplified decision theory problems in general) as a useful way to think.
For practical purposes pronouncements like this are best interpreted as indications that the speaker has nothing of value to say on the subject, not as indications that the speaker is too sophisticated for such childish considerations.
For practical purposes pronouncements like this are best interpreted as indications
For practical purposes pronouncements like this are best interpreted as saying exactly what they say. You are, of course, free to make up whatever self-serving story you like around it.
This is evidently not a behavior you practice.
It is counterintuitive that you should slave for people you don’t know, perhaps because you can’t be sure you are serving their needs effectively. Even if that objection is removed by bringing in an omniscient oracle, there still seems to be a problem, because the prospect of one generation slaving to create paradise for another isn’t fair. The simple version of utilitarianism being addressed here only sums individual utilities, and is blind to things that can only be defined at the group level, like justice and equality.
However, for the sake of experiment, imagine that Omega comes and tells you that if you work like a slave for the next 20 or 50 years, the future paradise will happen with probability almost 1. You don’t have to worry about mistakes in your plans, because either Omega verified their correctness, or is going to provide you corrections when needed and predicts that you will be able to follow those corrections successfully. Omega also predicts that if you commit to the task, you will have enough willpower, health, and other necessary resources to complete it successfully. In this scenario, is committing to the slave work a bad decision?
For the sake of experiment, imagine that air has zero viscosity. In this scenario, would a feather and a cannonball take the same time to fall?
For the sake of experiment, imagine that air has zero viscosity. In this scenario, would a feather and a cannonball take the same time to fall?
I believe the answer is “yes”, but I had to think about that for a moment. I’m not sure how that’s relevant to the current discussion, though.
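To spell out the “yes”, under the idealization that the only force the air exerts is viscous drag (which vanishes along with the viscosity): both objects are then in free fall, so

$$m\ddot{y} = -mg \quad\Longrightarrow\quad \ddot{y} = -g,$$

the mass cancels, and both objects dropped from height $h$ land after $t = \sqrt{2h/g}$, about 1.43 s from 10 m. Strictly speaking, even an inviscid fluid can still exert buoyancy and pressure forces, so the idealization is doing real work in that answer.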
I think your real point might be closer to something like, “thought experiments are useless at best, and should thus be avoided”, but I don’t want to put words into anyone’s mouth.
My point was something like, “of course if you assume away all the things that cause slave labour to be bad then slave labour is no longer bad, but that observation doesn’t yield much of an insight about the real world”.
That makes sense, but I don’t think it’s what Viliam_Bur was talking about. His point, as far as I could tell, was that the problem with slave labor is the coercion, not the labor itself.
“Decision theory” doesn’t mean the same thing as “value system” and we shouldn’t conflate them.

Yep. A morality that leads to the conclusion that we should all work like slaves for a future paradise, the slightest lapse incurring a cost equivalent to untold numbers of dead babies, and the enormity of the task meaning that we shall never experience it ourselves, is prima facie a broken morality.
A decision theory that leads to the conclusion that we should all work like slaves for a future paradise … is prima facie a broken decision theory.
Why? I mean, I do agree with you personally, but I don’t see why such a decision theory is objectively bad. You ask,
Once you’ve built that optimised world, who gets to slack off and just live in it, and how will they spend their time?
But the answer depends entirely on your goals. These can be as modest as, “the world will be just like it is today, but everyone wears a party hat”. Or they could be as ambitious as, “the world contains as many paperclips as physically possible”. In the latter case, if you asked the paperclip maximizer “who gets to slack off?”, it wouldn’t find the question relevant in the least. It doesn’t matter who gets to do what, all that matters are the paperclips.
You might argue that a paperclip-filled world would be a terrible place, and I agree, but that’s just because you and I don’t value paperclips as much as Clippy does. Clippy thinks your ideal world is terrible too, because it contains a bunch of useless things like “happy people in party hats”, and not nearly enough paperclips.
However, imagine if we ran two copies of Clippy in a grand paperclipping race: one that consumed entertainment by preference, and one that did not. The non-entertainment version would win every time. Similarly, if you want to make the world a better place (whatever that means for you), every minute you spend on doing other things is a minute wasted (unless they are explicitly included in your goals). This includes watching TV, eating, sleeping, and being dead. Some (if not all) of such activities are unavoidable, but as I said, I’m not sure whether it’s a bug or a feature.
However, imagine if we ran two copies of Clippy in a grand paperclipping race: one that consumed entertainment by preference, and one that did not. The non-entertainment version would win every time.
This is proving the conclusion by assuming it.
Similarly, if you want to make the world a better place (whatever that means for you), every minute you spend on doing other things is a minute wasted (unless they are explicitly included in your goals). This includes watching TV, eating, sleeping, and being dead. Some (if not all) of such activities are unavoidable, but as I said, I’m not sure whether it’s a bug or a feature.
The words make a perfectly logical pattern, but I find that the picture they make is absurd. The ontology has gone wrong.
Some businessman wrote a book of advice called “Never Eat Alone”, the title of which means that every meal is an opportunity to have a meal with someone to network with. That is what the saying “he who would be Pope must think of nothing else” looks like in practice. Not wearing oneself out like Superman in the SMBC cartoon, driven into self-imposed slavery by memetic immune disorder.
BTW, for what it’s worth, I do not watch TV. And now I am imagining a chapter of that book entitled “Never Sleep Alone”.
Some businessman wrote a book of advice called “Never Eat Alone”, the title of which means that every meal is an opportunity to have a meal with someone to network with. That is what the saying “he who would be Pope must think of nothing else” looks like in practice. Not wearing oneself out like Superman in the SMBC cartoon, driven into self-imposed slavery by memetic immune disorder.
Actually, I think that the world described in that SMBC cartoon is far preferable to the standard DC comics world with Superman. I do not think that doing what Superman did there is a memetic immune disorder, but rather a (successful) attempt to make the world a better place.
You would, then, not walk away from Omelas?

I definitely wouldn’t. A single tormented child seems to me like an incredibly good tradeoff for the number of very high quality lives that Omelas supports, much better than we get with real cities.
It sucks to actually be the person whose well-being is being sacrificed for everyone else, but if you’re deciding from behind a veil of ignorance which society to be a part of, your expected well-being is going to be higher in Omelas.
Back when I was eleven or so, I contemplated this, and made a precommitment that if I were ever in a situation where I’m offered a chance to improve total welfare for everyone at the cost of personal torment, I should take it immediately without giving myself any time to contemplate what I’d be getting myself into, so in that sense I’ve effectively volunteered myself to be the tormented child.
I don’t disagree with maximally efficient altruism, just with the idea that it’s sensible to judge entertainment only as an instrumental value in service of productivity.
It sucks to actually be the person whose well-being is being sacrificed for everyone else, but if you’re deciding from behind a veil of ignorance which society to be a part of, your expected well-being is going to be higher in Omelas.
You’re assuming here that the “veil of ignorance” gives you exactly equal chance of being each citizen of Omelas, so that a decision under the veil reduces to average utilitarianism.
However, in Rawls’s formulation, you’re not supposed to assume that; the veil means you’re also entirely ignorant about the mechanism used to incarnate you as one of the citizens, and so must consider all probability distributions over the citizens when choosing your society. In particular, you must assign some weight to a distribution picked by a devil (or mischievous Omega) who will find the person with the very lowest utility in your choice of society and incarnate you as that person. So you wouldn’t choose Omelas.
This seems to be why Rawls preferred maximin decision theory under the veil of ignorance rather than expected utility decision theory.
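To make the contrast between the two decision rules concrete, here is a minimal numerical sketch; the population size and all the utility values are invented purely for illustration:

```python
# Hypothetical utilities, invented for illustration only.
# Omelas: one tormented child, everyone else flourishing.
# A mundane city: no victim, but much lower welfare all around.

omelas = [90.0] * 9999 + [-1000.0]
mundane = [40.0] * 10000

def expected(u):  # expected-utility rule: a uniform lottery over citizens
    return sum(u) / len(u)

def maximin(u):   # Rawls's rule: judge a society by its worst-off member
    return min(u)

for name, city in (("Omelas", omelas), ("Mundane city", mundane)):
    print(f"{name}: expected = {expected(city):.2f}, maximin = {maximin(city):.1f}")
# Expected utility favors Omelas (89.89 vs 40.00);
# maximin favors the mundane city (40.0 vs -1000.0).
```

The numbers mean nothing in themselves; the point is only that the two rules disagree on exactly this kind of case.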
In that case, don’t use a Rawlsian veil of ignorance, it’s not the best mechanism for addressing the decision. A veil where you have an equal chance of your own child being the victim to anyone else’s (assuming you’re already too old to be the victim) is more the sort of situation anyone actually deciding whether or not to live in Omelas would face.
Of course, I would pick Omelas even under the Rawlsian veil, since as I’ve said I’m willing to be the one who takes the hit.
Ah, so you are considering the question “If Omelas already exists, should I choose to live there or walk away?” rather than the Rawlsian question “Should we create a society like Omelas in the first place?” The “veil of ignorance” meme nearly always refers to the Rawlsian concept, so I misunderstood you there.
Incidentally, I reread the story and there seems to be no description of how the child was selected in the first place or how he/she is replaced. So it’s not clear that your own child does have the same chance of being the victim as anyone else’s.
Well, as I mentioned in another comment some time ago (not in this thread), I support both not walking away from Omelas, and also creating Omelases, unless an even more utility-efficient method of creating happy and functional societies is forthcoming.
Our society rests on a lot more suffering than Omelas, not just in an incidental way (such as people within our cities who don’t have housing or medical care), but directly, through channels such as economic slavery, where companies rely on workers, mainly abroad, whom they keep locked in debt, who could not leave to seek employment elsewhere even if they wanted to and other opportunities were forthcoming. I can respect a moral code that would lead people to walk out on Omelas as a form of protest that would also lead people to walk out on modern society to live on a self-sufficient seasteading colony, but I reject the notion that Omelas is worse than, or as bad as, our own society, in a morally relevant way.
A single tormented child seems to me like an incredibly good tradeoff for the number of very high quality lives that Omelas supports, much better than we get with real cities.
I cannot fathom why a comment like that would be upvoted by anyone but an unfeeling robot. This is not even the dust-specks-vs-torture case, given that Omelas is not a very large city.
if I were ever in a situation where I’m offered a chance to improve total welfare for everyone at the cost of personal torment, I should take it immediately
Imagine that it is not you, but your child you must sacrifice. Would you shrug and say “sorry, my precious girl, you must suffer until you die so that your mommy/daddy can live a happy life”? I know what I would do.
Imagine that it is not you, but your child you must sacrifice. Would you shrug and say “sorry, my precious girl, you must suffer until you die so that your mommy/daddy can live a happy life”?
I hope I would have the strength to say “sorry, my precious girl, you must suffer until you die so that everyone in the city can live a happy life.” Doing it just for myself and my own social circle wouldn’t be a good tradeoff, but those aren’t the terms of the scenario.
Considering how many of our basic commodities rely on sweatshop or otherwise extremely miserable labor, we’re already living off the backs of quite a lot of tormented children.
The Babyeaters’ babies outnumber the adults; their situation is analogous, not to the city of Omelas, but to a utopian city built on top of another, even larger, dystopian city, on which it relies for its existence.
I would rather live in a society where people loved and cherished their children, but also valued their society, and were willing to shut up and multiply and take the hit themselves, or to their own loved ones, for the sake of a common good that really is that much greater, and I want to be the sort of person I’d want others in that society to be.
I’ve never had children, but I have been in love, in a reciprocated relationship of the sort where it feels like it’s actually as big a deal as all the love songs have ever made it out to be, and I think that sacrificing someone I loved for the sake of a city like Omelas is something I’d be willing to do in practice, not just in theory (and she never would have expected me to do differently, nor would I of her.) It’s definitely not the case that really loving someone, with true depth of feeling, precludes acknowledgment that there are some things worth sacrificing even that bond for.
I’m guessing that neither have most of those who upvoted you and downvoted me. I literally cannot imagine a worse betrayal than the scenario we’ve been discussing. I can imagine one kind-of-happy society where something like this would be OK, though.
I cannot fathom why a comment like that would be upvoted by anyone but an unfeeling robot.
Sounds like you need to update your model of people who don’t have children. Also, how aggressively do you campaign against things like sweatshop labor in third-world countries, which as Desrtopa correctly points out are a substantially worse real-world analogue? Do children only matter if they’re your children?
The real problem with Omelas: it totally ignores the fact that there are children suffering literally as we speak in every city on the planet. Omelas somehow managed to get it down to one child. How many other children would you sacrifice for your own?
The real problem with Omelas: it totally ignores the fact that there are children suffering literally as we speak in every city on the planet.
Unlike in the fictional Omelas, there is no direct dependence or direct sacrifice. Certainly it is possible to at least temporarily alleviate the suffering of others in this non-hypothetical world by sacrificing some of your fortune, but that’s the difference between an active and a passive approach; there is a large gap there.
Related. Nornagest put their finger on this being a conflict between the consequentially compelling (optimizing for general welfare) and the psychologically compelling (not being confronted with knowledge of an individual child suffering torture because of you). I think Nornagest’s also right that a fully specified Omelas scenario would almost certainly feel less compelling, which is one reason I’m not much impressed by Le Guin’s story.
Imagine that it is not you, but your child you must sacrifice.
The situation is not analogous, since sacrificing one’s child would presumably make most parents miserable for the rest of their days. In Omelas, however, the sacrifice makes people happy, instead.
I don’t disagree with maximally efficient altruism, just with the idea that it’s sensible to judge entertainment only as an instrumental value in service of productivity.
As I said in previous comments, I am genuinely not sure whether entertainment is a good terminal goal to have.
By analogy, I absolutely require sleep in order to be productive at all in any capacity; but if I could swallow a magic pill that removed my need for sleep (with no other side-effects), I’d do so in a heartbeat. Sleep is an instrumental goal for me, not a terminal one. But I don’t know if entertainment is like that or not.
Thus, I’m really interested in hearing more about your thoughts on the topic.
I’m not sure that I would regard entertainment as a terminal goal, but I’m very sure I wouldn’t regard productivity as one. As an instrumental goal, it’s an intermediary between a lot of things that I care about, but optimizing for productivity seems like about as worthy a goal to me as paperclipping.
Right, agreed, but “productivity” is just a rough estimate of how quickly you’re moving towards your actual goals. If entertainment is not one of them, then either it enhances your productivity in some way, or it reduces it, or it has no effect (which is unlikely, IMO).
Productivity and fun aren’t orthogonal; for example, it is entirely possible that if your goal is “experience as much pleasure as possible”, then some amount of entertainment would directly contribute to the goal, and would thus be productive. That said, though, I can’t claim that such a goal would be a good goal to have in the first place.
How so? Imagine that you have two identical paperclip maximizers; for simplicity’s sake, let’s assume that they are not capable of radical self-modification (though the results would be similar if they were). Each agent is capable of converting raw titanium to paperclips at the same rate. Agent A spends 100% of its time on making paperclips. Agent B spends 80% of its time on paperclips, and 20% of its time on watching TV. If we gave A and B two identical blocks of titanium, which agent would finish converting all of it to paperclips first?
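The arithmetic behind the race is trivial; the sketch below uses arbitrary placeholder numbers, and of course it merely restates the assumption that TV time contributes nothing to output:

```python
# Two identical clippers; B spends 20% of its time watching TV.
# All numbers are arbitrary placeholders.

TITANIUM = 1000.0  # size of each block, in paperclip-equivalents
RATE = 1.0         # paperclips per hour while actually working

def hours_to_finish(work_fraction: float) -> float:
    """Wall-clock hours to convert the whole block, given time spent working."""
    return TITANIUM / (RATE * work_fraction)

print(hours_to_finish(1.0))  # Agent A: 1000.0 hours
print(hours_to_finish(0.8))  # Agent B: 1250.0 hours; A always finishes first
```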
That is what the saying “he who would be Pope must think of nothing else” looks like in practice.
FeepingCreature addressed this better than I could in this comment. I understand that you find the idea of making paperclips (or political movements, or software, or whatever) all day every day with no breaks abhorrent, and so do I. But then, some people find polyamory abhorrent as well, and then they “polyhack” themselves and grow to enjoy it. Is entertainment your terminal value, or a mental bias? And if it is a terminal value, is it the best terminal value that you could possibly have?
WARNING: This comment contains explicit discussion of an information hazard.
Imagine that you have two identical paperclip maximizers
I decline to do so. What imaginary creatures would choose whose choice has been written into their definition is of no significance. (This is also a reply to the comment of FeepingCreature you referenced.) I’m more interested in the practical question of how actual human beings, which this discussion began with, can avoid the pitfall of being taken over by a utility monster they’ve created in their own heads.
This is a basilisk problem. Unlike Roko’s, which depends on exotic decision theory, this one involves nothing more than plain utilitarianism. Unlike the standard Utility Monster scenario, this one involves no imaginary entities or hypothetical situations. You just have to look at the actual world around you through the eyes of utilitarianism. It’s a very short road from the innocent-sounding “the greatest good for the greatest number” to this: There are seven billion people on this planet. How can the good you could do them possibly be outweighed by any amount of your own happiness? Just by sitting there reading LessWrong you’re killing babies! Having a beer? You’re drinking dead babies. Own a car? You’re driving on a carpet of dead babies! Murderer! Murderer! Add a dash of transhumanism and you can up the stakes to an obligation to bring about billions of billions of future humans throughout the universe living lives billions of times better than ours.
But even Peter Singer doesn’t go that far, continuing to be an academic professor and paying his utilitarian obligations by preaching utilitarianism and donating twenty percent of his salary to charity.
This is such an obvious failure mode for utilitarianism, a philosophy at least two centuries old, that surely philosophers must have addressed it. But I don’t know what their responses are.
Christianity has the same problem, and handles it in practice by testing the vocation of those who come to it seeking to devote their whole life to the service of God, to determine whether they are truly called by God. For it is written that many are called, yet few are chosen. In non-supernatural terms, that means determining whether the applicant is psychologically fitted for the life they feel called to, and if not, deflecting their mania into some more productive route.
And then you have people like H0, who notices H2 is crazy, decides that that means that they shouldn’t even try to be altruistic, and accuses H1 of hypocrisy because she’s not like H2. (Exhibit A)
That is my expectation also. However, persuading H2 of that (“but dead babies!”) is likely to be a work of counselling or spiritual guidance rather than reason.
Well… so, if we both expect H1 to do more good than H2, it seems that if we were to look at them through the eyes of utilitarianism, we would endorse being H1 over being H2. But you seem to be saying that H2, looking through the eyes of utilitarianism, endorses being H2 over being H1. I am therefore deeply confused by your model of what’s going on here.
Oh yes, H1 is more effective, healthier, saner, more rational, etc. than H2. H2 is experiencing existential panic and cannot relinquish his death-grip on the idea.
Do you think being a utilitarian makes someone less effective, healthy, sane, rational etc.? Or do you think H2 has these various traits independent of them being a utilitarian?
WARNING: More discussion of a basilisk, with a link to a real-world example.
It’s a possible failure mode of utilitarianism. Some people succumb to it (see George Price for an actual example of a similar failure) and some don’t.
I don’t understand your confusion and this pair of questions just seems misconceived.
(shrug) OK. I certainly agree with you that some utilitarians suffer from the existential panic and inability to relinquish their death-grips on unhealthy ideas, while others don’t. I’m tapping out here.
One could reason that one is better placed to do good effectively when focussing on oneself, one’s family, one’s community, etc., simply because one understands them better.
What imaginary creatures would choose whose choice has been written into their definition is of no significance.
Are you saying that human choices are not “written into their definition” in some measure?
Also, keep in mind that a goal like “make more paperclips” does leave a lot of room for other choices. The agent could spend its time studying metallurgy, or buying existing paperclip factories, or experimenting with alloys, or attempting to invent nanotechnology, or some combination of these and many more activities. It’s not constrained to just a single path.
Just by sitting there reading LessWrong you’re killing babies! … Add a dash of transhumanism and you can up the stakes to an obligation to bring about billions of billions of future humans throughout the universe living lives billions of times better than ours.
On the one hand, I do agree with you, and I can’t wait to see your proposed solution. On the other hand, I’m not sure what this has to do with the topic. I wasn’t talking about billions of future humans or anything of the sort, merely about a single (semi-hypothetical) human and his goals; whether entertainment is a terminal or instrumental goal; and whether it is a good goal to have.
Let me put it in a different way: if you could take a magic pill which would remove (or, at the very least, greatly reduce) your desire for passive entertainment, would you do it? People with extremely low preferences for passive entertainment do exist, after all, so this scenario isn’t entirely fantastic (other than for the magic pill part, of course).
Are you saying that human choices are not “written into their definition” in some measure?

What is written into humans by evolution is hardly relevant. The point is that you can’t prove anything about humans by drawing a comparison with imaginary creatures that have had something potentially quite different written into them by their creator.
Are you saying that human choices are not “written into their definition” in some measure?
I have no idea what that even means.
On the one hand, I do agree with you, and I can’t wait to see your proposed solution.
My only solution is “don’t do that then”. It’s a broken thought process, and my interest in it ends with that recognition. Am I a soul doctor? I am not. I seem to be naturally resistant to that failure, but I don’t know how to fix anyone who isn’t.
Let me put it in a different way: if you could take a magic pill which would remove (or, at the very least, greatly reduce) your desire for passive entertainment, would you do it?
What desire for passive entertainment? For that matter, what is this “passive entertainment”? I am not getting a clear idea of what we are talking about. At any rate, I can’t imagine “entertainment” in the ordinary meaning of that word being a terminal goal.
FWIW, I do not watch television, and have never attended spectator sports.
People with extremely low preferences for passive entertainment do exist, after all
Are you saying that human choices are not “written into their definition” in some measure?
I have no idea what that even means.
To rephrase: do you believe that all choices made by humans are completely under the humans’ conscious control? If not, what proportion of our choices is under our control, and what proportion is written into our genes and is thus difficult, if not impossible, to change (given our present level of technology)?
You objected to my using Clippy as an analogy to human behaviour, on the grounds that Clippy’s choices are “written into its definition”. My point is that (a) Clippy is free to make whatever choices it wants, as long as it believes (correctly or erroneously) such choices would lead to more paperclips, (b) we humans operate in a similar way, only we care about things other than paperclips, and therefore (c) Clippy is a valid analogy.
My only solution is “don’t do that then”.
Don’t do what? Do you have a moral theory which works better than utilitarianism/consequentialism?
What desire for passive entertainment? For that matter, what is this “passive entertainment”?
You don’t watch TV or attend sports, but do you read any fiction books? Listen to music? Look at paintings or sculptures (on your own initiative, that is, and not as part of a job)? Enjoy listening to some small subclass of jokes? Watch any movies? Play video games? Stare at a fire at night? I’m just trying to pinpoint your general level of interest in entertainment.
At any rate, I can’t imagine “entertainment” in the ordinary meaning of that word being a terminal goal.
Just because you personally can’t imagine something, doesn’t mean it’s not true. For example, art and music—both of which are forms of passive entertainment—have been a part of human history ever since the caveman days, and continue to flourish today. There may be something hardcoded in our genes (maybe not yours personally, but on average) that makes us enjoy art and music. On the other hand, there are lots of things hardcoded in our genes that we’d be better off without...
To rephrase: do you believe that all choices made by humans are completely under the humans’ conscious control? If not, what proportion of our choices is under our control, and what proportion is written into our genes and is thus difficult, if not impossible, to change (given our present level of technology)?
The whole language is wrong here.
What does it mean to talk about a choice being “completely under the humans’ conscious control”? Obviously, the causal connections wind through and through all manner of things that are outside consciousness as well as inside. When could you ever say that a decision is “completely under conscious control”?
Then you talk as if a decision not “completely under conscious control” must be “written into the genes”. Where does that come from?
do you read any fiction books?
Why do you specify fiction? Is fiction “passive entertainment” but non-fiction something else?
There may be something hardcoded in our genes (maybe not yours personally, but on average) that makes us enjoy art and music.
What is this “us” that is separate from and acted upon by our genes? Mentalistic dualism?
My only solution is “don’t do that then”.
Don’t do what? Do you have a moral theory which works better than utilitarianism/consequentialism?
Don’t crash and burn. I have no moral theory and am not impressed by anything on offer from the philosophers.
To sum up, there’s a large and complex set of assumptions behind everything you’re saying here that I don’t think I share, but I can only guess at from glimpsing the shadowy outlines. I doubt further discussion will get anywhere useful.
Are you saying that human choices are not “written into their definition” in some measure?
I think Bugmaster is equating being “written in” in the sense of a stipulation in a thought experiment with being “written in” in the sense of being the outcome of an evolutionary process.
That’s hardly objective. The challenge is to formalize that test.
Btw: the problem you’re having is not due to any decision theory but due to the goal system. You want there to be entertainment and fun and the like. However, the postulated agent had a primary goal that did not include entertainment and fun. This seems alien to us, but for the mindset of such an agent “eschew entertainment and fun” is the correct and sane behavior.
Exactly, though see my comment on a sibling thread.
Out of curiosity though, what is the “Scientology test”? Is that some commonly-accepted term of Less Wrong jargon? Presumably it doesn’t involve poorly calibrated galvanic skin response meters… :-/
For better or for worse, passive entertainment such as movies, books, TV shows, music, etc., is a large part of our popular culture.
Music is only passive entertainment if you just listen to it, not if you sing it, play it, or dance to it.
Strictly speaking this is true, but people usually discuss the things they watch (or read, or listen to, etc.) with their friends or, with the advent of the Internet, even with random strangers. The shared narratives thus facilitate the “emotional intimacy” you speak about. Furthermore, some specific works of passive entertainment, as well as generalized common tropes, make up a huge chunk of the cultural context without which it would be difficult to communicate with anyone in our culture on an emotional level (as opposed to, say, presenting mathematical proofs or engineering schematics to each other).
I agree that people spend lots of time talking about these kinds of things, and that the more shared topics of conversation you have with someone the easier it is to socialize with them, but I disagree that there are few non-technical things one can talk about other than what you get from passive entertainment. I seldom watch TV/films/sports, but I have plenty of non-technical things I can talk about with people—parties we’ve been to, people we know, places we’ve visited, our tastes in food and drinks, unusual stuff that happened to us, what we’ve been doing lately, our plans for the near future, ranting about politics, conspiracy theories, the freakin’ weather, whatever—and I’d consider talking about some of these topics to build more ‘emotional intimacy’ than talking about some Hollywood movie or the Champions League or similar. (Also, I take exception to the apparent implication of the parenthetical at the end of the paragraph—it is possible to entertain people by talking about STEM topics, if you’re sufficiently Feynman-esque about that.)
For example, if you take a close look at various posts on this very site, you will find references to the genres of science fiction and fantasy, as well as media such as movies or anime, which the posters simply take for granted (sometimes too much so, IMO; f.ex., not everyone knows what “tsuyoku naritai” means right off the bat). A person who did not share this common social context would find it difficult to communicate with anyone here.
I have read very little of that kind of fiction, and still I haven’t felt excluded by that in the slightest (well, except that one time when the latest HPMOR thread clogged up the top Discussion comments of the week when I hadn’t read HPMOR yet, and the occasional Discussion threads about MLP—but that’s a small minority of the time).
“Living” the way I used it means “living to the fullest” or, a little more specifically “feeling really engaged in life” or “feeling fulfilled”.
I used “living” to refer to a subjective state. There’s nothing objective about it, and IMO, there’s nothing objectively right or wrong about having a subjective state that is (even in your own opinion) not as good as the ideal.
I feel like your real challenge here is more similar to Kawoomba’s concern. Am I right?
Do you find it more enjoyable to passively watch entertainment than to do your own projects? Do you think most people do? If so, might that be because the fun was taken out of learning, or people’s creativity was reduced to the point where doing your own project is too challenging, or people’s self-confidence was made too dependent on others such that they don’t feel comfortable pursuing that fulfilling sense of having done something on their own?
I puzzle at how you classify watching something together as “social contact”. To me, being in the same room is not a social life. Watching the same entertainment is not quality time. The social contact I yearn for involves emotional intimacy—contact with the actual person inside, not just a sense of being in the same room watching the same thing. I don’t understand how that can be called social contact.
I’ve been thinking about this and I think what might be happening is that I make my own narratives.
This, I can believe about Eliezer. There are places where he could have been more incisive but is instead gets wordy to compensate. That’s an interesting point.
Okay, so to clarify, your position is that entertainment is a more efficient way to learn?
I understand that you do not feel fulfilled when watching TV, but other people might. I would agree with your reply on Kawoomba’s sub-thread:
For better or for worse, passive entertainment such as movies, books, TV shows, music, etc., is a large part of our popular culture. You say:
Strictly speaking this is true, but people usually discuss the things they watch (or read, or listen to, etc.), with their friends or, with the advent of the Internet, even with random strangers. The shared narratives thus facilitate the “emotional intimacy” you speak about. Furthermore, some specific works of passive entertainment, as well as generalized common tropes, make up a huge chunk of the cultural context without which it would be difficult to communicate with anyone in our culture on an emotional level (as opposed to, say, presenting mathematical proofs or engineering schematics to each other).
For example, if you take a close look at various posts on this very site, you will find references to the genres of science fiction and fantasy, as well as media such as movies or anime, which the posters simply take for granted (sometimes too much so, IMO; f.ex., not everyone knows what “tsuyoku naritai” means right off the bat). A person who did not share this common social context would find it difficult to communicate with anyone here.
Note, though, that once again I am describing a situation that exists, not prescribing a behavior. In terms of raw productivity per unit of time, I cannot justify any kind of entertainment at all. While it is true that entertainment has been with us since the dawn of civilization, so has cancer; just because something is old, doesn’t mean that it’s good.
No, this phrasing is too strong. I meant what I said before: many people find it easier to internalize new ideas when they are presented as part of a narrative. This doesn not mean that entertainment is a more efficient way to learn all things for all people, or that it is objectively the best technique for learning things, or anything of the sort.
Why try to justify entertainment in terms of productivity per time? Is there any reason this makes more sense than, say, justifying productivity in terms of how much entertainment it allows for?
Presumably, if your goal is to optimize the world, or to affect any part of it besides yourself in a non-trivial way, you should strive to do so as efficiently as possible. This means that spending time on any activities that do not contribute to this goal is irrational. A paperclip maximizer, for example, wouldn’t spend any time on watching soap operas or reading romance novels—unless doing so would lead to more paperclips (which is unlikely).
Of course, one could argue that consumption of passive entertainment does contribute to the average human’s goals, since humans are unable to function properly without some downtime. But I don’t know if I’d go so far as to claim that this is a feature, and not a bug, just like cancer or aging or whatever else evolution had saddled us with.
A decision theory that leads to the conclusion that we should all work like slaves for a future paradise, the slightest lapse incurring a cost equivalent to untold numbers of dead babies, and the enormity of the task meaning that we shall never experience it ourselves, is prima facie a broken decision theory. I’d even call it the sort of toxic mindwaste that RationalWiki loves to mock.
Once you’ve built that optimised world, who gets to slack off and just live in it, and how will they spend their time?
Why exactly? I mean, my intuition also tells me it’s wrong… but my intuition has a few assumptions that disagree with the proposed scenario. Let’s make sure the intuition does not react to a strawman.
For example, when in real life people “work like slaves for a future paradise”, the paradise often does not happen. Typically, the people have a wrong model of the world. (The wrong model is often provided by their leader, and their work in fact results in building their leader’s personal paradise, nothing more.) And even if their model is right, their actions are more optimized for signalling effort than for real efficiency. (Working very hard signals more virtue than thinking and coming up with a smart plan to make a lot of money and pay someone else to do more work than we could.) Even with smart and honest people, there will typically be something they ignored or could not influence, such as someone powerful coming and taking the results of their work, or a conflict starting and destroying their seeds of the paradise. Or simply their internal conflicts, or lack of willpower to finish what they started.
The lesson we should take from this is that even if we have a plan to work like a slaves for a future paradise, there is very high prior probability that we missed something important. Which means that in fact we do not work for a future paradise, we only mistakenly think so. I agree that the prior probability is so high that even the most convincing reasoning and plans are unlikely to overweight it.
However, for the sake of experiment, imagine that Omega comes and tells you that if you will work like a slave for the next 20 or 50 years, the future paradise will happen with probability almost 1. You don’t have to worry about mistakes in your plans, because either Omega verified their correctness, or is going to provide you corrections when needed and predicts that you will be able to follow those corrections successfully. Omega also predicts that it you commit to the task, you will have enough willpower, health, and other necessary resources to complete it successfully. In this scenario, is committing for the slave work a bad decision?
In other words, is your objection “in situation X the decision D is wrong”, or is it “the situation X is so unlikely that any decision D based on assumption of X will in real life be wrong”?
When Omega enters a discussion, my interest in it leaves.
To that extent that someone is unable to use established tools of thought to focus attention on the important aspects of the problem their contribution to a conversation is likely to be negative. This is particularly the case when it comes to decision theory where it correlates strongly with pointless fighting of the counterfactual and muddled thinking.
Omega has its uses and its misuses. I observe the latter on LW more often than the former. The present example is one such.
And in future, if you wish to address a comment to me, I would appreciate being addressed directly, rather than with this pseudo-impersonal pomposity.
I intended the general claim as stated. I don’t know you well enough for it to be personal. I will continue to support the use of Omega (and simplified decision theory problems in general) as a useful way to think.
For practical purposes pronouncements like this are best interpreted as indications that the speaker has nothing of value to say on the subject, not as indications that the speaker is too sophisticated for such childish considerations.
For practical purposes pronouncements like this are best interpreted as saying exactly what they say. You are, of course, free to make up whatever self-serving story you like around it.
This is evidently not a behavior you practice.
It is counterintuitive that you should slave for people you don’t know, perhaps because you can’t be sure you are serving their needs effectively. Even if that objection is removed by bringing in an omniscient oracle,there still seems to be a problem because the prospect of one generation slaving to create paradise for another isn’t fair. the simple version of utilitiarianism being addressed here only sums individual utilities, and us blind to things that can only be defined at the group level like justice and equaliy.
For the sake of experiment, imagine that air has zero viscosity. In this scenario, would a feather and a cannon ball fall in the same time?
I believe the answer is “yes”, but I had to think about that for a moment. I’m not sure how that’s relevant to the current discussion, though.
I think your real point might be closer to something like, “thought experiments are useless at best, and should thus be avoided”, but I don’t want to put words into anyone’s mouth.
My point was something like, “of course if you assume away all the things that cause slave labour to be bad then slave labour is no longer bad, but that observation doesn’t yield much of an insight about the real world”.
That makes sense, but I don’t think it’s what Viliam_Bur was talking about. His point, as far as I could tell, was that the problem with slave labor is the coercion, not the labor itself.
“Decision theory” doesn’t mean the same thing as “value system” and we shouldn’t conflate them.
Yep. A morality that leads to the conclusion that we should all work like slaves for a future paradise, the slightest lapse incurring a cost equivalent to untold numbers of dead babies, and the enormity of the task meaning that we shall never experience it ourselves, is prima facie a broken morality.
Why ? I mean, I do agree with you personally, but I don’t see why such a decision theory is objectively bad. You ask,
But the answer depends entirely on your goals. These can be as relatively modest as, “the world will be just like it is today, but everyone wears a party hat”. Or it could be as ambitious as, “the world contains as many paperclips as physically possible”. In the latter case, if you asked the paperclip maximizer “who gets to slack off ?”, it wouldn’t find the question relevant in the least. It doesn’t matter who gets to do what, all that matters are the paperclips.
You might argue that a paperclip-filled world would be a terrible place, and I agree, but that’s just because you and I don’t value paperclips as much as Clippy does. Clippy thinks your ideal world is terrible too, because it contains a bunch of useless things like “happy people in party hats”, and not nearly enough paperclips.
However, imagine if we ran two copies of Clippy in a grand paperclipping race: one that consumed entertainment by preference, and one that did not. The non-entertainment version would win every time. Similarly, if you want to make the world a better place (whatever that means for you), every minute you spend on doing other things is a minute wasted (unless they are explicitly included in your goals). This includes watching TV, eating, sleeping, and being dead. Some (if not all) of such activities are unavoidable, but as I said, I’m not sure whether it’s a bug or a feature.
This is proving the conclusion by assuming it.
The words make a perfectly logical pattern, but I find that the picture they make is absurd. The ontology has gone wrong.
Some businessman wrote a book of advice called “Never Eat Alone”, the title of which means that every meal is an opportunity to have a meal with someone to network with. That is what the saying “he who would be Pope must think of nothing else” looks like in practice. Not wearing oneself out like Superman in the SMBC cartoon, driven into self-imposed slavery by memetic immune disorder.
BTW, for what it’s worth, I do not watch TV. And now I am imagining a chapter of that book entitled “Never Sleep Alone”.
Actually, I think that the world described in that SMBC cartoon is far preferable to the standard DC comics world with Superman. I do not think that doing what Superman did there is a memetic immune disorder, but rather a (successful) attempt to make the world a better place.
You would, then, not walk away from Omelas?
I definitely wouldn’t. A single tormented child seems to me like an incredibly good tradeoff for the number of very high quality lives that Omelas supports, much better than we get with real cities.
It sucks to actually be the person whose well-being is being sacrificed for everyone else, but if you’re deciding from behind a veil of ignorance which society to be a part of, your expected well being is going to be higher in Omelas.
Back when I was eleven or so, I contemplated this and made a precommitment: if I were ever in a situation where I’m offered a chance to improve total welfare for everyone at the cost of personal torment, I should take it immediately, without giving myself any time to contemplate what I’d be getting myself into. So in that sense, I’ve effectively volunteered to be the tormented child.
I don’t disagree with maximally efficient altruism, just with the idea that it’s sensible to judge entertainment only as an instrumental value in service of productivity.
You’re assuming here that the “veil of ignorance” gives you exactly equal chance of being each citizen of Omelas, so that a decision under the veil reduces to average utilitarianism.
However, in Rawls’s formulation, you’re not supposed to assume that; the veil means you’re also entirely ignorant about the mechanism used to incarnate you as one of the citizens, and so must consider all probability distributions over the citizens when choosing your society. In particular, you must assign some weight to a distribution picked by a devil (or mischievous Omega) who will find the person with the very lowest utility in your choice of society and incarnate you as that person. So you wouldn’t choose Omelas.
This seems to be why Rawls preferred maximin decision theory under the veil of ignorance rather than expected utility decision theory.
In that case, don’t use a Rawlsian veil of ignorance; it’s not the best mechanism for addressing the decision. A veil where your own child has the same chance of being the victim as anyone else’s (assuming you’re already too old to be the victim) is more the sort of situation anyone actually deciding whether or not to live in Omelas would face.
Of course, I would pick Omelas even under the Rawlsian veil, since as I’ve said I’m willing to be the one who takes the hit.
Ah, so you are considering the question “If Omelas already exists, should I choose to live there or walk away?” rather than the Rawlsian question “Should we create a society like Omelas in the first place?” The “veil of ignorance” meme nearly always refers to the Rawlsian concept, so I misunderstood you there.
Incidentally, I reread the story and there seems to be no description of how the child was selected in the first place or how he/she is replaced. So it’s not clear that your own child does have the same chance of being the victim as anyone else’s.
Well, as I mentioned in another comment some time ago (not in this thread), I support both not walking away from Omelas and creating Omelases, unless an even more utility-efficient method of creating happy and functional societies is forthcoming.
Our society rests on a lot more suffering than Omelas, not just incidentally (people within our cities who lack housing or medical care) but directly, through channels such as economic slavery, where companies rely on workers, mainly abroad, whom they keep locked in debt and who could not leave to seek employment elsewhere even if they wanted to and other opportunities were forthcoming. I can respect a moral code that would lead people to walk out on Omelas as a form of protest, provided it would also lead them to walk out on modern society to live on a self-sufficient seasteading colony; but I reject the notion that Omelas is worse than, or as bad as, our own society in a morally relevant way.
I cannot fathom why a comment like that would be upvoted by anyone but an unfeeling robot. This is not even the dust-specks-vs-torture case, given that Omelas is not a very large city.
Imagine that it is not you, but your child you must sacrifice. Would you shrug and say “sorry, my precious girl, you must suffer until you die so that your mommy/daddy can live a happy life”? I know what I would do.
I hope I would have the strength to say “sorry, my precious girl, you must suffer until you die so that everyone in the city can live a happy life.” Doing it just for myself and my own social circle wouldn’t be a good tradeoff, but those aren’t the terms of the scenario.
Considering how many of our basic commodities rely on sweatshop or otherwise extremely miserable labor, we’re already living off the backs of quite a lot of tormented children.
And there I thought that Babyeaters lived only in Eliezer’s sci-fi story...
The Babyeaters’ babies outnumber the adults; their situation is analogous, not to the city of Omelas, but to a utopian city built on top of another, even larger, dystopian city, on which it relies for its existence.
I would rather live in a society where people loved and cherished their children, but also valued their society and were willing to shut up and multiply, taking the hit themselves, or letting it fall on their own loved ones, for the sake of a common good that really is that much greater; and I want to be the sort of person I’d want others in that society to be.
I’ve never had children, but I have been in love, in a reciprocated relationship of the sort where it feels like it’s actually as big a deal as all the love songs have ever made it out to be, and I think that sacrificing someone I loved for the sake of a city like Omelas is something I’d be willing to do in practice, not just in theory (and she would never have expected me to do differently, nor would I have of her). It’s definitely not the case that really loving someone, with true depth of feeling, precludes acknowledging that there are some things worth sacrificing even that bond for.
I’m guessing that neither have most of those who upvoted you and downvoted me. I literally cannot imagine a worse betrayal than the scenario we’ve been discussing. I can imagine one kind-of-happy society where something like this would be OK, though.
Sounds like you need to update your model of people who don’t have children. Also, how aggressively do you campaign against things like sweatshop labor in third-world countries, which as Desrtopa correctly points out are a substantially worse real-world analogue? Do children only matter if they’re your children?
The real problem with Omelas: it totally ignores the fact that there are children suffering, literally as we speak, in every city on the planet. Omelas somehow managed to get it down to one child. How many other children would you sacrifice for your own?
Unlike in the fictional Omelas, there is no direct dependence or direct sacrifice. It is certainly possible to at least temporarily alleviate the suffering of others in this non-hypothetical world by sacrificing some of your fortune, but that is the difference between an active and a passive approach; there is a large gap there.
Related. Nornagest put their finger on this being a conflict between the consequentially compelling (optimizing for general welfare) and the psychologically compelling (not being confronted with knowledge of an individual child suffering torture because of you). I think Nornagest’s also right that a fully specified Omelas scenario would almost certainly feel less compelling, which is one reason I’m not much impressed by Le Guin’s story.
The situation is not analogous, since sacrificing one’s child would presumably make most parents miserable for the rest of their days. In Omelas, however, the sacrifice makes people happy, instead.
And I thought that the Babyeaters only existed in Eliezer’s fiction...
As I said in previous comments, I am genuinely not sure whether entertainment is a good terminal goal to have.
By analogy, I absolutely require sleep in order to be productive at all in any capacity; but if I could swallow a magic pill that removed my need for sleep (with no other side-effects), I’d do so in a heartbeat. Sleep is an instrumental goal for me, not a terminal one. But I don’t know if entertainment is like that or not.
Thus, I’m really interested in hearing more about your thoughts on the topic.
I’m not sure that I would regard entertainment as a terminal goal, but I’m very sure I wouldn’t regard productivity as one. As an instrumental goal, it’s an intermediary between a lot of things that I care about, but optimizing for productivity seems like about as worthy a goal to me as paperclipping.
Right, agreed, but “productivity” is just a rough estimate of how quickly you’re moving towards your actual goals. If entertainment is not one of them, then either it enhances your productivity in some way, or it reduces it, or it has no effect (which is unlikely, IMO).
Productivity and fun aren’t orthogonal; for example, it is entirely possible that if your goal is “experience as much pleasure as possible”, then some amount of entertainment would directly contribute to the goal, and would thus be productive. That said, though, I can’t claim that such a goal would be a good goal to have in the first place.
How so? Imagine that you have two identical paperclip maximizers; for simplicity’s sake, let’s assume that they are not capable of radical self-modification (though the results would be similar if they were). Each agent is capable of converting raw titanium to paperclips at the same rate. Agent A spends 100% of its time on making paperclips. Agent B spends 80% of its time on paperclips, and 20% of its time on watching TV. If we gave A and B two identical blocks of titanium, which agent would finish converting all of it to paperclips first?
FeepingCreature addressed this better than I could in this comment. I understand that you find the idea of making paperclips (or political movements, or software, or whatever) all day every day with no breaks abhorrent, and so do I. But then, some people find polyamory abhorrent as well, and then they “polyhack” themselves and grow to enjoy it. Is entertainment your terminal value, or a mental bias? And if it is a terminal value, is it the best terminal value that you could possibly have?
WARNING: This comment contains explicit discussion of an information hazard.
I decline to do so. What an imaginary creature would choose, when that choice has been written into its definition, is of no significance. (This is also a reply to the comment of FeepingCreature you referenced.) I’m more interested in the practical question of how actual human beings, with whom this discussion began, can avoid the pitfall of being taken over by a utility monster they’ve created in their own heads.
This is a basilisk problem. Unlike Roko’s, which depends on exotic decision theory, this one involves nothing more than plain utilitarianism. Unlike the standard Utility Monster scenario, this one involves no imaginary entities or hypothetical situations. You just have to look at the actual world around you through the eyes of utilitarianism. It’s a very short road from the innocent-sounding “the greatest good for the greatest number” to this: there are seven billion people on this planet. How can the good you could do them possibly be outweighed by any amount of your own happiness? Just by sitting there reading LessWrong you’re killing babies! Having a beer? You’re drinking dead babies. Own a car? You’re driving on a carpet of dead babies! Murderer! Murderer! Add a dash of transhumanism and you can up the stakes to an obligation to bring about billions of billions of future humans throughout the universe living lives billions of times better than ours.
But even Peter Singer doesn’t go that far; he continues to be an academic professor and discharges his utilitarian obligations by preaching utilitarianism and donating twenty percent of his salary to charity.
This is such an obvious failure mode for utilitarianism, a philosophy at least two centuries old, that surely philosophers must have addressed it. But I don’t know what their responses are.
Christianity has the same problem, and handles it in practice by testing the vocation of those who come to it seeking to devote their whole life to the service of God, to determine whether they are truly called by God. For it is written that many are called, yet few are chosen. In non-supernatural terms, that means determining whether the applicant is psychologically fitted for the life they feel called to, and if not, deflecting their mania into some more productive route.
Consider two humans, H1 and H2, both utilitarians.
H1 looks at the world the way you describe Peter Singer here.
H2 looks at the world “through the eyes of utilitarianism” as you describe it here.
My expectation is that H1 will do more good in their lifetime than H2.
What’s your expectation?
And then you have people like H0, who notice that H2 is crazy, decide that this means they shouldn’t even try to be altruistic, and accuse H1 of hypocrisy because she’s not like H2. (Exhibit A)
That is my expectation also. However, persuading H2 of that (“but dead babies!”) is likely to be a work of counselling or spiritual guidance rather than reason.
Well… so, if we both expect H1 to do more good than H2, it seems that if we were to look at them through the eyes of utilitarianism, we would endorse being H1 over being H2.
But you seem to be saying that H2, looking through the eyes of utilitarianism, endorses being H2 over being H1.
I am therefore deeply confused by your model of what’s going on here.
Oh yes, H1 is more effective, healthier, saner, more rational, etc. than H2. H2 is experiencing existential panic and cannot relinquish his death-grip on the idea.
You confuse me further with every post.
Do you think being a utilitarian makes someone less effective, healthy, sane, rational, etc.?
Or do you think H2 has these various traits independently of being a utilitarian?
There’s a lot of different kinds of utilitarian.
WARNING: More discussion of a basilisk, with a link to a real-world example.
It’s a possible failure mode of utilitarianism. Some people succumb to it (see George Price for an actual example of a similar failure) and some don’t.
I don’t understand your confusion and this pair of questions just seems misconceived.
(shrug) OK.
I certainly agree with you that some utilitarians suffer from the existential panic and inability to relinquish their death-grips on unhealthy ideas, while others don’t.
I’m tapping out here.
One could reason that one is better placed to do good effectively when focusing on oneself, one’s family, one’s community, etc., simply because one understands them better.
(Warning: replying to discussion of a potential information hazard.)
Gung’f na rknttrengvba (tvira gung ng gung cbvag lbh unqa’g zragvbarq genafuhznavfz lrg) -- nf bs abj, vg’f rfgvzngrq gb gnxr zber guna gjb gubhfnaq qbyynef gb fnir bar puvyq’f yvsr jvgu TvirJryy’f gbc-engrq punevgl. (Be vf ryrpgevpvgl naq orre zhpu zber rkcrafvir jurer lbh’er sebz?)
Infohazard reference with no warning sign. Edit and reply to this so I can restore.
Done. Sorry this took so long, I’ve been taken mostly offline by a biohazard for the last week.
Are you saying that human choices are not “written into their definition” in some measure?
Also, keep in mind that a goal like “make more paperclips” does leave a lot of room for other choices. The agent could spend its time studying metallurgy, or buying existing paperclip factories, or experimenting with alloys, or attempting to invent nanotechnology, or some combination of these and many more activities. It’s not constrained to just a single path.
On the one hand, I do agree with you, and I can’t wait to see your proposed solution. On the other hand, I’m not sure what this has to do with the topic. I wasn’t talking about billions of future humans or anything of the sort, merely about a single (semi-hypothetical) human and his goals; whether entertainment is a terminal or instrumental goal; and whether it is a good goal to have.
Let me put it in a different way: if you could take a magic pill which would remove (or, at the very least, greatly reduce) your desire for passive entertainment, would you do it? People with extremely low preferences for passive entertainment do exist, after all, so this scenario isn’t entirely fantastic (other than for the magic pill part, of course).
What is written into humans by evolution is hardly relevant. The point is that you can’t prove anything about humans by drawing a comparison with imaginary creatures that have had something potentially quite different written into them by their creator.
I have no idea what that even means.
My only solution is “don’t do that then”. It’s a broken thought process, and my interest in it ends with that recognition. Am I a soul doctor? I am not. I seem to be naturally resistant to that failure, but I don’t know how to fix anyone who isn’t.
What desire for passive entertainment? For that matter, what is this “passive entertainment”? I am not getting a clear idea of what we are talking about. At any rate, I can’t imagine “entertainment” in the ordinary meaning of that word being a terminal goal.
FWIW, I do not watch television, and have never attended spectator sports.
Quite.
To rephrase: do you believe that all choices made by humans are completely under the humans’ conscious control? If not, what proportion of our choices is under our control, and what proportion is written into our genes and is thus difficult, if not impossible, to change (given our present level of technology)?
You objected to my using Clippy as an analogy to human behaviour, on the grounds that Clippy’s choices are “written into its definition”. My point is that (a) Clippy is free to make whatever choices it wants, as long as it believes (correctly or erroneously) that such choices would lead to more paperclips; (b) we humans operate in a similar way, only we care about things other than paperclips; and therefore (c) Clippy is a valid analogy.
Don’t do what? Do you have a moral theory which works better than utilitarianism/consequentialism?
You don’t watch TV or attend sports, but do you read any fiction? Listen to music? Look at paintings or sculptures (on your own initiative, that is, and not as part of a job)? Enjoy listening to some small subclass of jokes? Watch any movies? Play video games? Stare at a fire at night? I’m just trying to pinpoint your general level of interest in entertainment.
Just because you personally can’t imagine something doesn’t mean it’s not true. For example, art and music—both of which are forms of passive entertainment—have been a part of human history ever since the caveman days, and they continue to flourish today. There may be something hardcoded in our genes (maybe not yours personally, but on average) that makes us enjoy art and music. On the other hand, there are lots of things hardcoded in our genes that we’d be better off without...
The whole language is wrong here.
What does it mean to talk about a choice being “completely under the humans’ conscious control”? Obviously, the causal connections wind through and through all manner of things that are outside consciousness as well as inside. When could you ever say that a decision is “completely under conscious control”?
Then you talk as if a decision not “completely under conscious control” must be “written into the genes”. Where does that come from?
Why do you specify fiction? Is fiction “passive entertainment” but non-fiction something else?
What is this “us” that is separate from and acted upon by our genes? Mentalistic dualism?
Don’t crash and burn. I have no moral theory and am not impressed by anything on offer from the philosophers.
To sum up, there’s a large and complex set of assumptions behind everything you’re saying here that I don’t think I share, but I can only guess at from glimpsing the shadowy outlines. I doubt further discussion will get anywhere useful.
I think Bugmaster is equating being “written in” in the sense of a stipulation in a thought experiment with being “written in” in the sense of being the outcome of an evolutionary process.
If he is, he shouldn’t. These are completely different concepts.
That has no relevance to morality. Morality is not winning; it is not efficiently fulfilling an arbitrary utility function.
This decision theory is bad because it fails the “Scientology test.”
That’s hardly objective. The challenge is to formalize that test.
Btw: the problem you’re having is not due to any decision theory but due to the goal system. You want there to be entertainment and fun and the like. However, the postulated agent had a primary goal that did not include entertainment and fun. This seems alien to us, but for the mindset of such an agent “eschew entertainment and fun” is the correct and sane behavior.
Exactly, though see my comment on a sibling thread.
Out of curiosity though, what is the “Scientology test”? Is that some commonly-accepted term from the Less Wrong jargon? Presumably it doesn’t involve poorly calibrated galvanic skin response meters… :-/
Not the commenter, but I think it’s just “it makes you do crazy things, like scientologists”. It’s not a standard LW thing.
Optimize it for what?
That is kind of up to you. That’s the problem with terminal goals...
I agree that people spend lots of time talking about these kinds of things, and that the more shared topics of conversation you have with someone, the easier it is to socialize with them, but I disagree that there are few non-technical things one can talk about other than what you get from passive entertainment. I seldom watch TV/films/sports, but I have plenty of non-technical things I can talk about with people—parties we’ve been to, people we know, places we’ve visited, our tastes in food and drinks, unusual stuff that happened to us, what we’ve been doing lately, our plans for the near future, ranting about politics, conspiracy theories, the freakin’ weather, whatever—and I’d consider talking about some of these topics to build more ‘emotional intimacy’ than talking about some Hollywood movie or the Champions League or similar. (Also, I take exception to the apparent implication of the parenthetical at the end of the paragraph—it is possible to entertain people by talking about STEM topics, if you’re sufficiently Feynman-esque about it.)
I have read very little of that kind of fiction, and still I haven’t felt excluded by that in the slightest (well, except that one time the latest HPMOR thread clogged up the top Discussion comments of the week before I’d read HPMOR, and the occasional Discussion threads about MLP—but that’s a small minority of the time).
This article, courtesy of the recent Seq Rerun, seems serendipitous:
http://lesswrong.com/lw/yf/moral_truth_in_fiction/