Morality is Awesome
(This is a semi-serious introduction to the metaethics sequence. You may find it useful, but don’t take it too seriously.)
Meditate on this: A wizard has turned you into a whale. Is this awesome?
“Maybe? I guess it would be pretty cool to be a whale for a day. But only if I can turn back, and if I stay human inside and so on. Also, that’s not a whale.
“Actually, a whale seems kind of specific, and I’d be surprised if that was the best thing the wizard can do. Can I have something else? Eternal happiness maybe?”
Meditate on this: A wizard has turned you into orgasmium, doomed to spend the rest of eternity experiencing pure happiness. Is this awesome?
...
“Kind of… That’s pretty lame actually. On second thought I’d rather be the whale; at least that way I could explore the ocean for a while.
“Let’s try again. Wizard: maximize awesomeness.”
Meditate on this: A wizard has turned himself into a superintelligent god, and is squeezing as much awesomeness out of the universe as it could possibly support. This may include whales and starships and parties and jupiter brains and friendship, but only if they are awesome enough. Is this awesome?
...
“Well, yes, that is awesome.”
What we just did there is called Applied Ethics. Applied ethics is about what is awesome and what is not. Parties with all your friends inside superintelligent starship-whales are awesome. ~666 children dying of hunger every hour is not.
(There is also normative ethics, which is about how to decide if something is awesome, and metaethics, which is about something or other that I can’t quite figure out. I’ll tell you right now that those terms are not on the exam.)
“Wait a minute!” you cry, “What is this awesomeness stuff? I thought ethics was about what is good and right.”
I’m glad you asked. I think “awesomeness” is what we should be talking about when we talk about morality. Why do I think this?
1. “Awesome” is not a philosophical landmine. If someone encounters the word “right”, all sorts of bad philosophy and connotations send them spinning off into the void. “Awesome”, on the other hand, has no philosophical respectability, and hence no philosophical baggage.
2. “Awesome” is vague enough to capture all your moral intuition by the well-known mechanisms behind fake utility functions, and meaningless enough that this is no problem. If you think “happiness” is the stuff, you might get confused and try to maximize actual happiness. If you think awesomeness is the stuff, it is much harder to screw it up.
3. If you do manage to actually implement “awesomeness” as a maximization criterion, the results will be actually good. That is, “awesome” already refers to the same things “good” is supposed to refer to.
4. “Awesome” does not refer to anything else. You think you can just redefine words, but you can’t, and this causes all sorts of trouble for people who overload “happiness”, “utility”, etc.
5. You already know that you know how to compute “awesomeness”, and it doesn’t feel like it has a mysterious essence that you need to study to discover. Instead it brings to mind concrete things like starship-whale math-parties and not-starving children, which is what we want anyway. You are already able to take joy in the merely awesome.
6. “Awesome” is implicitly consequentialist. “Is this awesome?” engages you to think of the value of a possible world, whereas “Is this right?” engages you to think of virtues and rules. (Those things can be awesome sometimes, though.)
I find that the above is true about me, and is nearly all I need to know about morality. It handily inoculates against the usual confusions, and sets me in the right direction to make my life and the world more awesome. It may work for you too.
I would append the additional facts that, if you wrote it out, the dynamic procedure to compute awesomeness would be hellishly complex, and that right now it is only implicitly encoded in human brains and nowhere else. Also, if the great procedure to compute awesomeness is not preserved, the future will not be awesome. Period.
Also, it’s important to note that what you think of as awesome can be changed by considering things from different angles and being exposed to different arguments. That is, the procedure to compute awesomeness is dynamic and created already in motion.
If we still insist on being confused, or if we’re just curious, or if we need to actually build a wizard to turn the universe into an awesome place (though we can leave that to the experts), we can see the metaethics sequence for the full argument, details, and finer points. I think the best post (and the one to read if you read only one) is “joy in the merely good”.
I wish! Both metaethics and normative ethics are still mysterious and confusing to me (despite having read Eliezer’s sequence). Here’s a sample of problems I’m faced with, none of which seem to be helped by replacing the word “right” with “awesome”: 1 2 3 4 5 6 7 8 9. I’m concerned this post might make a lot of people feel more clarity than they actually possess, and more importantly and unfortunately from my perspective, less inclined to look into the problems that continue to puzzle me.
I’d just like to say that although I don’t have anything to add, these are all excellent questions and I don’t think people are considering questions like these enough. (Didn’t feel like an upvote was sufficient endorsement for everything in that comment!)
Is it more awesome to have a 1% chance of there being 100 identical copies of you running on a simulation, or a certainty of one copy of you running on a simulation? If you can’t answer that, it’s because you are ambivalent about the outcomes.
(Imagines going to the Cambridge Center for the Study of Awesome, located overlooking a gorgeous flowering canyon, inside a giant, dark castle in a remote area which you can only reach by piloting a mechanical T-Rex with rockets strapped to it. Inside, scientists with floor-length black leather lab coats are examining...)
I rest my case.
Are you at all familiar with the webcomic The Adventures of Dr. McNinja? I was strongly reminded of the whole King Radical arc going on while reading your post, and even more so with Eliezer’s comment about the Cambridge Center for the Study of Awesome. I basically just want to know if the parallels I see are real or entirely from my own pattern-matching.
Not familiar at all.
So “awesome” now means making reality closer to fiction? What happened to your old posts about joy in the merely real, dragons vs zebras, and so on?
I don’t think that follows.
Rocket-boosted mechanical T-Rexes are possible; therefore, they are as “merely real” as anything else. The point of making life awesome is seeing the entire world as one vast game of Calvinball.
Think of the rocket-boosted mechanical T-Rex as a metaphor for indulging your inner child; you can replace it with anything you could imagine doing on a lark with infinite resources. The point of living in a Universe of Awesome is that you can wake up and say “dude, you know what would be awesome? A frikin metal T-Rex with rocket boosters!” And then you and your best friend spend 15 seconds air-guitaring before firing up the Maker and chunking out the parts and tools, then putting it together and flying it around. And then one of you turns to the other and says, “okay, that was awesome for like, five minutes. Now what?”
I’m thinking of it more like Minecraft in real life. I want a castle with a secret staircase because it would be awesome. What I did was spend a day of awesomeness building it myself instead of downloading it and only having five minutes of awesomeness.
right, hence the phrases “chunking out the parts and tools” and “putting it together”.
I find woodworking and carpentry fun. However, I buy my lumber at Home Depot, rather than hiking out to the woods and felling trees myself, then painstakingly hewing and sanding them into planks.
Part of making the world more awesome is automating things enough that when you have an insanely awesome idea for a project, your starting point is fun rather than tedious. Since this is different for different people, the best solution is to have a system that can do it all for you, but that lets you do as much for yourself as you want.
I’ve seen a suggestion that the reason cooking is a fairly common hobby these days is that a lot of the dreary parts (plucking chickens, hauling wood and drawing water, keeping an eye on your rice, pureeing, etc.) are handled by machines.
Don’t underestimate the importance of keeping a relatively constant temperature, also. Even simple dishes on an uneven flame require enormous attention to avoid burning.
Actually that last description sounds like it would plateau really fast.
Fair enough, but I still think the “universe as a vast game of Calvinball” description stands in principle. (Or if you want a less colloquial descriptor, check out Finite and Infinite Games.)
I frankly think the Cambridge Center for the study of Awesome would become run of the mill in a few months, and that’s WITH the rest of the world being “ordinary” for comparison purposes.
Just because a literal flying T-Rex gets old faster than they expected doesn’t mean you couldn’t have a great deal of fun in a world like that.
Of course, presumably self-imposed challenges (e.g. videogames that don’t just let you win) would be fairly commonplace.
Empirically, that general type of thing is good for at least a week worth of awesome. http://www.burningman.com/
Morality needs a concept of awfulness as well as awesomeness. In the depths of hell, good things are not an option and therefore not a consideration, but there are still choices to be made.
Gloomiest sentence of 2013 so far. Upvoted.
“Least not-awesome choice” is isomorphic to “most awesome choice”.
Curiously, I like everything about your comment but this, its central point. Indeed, a concept of negative utility is probably useful; but this is not why.
I think that ‘awesome’ loses a lot of value when you are forced to make the statement “Watching lot of people die was the most awesome choice I had, because any intervention would have added victims without saving anyone.”
I propose ‘lame’ and ‘bummer’ as antonyms for ‘awesome’. Instead of trying to figure out the most awesome of a series of bad options, we can discuss the least lame.
Sucks less sucks less.
What’s the adjectival form of suck?
Sucky. As in: “That movie was really sucky.”
ETA: It’s even in the dictionary!
Sucky. (It’s kind of sucky, but oh well.)
Is “suckier” awesomer than “lamer”?
Sounds like an excellent idea.
I’m sigquoting that if you don’t mind.
Not that that means anything anymore, but I’m old school that way.
Couldn’t resist meming this.
Awesome line. Up goes the vote.
Concur. Upvoted.
[META] Why is this so heavily upvoted? Does that indicate actual value to LW, or just a majority of lurking septemberites captivated by cute pixel art?
It was just hacked out in a couple of hours to organize my thoughts for the meetup. It has little justification for anything, very little coherent overarching structure, and it’s not even really serious. It’s only 90% true, with many bugs. Very much a worse-is-better sort of post.
Now it’s promoted with 50-something upvotes. I notice that I would not predict this, and feel the need to update.
What should I (we) learn from this?
Am I underestimating the value of a given post-idea? (i.e. should we all err on the side of writing more?)
Are structure, seriousness, watertightness, and such trumped by fun and clarity? Is it safe to run with this? This could save a lot of work.
Are people just really interested in morality, or re-framing of problems, or well-linked integration posts?
Because you make few assertions of substance, there is a lot of empty space (where people, depending on their mood, may insert either unrealistically charitable or unrealistically uncharitable reconstructions of reasoning) and not a lot of specific content for anyone to disagree with. In contrast, if I make 10 very concrete and substantive suggestions in a post, and most people like 9 of them but hate the 10th, that could make them very reluctant to upvote the post as a whole, lest their vote be taken as a blanket endorsement for every claim.
Because the post is vague and humorous, people leave it feeling vaguely happy and not in a mood to pick it apart. Expressing this vague happiness as an upvote reifies it and makes it more intense. People like ‘liking’ things they like.
The post is actually useful, as a way of popularizing some deeper and more substantive meta-ethical and practical points. Some LessWrongers may be tired of endlessly arguing over which theory is most ideal, and instead hunger for better popularizations and summaries of the extant philosophical progress we’ve already made, so that we can start peddling those views to the masses. They may view your post as an important step in that Voltairean process, even if it doesn’t advance the distinct project of constructing the substance-for-future-popularization in the first place.
Meta-ethics is hard. There are very few easy answers, and there’s a lot of disagreement. Uncertainty and disagreement, and in general lack of closure, create a lot of unpleasant dissonance. Your article helps us pretend that we can ignore those problems, which alleviates the dissonance and makes us feel better. This would help explain why applause-lighting posts in areas like meta-ethics or the hard problem of consciousness see better results than applause-lighting posts in areas where substantive progress is easier.
The post invites people to oversimplify their utility calculations via the simple dichotomy ‘is it awesome, or isn’t it?’. Whether or not your post is useful, informative, insightful, etc., it is quite clearly ‘awesome,’ as the word is ordinarily used. So your post encourages people to simplify their evaluation procedure in a way that favors the post itself.
Given at least moderate quality, upvotes correlate much more tightly with accessibility / scope of audience than quality of writing. Remember, the article score isn’t an average of hundreds of scalar ratings—it’s the sum of thousands of ratings of [-1, 0, +1] -- and the default rating of anyone who doesn’t see, doesn’t care about, or doesn’t understand the thrust of a post is 0. If you get a high score, that says more about how many people bothered to process your post than about how many people thought it was the best post ever.
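(As a toy illustration of the arithmetic described above, with entirely made-up numbers: because non-voters contribute an implicit 0, a post that reaches a large audience and gets a lukewarm reaction can easily outscore a post that a small audience loved.)

```python
# Toy model of the [-1, 0, +1] karma mechanic described above.
# All figures are hypothetical; the point is only that the net score
# tracks audience size at least as much as per-reader enthusiasm.

def karma(upvotes: int, downvotes: int) -> int:
    """Net score: every reader who doesn't vote contributes an implicit 0."""
    return upvotes - downvotes

niche_but_loved = karma(upvotes=20, downvotes=1)    # e.g. 200 readers, 10% upvote
broad_but_mild = karma(upvotes=70, downvotes=10)    # e.g. 2000 readers, 3.5% upvote

print(niche_but_loved)  # 19
print(broad_but_mild)   # 60 -- higher score despite a weaker per-reader reaction
```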
Yes, to counter this effect I tend to upvote the math-heavy decision theory posts and comment chains if I have even the slightest idea what’s going on, and the Vladimirs seem to think it’s not stupid.
Ironically, this is my most-upvoted comment in several months.
As one of the upvoters, here is my thought process, as far as I recall it:
WTF?!! What does it even mean?
Wait, this kind of makes sense intuitively.
Hey, every example I can try actually works. I wonder why.
OK, so the OP suggests awesomeness as an overriding single intuitive terminal value. What does he mean by “intuitive”?
It seems clear from the comments that any attempt to unpack “awesome” eventually fails on some example, while the general concept of perceived awesomeness doesn’t.
He must be onto something.
Oh, and his approach is clearly awesome, so the post is self-consistent.
Gotta upvote!
Drat, I wish I made it to the meetup where he presented it!
Totally. Hence the link to fake utility functions. I could have made this clearer; you’re not really supposed to unpack it, just use it as a rough pointer to your built-in moral intuitions. “oh that’s all there is to it”.
Don’t worry. I basically just went over this post, then went over “joy in the merely good”. We also discussed a bit, but the shield against useless philosophy provided by using “awesome” instead of “good” only lasted so long...
That said, it would have been nice to have you and your ideas there.
I have typically been awful at predicting which parts of HPMOR people would most enjoy. I suggest relaxing and enjoying the hedons.
Karma votes on this site are fickle, superficial, and reward perceived humour and wit much more than they do hard work and local unconventionality; you’re allowed to be unconventional to the world-at-large, even encouraged to, if it’s conventional in LW; the reverse is not encouraged.
Your work was both novel and completely in line with what is popular here, and so it thrived. Try to present a novel perspective arguing against things that are unanimously liked yet culture-specific, such as sex or alcohol or sarcasm or Twitter or market economies as automatic optimizers, and you might not fare as well.
You can pick up on those trends by following the Twitter accounts of notable LWers, watch them pat each other on the back for expressing beliefs that signal their belonging to the tribe, and mimic them for easy karma, which you can stock reserves of for the times when you feel absolutely compelled to take a stand for an unpopular idea.
This problem is endemic to karma systems and makes LW no worse than any other community. It’s just that one would expect them to hold themselves to a higher standard.
Awesome post, BTW. Nice brain-hacking.
Yes, humour tends to be upvoted a lot, but it’s just not true that you can never get good karma by arguing against the LW majority position. For example, the most upvoted top-level post ever expresses scepticism about the Singularity Institute.
I never said “never”; I implied that it’s not the most probable outcome.
You indeed didn’t say “never”, but the implied meaning was closer to it than to the “not the most probable outcome” interpretation.
Also, saying that LW tends to upvote LW-conventional writings seems a little tautological, unless you have got a karma-independent way to assess LW-conventionality. Do you?
Count the number of comments that express the same notion. Or count the number of users that express said thought and contrast it with the number of users that contradict the thought.
Thank you, werd.
This is my failure as a communicator and I apologize for it.
I notice that you’re discussing what “they” do on LW. Not that I can honestly object; I’m often tempted to do so myself. It really helps when trying to draw the line between my own ideas, and all those crazy ideas everyone else here has.
But I think we are both actually fairly typical LWers, which means that it would be more correct to say something like “It’s just that one would expect us to hold ourselves to a higher standard”. This is a very different thought somehow, more than one would expect from a mere pronoun substitution.
“Them” as in “the rest of the community, excepting the exceptions”. I hold myself to those standards just fine, and there may well be others who do.
Relatedly, I often find replacing “one would expect” with “I expect” has similar effects.
Especially when it turns out the latter isn’t true.
It seems to me that the change is that with “us” the speaker is assumed to identify with the group under discussion. Specifically, it seems like they consider(ed) LW superior, and are disappointed that we have failed in this particular; whereas with “they” it seems to be accusing us of hypocrisy.
hear, hear!
My impression of this post (which may not be evident from my comments) went something like this:
1) Hah. That’s a really funny opening.
2) Oh, this is really interesting and potentially useful, AND really funny, which is a really good combination for articles on the internet.
3) How would I apply this idea to my life?
4) think about it a bit, and read some comments, think some more
5) Wait a second, this idea actually isn’t nearly as useful as it seemed at first.
5a) To the extent that it’s true, it’s only the first thesis statement of a lengthy examination of the actual issue
5b) The rest of the sequence this would need to herald to be truly useful is not guaranteed to be nearly as fun
5c) Upon reflection, while “awesome” does capture elements of “good” that would be obscured by “good’s” baggage, “awesome” also fails to capture some of the intended value.
5d) This post is still useful, but not nearly as useful as my initial positive reaction indicates
5e) I am now dramatically more interested in the subject of how interesting this post seemed vs how interesting it actually was, and what this says about the internet and people and ideas, than about the content of the article.
Yep. It’s intended as an introduction to the long and not-very-exciting metaethics sequence.
Yeah, it tends to melt down under examination. (because “awesome” is a fake utility function, as per point 2). The point was not to give a bulletproof morality procedure, but to just reframe the issue in a way that bypasses the usual confusion and cached thoughts.
So I wouldn’t expect it to be useful to people who have their metaethical shit together (which you seem to, judging by the content of your rituals). It was explicitly aimed at people in my meetup who were confused and intimidated by the seeming mysteriousness of morality.
Yes the implications of this are very interesting.
I DUNT KNOW LETS TRY
It’s not necessarily that a highly upvoted post is deemed better on average, each individual still only casts one vote. The trichotomy of “downvote / no vote / upvote” doesn’t provide nuanced feedback, and while you’d think it all equals out with a large number of votes, that’s not so because of a) modifying visibility by means secondary to the content of the post, b) capturing readers’ interest early to get them to vote in the first place and c) various distributions of opinions about your post all projecting onto potentially the same voting score (e.g. strong likes + strong dislikes equalling the score of general indifference), all three of which can occur independently of the post’s real content.
The visibility was increased with the promotion of your post. While you did need initial upvotes to support that promotion, once achieved there’s no stopping the chain reaction: people want to check out that highly rated top post, they expect to see good content, and often automatically steelman / gloss over your weaker points. Then there’s a kind of implied peer pressure similar to Asch’s conformity experiments; you see a highly upvoted post, then monkey see monkey do kicks in, at least skewing your heuristics.
Lastly, people you keep invested until the end of your post are more likely to upvote than downvote, and your pixel art does a good job of capturing attention; the opening scene of a movie is crucial. The lower the entry barrier into a post, the more people will tag along. A lesson well internalized by television. Compare the vote counts of some AIXI-related posts and yours.
You are also called nyan_sandwich, have a good reputation on this site (AFAICT), yet provide us with some guilty pleasures (of an easy-to-parse comfort-food-for-thought post, talk about nomen est omen, nom nom). In short, you covered all your populist bases. They are all belong to you.
I don’t think it was promoted until it had >30, so maybe that helped a bit, but I have another visibility explanation:
I tend to stick around in my posts and obsessively reply to every comment and cherish every upvote, which means it gets a lot of visibility in the “recent comments” section. My posts tend to have lots of comments, and I think it’s largely me trying to get the last word on everything. (until I get swamped and give up)
It is kind of odd that unpromoted posts in main have strictly less visibility than posts in discussion...
This is a good explanation. I get it now I think. Now the question is if we should be doing more of that?
EDIT: also, what does this mean:
Basically, name causes behavior, as far as I can tell. Your nickname is indeed very aptronymical (?) to providing a quick and easy lunch for the hungry mind in a humorous or good-feeling manner.
I thought the question was “Does this post have value?” or “Can you quantify the extent to which these here upvotes correlate with value?” and not “How did I get upvotes?”
Pointing out how the genesis of the upvotes is based on mechanisms only weakly related to the content value seems pertinent to answering the two questions in the quote.
It’s definitely pertinent, but it seems a bit one-sided? As an upvoter, I was trying really hard to confess my love for the whale and quantify it alongside my appreciation for fun and clarity. So I’m concerned that the above reads more like “it was probably all nyans and noms” as opposed to “nyans and noms were a factor.”
The whale, the fun and the clarity (and the wardrobe, too) all belong on the same side of “structure, seriousness, watertightness” versus “fun and clarity” as per the dichotomy in my initial comment’s quote. It would be weird if content hadn’t been a factor, albeit one that’s been swallowed whole by a vile white whale.
I must confess I don’t understand half of what you guys are referring to.
You’re not missing much, it’s just some throwaway references that aren’t central to the point.
“The whale, the fun and the clarity” has the same structure as the movie “The Lion, the Witch and the Wardrobe” and also starts with an animal.
Swallowed whole by the whale was supposed to say that the content factor was secondary to the “whale factor”. The “swallowed” also alludes to the whole Jonah story (who lived in a whale’s stomach); the whole / [wh]ile / whale was just infantile switching out of vowels, since interestingly all have a Hamming distance of just 1 (you only need to swap one letter).
Your name contains a food item, and you provide guilty comfort food for thought with your post, so “nomen est omen” applies, i.e. your name is a sign of your purpose. The “nom nom” I just appended because it keeps with the food theme, and also because interestingly the “nom nom” is a partial anagram of “nomen est omen”.
Yea … not exactly essential to my arguments. Which in a way does support my points! :)
So it had nothing to do with Moby Dick?
No, of course not!
My own guess, based on nothing much other than a hunch: Morality as Awesomeness sounds simple and easy to do. It also sounds fun and light, unlike many of the other Ethical posts on LW. People have responded positively to this change of pace.
For me, high (insight + fun) per (time + effort).
It’s an interesting perspective and it presents previous thinking on the subject in a more accessible manner.
Hence, one upvote. I don’t know that it’s worth sixty-three upvotes (I don’t know that it’s not), but I didn’t upvote it sixty-three times. Also, I see from the comments that it’s encouraged some interesting conversations (and perhaps some reading of the meta-ethics sequence, which I think is actually fairly well written if a little dense).
In other words, congratulations on writing something engaging! It’s harder than it looks.
I upvoted it because I loved what you did. (I did feel it was, er… awesome, but before reading that comment I couldn’t have put it down in words.)
I think:
- a community in which people have a good idea but err on the side of not writing it up will tend toward a community in which people err on the side of not bothering to flesh out their ideas?
- fun and clarity are good starting points for structure, seriousness, and watertightness? Picking out the bugs feels like a useful exercise for me, having just read the bit of the sequence talking about the impact of language.
I thought it was fun and clear and I liked the cute whale, but also it made me think. ^_^
I would tentatively advocate this (especially since there is already a system in place for filtering content into ‘promoted’ material for those who want a slower stream). More writing ⇒ more good writing.
LW is broken. Aspiring rationalists should welcome argument contrary to their biases, but actually downvote it. Aspiring rationalists should not welcome pandering, dumbed-down ideas that don’t really solve problems or challenge them, but do.
Have you considered the possibility that some people actually found this useful?
You seem to be assuming that if someone judges something to be useful, that is the last word on the subject.
I am assuming that if someone judges something to be useful, they are likely to upvote based on that. This is an alternative hypothesis to the one presented in your comment, that “aspiring rationalists” upvoted this because it is a “pandering, dumbed-down idea” that “do[es]n’t really solve problems or challenge them”.
Not really different. If you pander to someone by presenting dumbed-down ideas as profound, they are liable to like them and judge them to be useful. People judge junk food to be worth eating, after all.
Are you arguing that judgments of usefulness have, in this case, (and others?) been distorted by the halo effect? Or have I misunderstood this comment?
Not the halo effect specifically. People are likely to make bad judgements about usefulness because such judgements are not easy to make. It takes some training. Someone who has been fed a dumbed-down diet is not going to be in a position to make a reliable judgement of the usefulness of the dumbed-down diet they have been fed.
Could you taboo “dumbed-down”? Because it appears I have no idea what you’re talking about (or you could be talking gibberish, I suppose.)
Alright, who’s downvoting all comments in this conversation? If you have some objection to this line of discussion, come out and say it; don’t karmassassinate people.
EDIT: Ok, I may have misused “karmassassinate” there. I’m not sure. It’s annoying and unhelpful, whatever you call it.
I reject the idea that there’s something wrong with silent downvotes. (And “karmassassination” typically refers to downvoting a large chunk of a particular user’s posts without reference to their content, not to silent downvoting, nor to downvoting an entire conversational branch based on the branch’s content.)
e.g. “Metaethics isn’t complicated, it’s just awesomeness”. However, “Metaethics is complicated, but you can get an initial toehold on it by considering awesomeness” is OK. That’s just introductory.
I’m aware that you think this is an example. Could you tell me what you mean?
Presenting something that is simplistic (too simple, lossy) as if it were adequate, or even superior to standard versions.
I see. And you claim that overexposure to such material has rendered the average LW member unable to detect oversimplification?
That’s my explanation for the upvoting that puzzled the article’s author.
Thank you for clarifying.
Whether to use “awesome” instead of “virtuous” is the question, not the answer. This is the question asked by Nietzsche in Beyond Good and Evil. If you’ve gotten to the point where you’re set on using “awesome” instead of “good”, you’ve already chosen your answer to most of the difficult questions.
The challenge to awesome theory is the same one it has been for 70 years: Posit a world in which Hitler conquered the world instead of shooting himself in his bunker. Explain how that Hitler was not awesome. Don’t look at his outcomes and conclude they were not awesome because lots of innocent people died. Awesome doesn’t care how many innocent people died. They were not awesome. They were pathetic, which is the opposite of awesome. Awesome means you build a space program to send a rocket to the moon instead of feeding the hungry. Awesome history is the stuff that happened that people will actually watch on the History Channel. Which is Hitler, Napoleon, and the Apollo program.
If you don’t think Hitler was awesome, odds are very good that you are trying to smuggle in virtues and good-old-fashioned good, buried under an extra layer of obfuscation, by saying “I don’t know exactly what awesome is, but someone that evil can’t be awesome.” Hitler was evil, not bad.
That’s exactly right. Including “awesome”. Tornadoes, hurricanes, earthquakes, and floods are awesome. A God who will squish you like a bug if you dare not to worship him is awesome, awe-full, and awful.
Saying that it’s good because it’s vague, because it’s harder to screw up when you don’t know what you’re talking about, is contrary to the spirit of LessWrong.
Awesome already refers to the same things good is supposed to refer to, for those people who have already decided to use “awesome” instead of “good”. The “Is this right?” question that invokes virtues and rules is not a confused notion of what is awesome. It’s a different, incompatible view of what we “ought” to do.
I sometimes get the impression that I am the only person who reads MoR who actually thinks MoR!Hermione is more awesome than MoR!Quirrell. Of course I have access to at least some info others don’t, but still...
Canon!Luna is more awesome than MoR!Hermione too.
However, a universe with MoR!Hermione in it is likely to be far more awesome than a universe with canon!Luna substituted in. MoR!Hermione is a heck of a lot more useful to have around for most purposes, including the protection of awesome things such as canon!Luna.
MoR!Quirrel certainly invokes “Fictional Awesomeness”. That thing that makes many (including myself) think “Well he’s just damn cool and I’m glad he exists in that fictional universe (which can’t have any direct effect on me)”. Like Darth Vader is way more awesome than Anakin Skywalker even though being a whiny brat is somewhat less dangerous than being a powerful, gratuitously evil Sith Lord. I personally distinguish this from the ‘actual awesomeness’ of the kind mentioned here. I’m not sure to what extent others consider the difference.
Let’s say they’re different kinds of awesome to me. Overall, I think Quirrell is more awesome… until I remember Hermione is twelve.
I didn’t, and still don’t… but now I’m a little bit disturbed that I don’t, and want to look a lot more closely at Hermione for ways she’s awesome.
Quirrell scans, to me, as more awesome along the “probably knows far more Secret Eldritch Lore than you” and “stereotype of a winner” axes, until I remember that Hermione is, canonically, also both of those things. (Eldritch Lore is something one can know, so she knows it. And she’s more academically successful than anyone I’ve ever known in real life.)
So when I look more closely, the thing my brain is valuing is a script it follows where Hermione is both obviously unskillful about standard human things (feminism, kissing boys, Science Monogamy) and obviously cares about morality, to a degree that my brain thinks counts as weakness. When I pay attention, Quirrell is unskillful about tons of things as well, but he doesn’t visibly acknowledge that he is/has been unskillful. He also may or may not care about ethics to a degree, but his Questionably Moral Snazzy Bad Guy archetype doesn’t let him show this.
It does come around to Quirrell being more my stereotype of a winner, in a sense. Quirrell is more high-status than Hermione—when he does things that are cruel, wrong or stupid he hides it or recontextualizes it into something snazzy—but Hermione is more honorable than Quirrell. She confronts her mistakes and failings publicly, messily and head-on and grows as a person because of that. I think that’s really awesome.
Yeah, that sounds like either a miscalibrated sense of awe (i.e. very different priorities), or like a reaction to private information.
Well, to a first approximation, on a moral level, Quirrell is who I try not to be and Hermione is who I wish I was, and on the level of intelligence, it’s not possible for me to be viscerally impressed with either one’s intellect since I strictly contain both. Ergo I find Hermione’s choices more impressive than Quirrell’s choices.
Quirrel strikes me as the sort of character who is intended to be impressive. Pretty much all his characteristics hit my “badass” buttons. The martial arts skills, the powerful magical field brushing at the edges of Harry’s little one, etc. However, I wouldn’t want to be like Quirrel, and I can’t imagine being Quirrel-like and still at all like myself. Whereas Hermione impresses me in the sense of being almost like a version of myself that gets everything I try to be right and is better than me at everything I think matters. Hermione is more admirable to me than Quirrel, but my sense of awe is triggered more by badass-ness than admiration.
This surprises me, but I’m not sure what I’ve mismodelled. To my mind, Hermione is trusting about moral rules in a way that I wouldn’t have expected you to like that much, but perhaps it’s just a trait that I don’t like that much.
Harry seems more awesome to me because he has a strong drive to get to the bottom of things—not the same thing as intelligence, though it might be a trait that wouldn’t be as interesting in an unintelligent character. (Or would it be? I can’t think of an author who’s tried to portray that.)
I would be fascinated to read a character who can Get Curious and think skeptically and reflectively about competing ideas, but is only of average intelligence.
Trying to model this character in my head has resulted in some sort of error though, so there’s that...
Arguably Watson is an attempt at this.
Except Watson was intended to be above average intelligence, but below Sherlock-level intelligence, so he fails on the last account. He was very intelligent, just not as absurdly intelligent as Sherlock, so he appeared to be of average or lower intelligence.
The Millionaire Next Door may include a bunch of people who can think clearly without being able to handle a lot of complexity.
Amazon link. The primary takeaway from the book is that high consumption and high wealth draw from the same resource pool, and so conflict.
In general, I wonder if this shows up as characters who see virtue as intuitive, rather than deliberative. Harry sometimes gets the answer right, but he has to think hard and avoid tripping over himself to get there; Hermione often gets the answer right from the start because she appears to have a good feel for her situation.
Moving back to wealth, and generalizing from my parents, it’s not clear to me that they sat down one day and said “you know how we could become millionaires? Not spending a lot of money!” rather than having the “consume / save?” dial in their heads turned towards save, in part because “thrift ⇒ wealth” is an old, old association.
If you model intelligence differences as primarily working memory differences, it seems reasonable to me that high-WM people would be comfortable with nuance and low-WM people would be uncomfortable with it; the low-WM person might be able to compensate with external devices like writing things down, but it’s not clear they can synthesize things as easily on paper as a high-WM person could do in their head (or as easily as the high-WM person using paper!).
Maybe Next Door? Or am I missing something?
Just a typo (now corrected) rather than a joke or reference.
I can imagine writing this character, because it’s the way I feel a lot of the time… Knowing I read some important fact once but not being able to retrieve it, lacking the working memory to do logic problems in my head and having to stop and pull out pen and paper, etc. I’m arguably of somewhat higher than average intelligence, but I’m quite familiar with the feeling of my brain not being good enough for what I want to do.
This is exactly what I was trying to describe, and this happens to me as well. If you ever do write such a figure, be sure to let me know, I’d like to read about them. :)
One of my previous novels somewhat touches on this. The main character is quite intelligent, but has grown up illiterate, and struggles with this limitation. If you want to check it out, see here.
Coincidences are funny: my name happens to be Asher.
I’ll put this on my reading list.
That’s a dangerous combination.
Those personality traits are not just correlated with intelligence, they almost certainly cause it—thinking is to some degree a skill set, and innate curiosity + introspection + skepticism would result in constant deliberate practice. So those traits + average intelligence can only coexist if the character has recently undergone a major personality change, or suffered brain damage.
Time to taboo intelligence.
Question for those who’ve tracked MOR more carefully than I have: How much is Harry’s curiosity entangled with his desire for power?
That’s probably why. For many mere mortals like myself MoR!Quirrell is simply awesome: competent, unpredictable, in control, a level above everyone else. Whereas MoR!Hermione is, while clever and knowledgeable, too often a damsel in distress, and her thought process, decisions and actions are uniformly less impressive than those of Harry or Quirrell. Not sure if this is intentional or not. At this point I’m rooting for Quirrell to win. Maybe there will be an alternate ending which explores this scenario.
Is this simply a case of rooting for whoever looks like they’re going to win?
You think that [I think that] Quirrell/Voldemort is going to win? O.O I wish. After all, what’s the worst that can happen if he does?
Well, I meant the question as a question, not as a rhetorical statement.
That aside, I do think it’s possible to be affected by the tendency to admire what appears currently to be the winning team even if I suspect, or even believe, that they will eventually lose. Human knowledge is rarely well-integrated.
That aside, I haven’t read HP:MOR in a very long time, so any estimates of who wins I make would be way obsolete. I don’t even quite know what Quirrell/Voldemort’s “win conditions” are. So I have no idea what can happen if he does.
That said, I vaguely recall EY making statements about writing Quirrell that I took at the time to mean that EY is buying into the sorts of narrative conventions that require Quirrell to not win (though not necessarily to lose).
I think either Harry will win, or everybody will lose.
Wait, all of her? Including the obnoxious controlling parts?
I’ll hazard a guess that your concepts have more internal structure than those of most people. You’ve probably looked at the interactions between the concepts you’ve learned, analyzed them, and refined them to be more intensional and less extensional. Whereas for most people, the concept “awesome” is a big bag of all the stuff they were looking at when someone said “Awesome!”
And that you probably haven’t watched stuff like Triumph of the Will to understand why Nazi aesthetics and propaganda could be so effective.
Clearly reducing the number of disgusting Untermenschen and increasing the Lebensraum for the master race is awesome if you consider yourself to be one of the latter.
[EDIT] Hmm, feels like a knee-jerk downvote. Maybe I’m missing something.
You totally are. The point of Goetz’s comment and mine was not that Hitler was ‘awesome’ simply because of ordinary in-group/out-group dynamics which apply to like every other leader ever and most of whom are not particularly ‘awesome’; the point was that Hitler and the Nazis were unusually ‘awesome’ in appreciating shock and awe and technocratic superiority and Nazi Science (sneers at unimpressive projects) and geez I even named one of the stellar examples of this, Triumph of the Will, which still remains one of the best examples of the Nazi regime’s co-option of scientists and artists and film-makers and philosophers to glorify itself and make it awesome. It’s an impressive movie, so impressive that
or
(Oh, how that must burn in the hearts of American feminists: the great female filmmaker is not American, and was a lackey of the Nazis.)
I watched it once, and was very impressed, personally.
Take a moment to savor that. Hitler and Naziism are the ultimate embodiment of evil in the West; modern cinema, invented in America & Europe, is still recognized the world around as one of the quintessentially Western mediums. But here we have a propaganda movie, produced to glorify Hitler and Naziism, 100% glorifying Hitler and Naziism, featuring only Hitler and Naziism, commissioned by Hitler who “served as an unofficial executive producer”, etc etc, and not only has it not been consigned to the dust-heap of history, it is still watched, admired, and studied by those who are sworn to hate Naziism and anything even slightly like it, such as eugenics or criticism of Israel.
Now that is “awesome”.
Also, I had a fit of the far view, and it occurred to me that Germany was rather a medium-sized country (I’m so used to continental superpowers, but the world wasn’t always like that), and it tried to become a large country, and it took a big alliance of the other major powers to take it down. This is awesome from a sufficient distance.
They had a population of 70 million (probably after eating Austria), which was quite a lot at the time, compared to 48 million for Britain and about as many for France. The only more populous independent countries in 1939 were China, the USA, the USSR, and perhaps Japan.
Meh. Japan does seem like it was higher, according to projections. (http://www.wolframalpha.com/input/?i=country+populations+1939)
Wikipedia says 73 million in 1940. For Germany it says 69.3 million in 1939 and almost exactly 70 million, but apparently without the annexed populations of Austria and Sudetenland, which I estimate at about 10 million or more.
Edit: not sure whether to include the population of Korea into Japan’s statistics, which would make Japan more populous than Germany with certainty. The 73 million figure is without Korea.
That was the German narrative, was it not? Starting from the avowed English-French ‘encircling’ of Germany—why do you think they were allied in the first place with decrepit Poland?
I don’t understand why this was downvoted :( I upvoted it because it’s a good point and true. Is it too understanding to Nazis?
Probably. I remember a similar conversation where I posted a Wittgenstein quote lambasting mindless British nationalism in a WWII context, and VladimirM stepped in to defend said nationalism to many upvotes.
It’s not very rational to vote down a fact >:( It’s not even politics like that one, just the things they believed. Is there any post on bias against the poor Nazis? It seems a bad plan, if you want human rationality, to tar facts about them with the same brush as their evil deeds.
Not really. It falls under standard biases like ‘horns effect’ (dual of ‘halo effect’). Sometimes LWers point out in comments good aspects of the Nazis, like their war on cancer & work on anti-smoking, or animal cruelty laws, but no one’s written any sort of comprehensive discussion of this.
The closest I can think of is Yvain’s classic post on religion: http://lesswrong.com/lw/fm/a_parable_on_obsolete_ideologies/
I’m thinking this evil halo effect regarding Nazis is the most common bias in our civilization; we all know about Godwin ;) but most people who come here probably have a bit of this stuff in their head. If we know this is true, maybe it should be fought (or is the benefit from no Jew-bashing allowed so huge that it’s OK?)
There’s not really any benefit from fixing that bias, though. So the Nazis were expressing a general German sentiment in disliking the Franco-British grand-strategic encirclement. So they had some great policies on health and animals. Why does any of that really matter to non-historians?
The best I can think of is it makes for an interesting sort of critical thinking or bias test: give someone a writeup of, say, Nazi animal welfare policies & reforms, and see how they react. Can they emit a thoughtful reply rather than canned outrage?
That is, if they react ‘incredible how evil Nazis were! They would even steal animal rights to fool good people into supporting them!’ rather than ‘huh’ or ‘I guess no one is completely evil’ or ‘I really wonder how it is possible for us humans to compartmentalize to such an extent as to be opposed to animal cruelty and support the Holocaust’, you have learned something about them.
For most people, eugenics (even the good kinds) is evil Nazi stuff, and this can make even helpful GM count as evil.
But we fail the test thus our sanity waterline could be raised :(
We don’t fail the eugenics test, though. So that’s evidence that maybe our waterline could be higher but it is higher than elsewhere.
I realize this is super belated and may not actually be seen, but if I get an answer, that’d be cool:
If we see the horns effect in how people talk about Nazis as evidence that our sanity waterline could be raised, wouldn’t trying to fight the thing you’re calling “bias against the poor Nazis” be like trying to treat symptom of a problem instead of the problem itself?
Examples I can think of that might illustrate what I mean:
Using painkillers instead of (or before?) finding out a bone is broken and setting it.
Trying to teach a martial arts student the routine their opponent uses instead of teaching them how to react in the moment and read their opponent.
Teaching the answers to a test instead of teaching the underlying concept in a way that the student can generalize.
It seems to me that doing that would only lead to reducing the power of the “Nazi response” as evidence of sanity waterline.
sidenote: I’m finding this framing really fascinating because of how I see the underlying problem/topic generalizing to other social biases I feel more strongly affected by.
Minor note: According to an article in Wired recently, the Nazis invented 3D movies.
Can’t we resolve this simply by amending the statement to “Morality is awesome for everybody.” Dying pathetically is not an awesome outcome for the people who had to do it. Arguing that innocent people were pathetic actually emphasizes the point. If Hitler’s actions made tons of people pathetic instead of awesome then those actions were most certainly immoral.
Incidentally, I do not expect nyan_sandwich to retitle the OP based on my comment. I think that the “for everybody” part can probably just be implicit.
Exactly right. In fact I do this explicitly, by invoking “fake utility functions” in point 2.
You’re right I’m playing fast and loose a bit here. I guess my “morality is awesome” idea doesn’t work for people who are in possession of the actual definition of awesome.
In that case, depending on whether you are being difficult or not, I recommend finding another vaguely good and approximately meaningless word that is free of philosophical connotations to stand in for “awesome”, or just following the “if you are still confused” procedure (read metaethics).
Perhaps. I certainly wouldn’t endorse it in general. I have inside view reasons that it’s a good idea (for me) in this particular case, though; I’m not just pulling a classic “I don’t understand, therefore it will work”. Have you seen the discussion here?
I’m confused about what you are saying. Here you seem to be identifying consequentialism with “awesome”, but above, you used similar phrasings and identified “awesome” with Space Hitler, which nearly everyone (including consequentialists) would generally agree was only good if you don’t look at the details (like people getting killed).
Can you clarify?
Was Space Hitler awesome? Yes. Was Space Hitler good? No. If you say “morality is what is awesome,” then you are either explicitly signing on to a morality in which the thing to be maximized is the glorious actions of supermen, not the petty happiness of the masses, or you are misusing the word “awesome.”
This doesn’t seem to pose any kind of contradiction or problem for the “Morality is Awesome” statement, though I agree with you about the rest of your comment.
Is Space Hitler awesome? Yes. Is saving everyone from Space Hitler such that no harm is done to anyone even more awesome? Hell yes.
Remember, we’re dealing with a potentially-infinite search space of yet-unknown properties with a superintelligence attempting to maximize total awesomeness within that space. You’re going to find lots of Ninja-Robot-Pirate-BountyHunter-Jedi-Superheroes fighting off the hordes of Evil-Nazi-Mutant-Zombie-Alien-Viking-Spider-Henchmen, and winning.
And what’s more awesome than a Ninja-Robot-Pirate-BountyHunter-Jedi-Superhero? Being one. And what’s more awesome than being a Ninja-Robot-Pirate-BountyHunter-Jedi-Superhero? Being a billion of them!
Suppose a disaster could be prevented by foresight, or narrowly averted by heroic action. Which one is more awesome? Which one is better?
Tvtropes link: Really?
Preventing disaster by foresight is more likely to work than narrow aversion by heroic action, so the awesomeness of foresight working gets multiplied by a larger probability than the awesomeness of heroic action working when you decide to take one approach over the other. This advantage of the action that is more likely to work belongs in decision theory, not your utility function. Your utility function just says whether one approach is sufficiently more awesome than the other to overcome its decision-theoretic disadvantage. This depends on the probabilities and awesomeness in the specific situation.
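(A minimal sketch of the expected-value comparison described above, with hypothetical numbers standing in for the probabilities and awesomeness values:)

```python
# Hypothetical numbers only: the decision weighs each option's
# awesomeness-if-it-works by its probability of working.

p_foresight, awesome_foresight = 0.9, 10   # quietly prevented disaster
p_heroics, awesome_heroics = 0.3, 25       # narrowly averted disaster

expected_foresight = p_foresight * awesome_foresight  # 9.0
expected_heroics = p_heroics * awesome_heroics         # 7.5

# Heroics wins only if its extra awesomeness overcomes its reliability penalty.
print("prefer foresight" if expected_foresight > expected_heroics else "prefer heroics")
```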
My numerous words are defeated by your single link. This analogy is irrelevant, but illustrates your point well.
Anyway, that’s pretty much all I had to say. The initial argument I was responding to sounded weak, but your arguments now seem much stronger. They do, after all, single-handedly defeat an army of Ninja-Robot-… of those.
Reading this comment thread motivated me to finally look this up—the words “cheesy” and “corny” actually did originally have something to do with cheese and corn!
Upvoted; whatever its relationship to what the OP actually meant, this is good.
Reminding yourself of your confusion, and avoiding privileging hypotheses, by using vague terms as long as you remember that they’re vague doesn’t seem so bad.
I would say that the world being taken over by an evil dictator is a lot less awesome than the world being threatened by an evil dictator who’s heroically defeated.
I think your post is aimed too high. Nyan is not trying to resolve the virtue ethics / deontology / consequentialist dispute.
Instead, he’s trying to use vocabulary to break naive folks out of the good --> preferences --> good circle.
At that level of confusion, the distinction between good, virtue, or utility is not yet relevant. Only after people stop defining good in an essentially circular fashion is productive discussion of different moral theories even possible.
Attacking Nyan for presuming moral realism is fighting the hypothetical.
You appear to have invented your own highly specific meaning of “awesome”, which appears synonymous with “effective”. As such, “awesome” (in my experience generally used as a contentless expression of approval, more or less, with connotations of excitingness) is not fulfilling its intended goal of intuition-pump for you. Poor you. Those of us who use “awesome” in the same way as nyan_sandwich, however, have no such problem.
That is explicitly the goal here—to use the vague goodness of “awesome” as a hack to access moral intuitions more directly.
Actually, aren’t there existing connotations of “awesome”—exciting, dramatic and so on—for everyone?
Not ones that interfere with the technique, hopefully.
The purpose of using awesome instead of good failed in this case. If you think that rocketry is more awesome than genocide is lame (e.g.), then you think Hitler increased awesomeness.
Is it “awesome” to crush your enemies, see them driven before you, and hear the lamentations of the women?
Is it “awesome” to be the one who gets crushed?
Maybe. (Though Conan would disagree. I’m sure we could have a nice discussion/battle about it.) I think the balance of awesomeness does come out against it. As awesome as it is to crush your enemies, I don’t really like people getting hurt.
I notice that using “awesomeness” gives a different answer (more ambiguous) than “right” or “good” in this case. I think this is a win because the awesomeness criterion is forcing me to actually evaluate whether it is awesome, instead of just trying to signal.
No.
I would have said “hell yes!” to the first one. At least it’s awesome for you… but not so much for the people you’re crushing. As Mel Brooks said, it’s good to be the king—having power is awesome, but being subject to someone else’s power generally isn’t.
Awesomeness for who?
I would suggest the heuristic that hedonism is about maximizing awesomeness for oneself, while morality is about maximizing everyone’s awesomeness.
I’d pretty much agree. Plus cooperation and… whatever we call the stuff behind the golden rule can lead to weak optimization of awesomeness for everybody with only self-awesomeness strongly desired.
Uh… “morality” is about maximising everyone’s awesomeness? Using what metric?
This is the entire basis for good economists’ objections to the supposed utilitarian basis for the State: it is plain that utility (awesomeness) is not summable-across-people… in fact in all likelihood it is not intertemporally summable for an individual (at a given point in time), since discount rates are neither time-stable nor predictable.
So seeking to maximise the present value of all future social utility (the claimed rationale of ‘democracy’ advocates) seems to me an exercise so laden with hubristic nonsense, that only megalomaniacal sociopaths would dare pretend that they could do so (and would do so in order to live in palaces at everyone else’s expense).
How about this: morality is about letting individuals do as they like, so long as their doing so does not impose costs on others.
Then morality would be about letting babies eat pieces of broken glass, and yet that’s not the moral calculation that our brain makes. Indeed our brain might calculate as more “moral” a parent who vaccinates his children against their will, than a parent who lets them eat broken glass as they will.
I wonder if you’re mistaking the economico-political injunctions of e.g. libertarianism for moral evaluations. Even if you’re a libertarian, they’re really really not. What the optimal system is for the government to do (or not do) has little to do with what is calculated as moral by our brains.
Babies are people and a lacerated oesophagus is a cost. You need a better example.
Kratoklastes spoke about morality being “letting individuals do as they like, so long as their doing so does not impose costs on others.” The baby’s action imposes a cost on itself.
Again ArisKatsaris—the “correct-line-ometer” prevents me from responding directly to your comment (way to stifle the ability to respond, site-designers!). So I’ma put it here...
It was a comment—not a thesis, not a manifesto, not a monograph, and certainly not a “description of what morality entails”.
To assert otherwise is to be dishonest, or to be sufficiently stupid as to expect a commenter’s entire view on an important aspect of moral philosophy to be able to be transmitted in (roughly) 21 words (the bold bit at the end). Or to be a bit of both, I guess—if you expect that it will advance your ends, maybe that suffices.
Here’s something to print out and sticky-tape to your monitor: if ever I decide to give a complete, sufficient explanation of what I think is a “description of what morality entails”, it will be identified as such, will be significantly longer than 21 words, and will not have anything to do with programming an AI (on which: as a first step, and having only thought about this once since 1995, it seems to me that it would make sense to build in the concept of utility-interdependence, the notion of economic efficiency, and an understanding of what happens to tyrants in repeated, many-player dynamic games.)
Downvoted, as I will be downvoting every comment of yours that whines about downvotes from now on. Your downvotes have nothing to do with your positions, which are pretty common in their actual content around these parts, and everything to do with your horrid manner and utter incapacity of forming sentences that actually communicate meanings.
And as such it was judged and found wanting.
Then it shouldn’t have started with the words “morality is about...”
It’s you who put it in bold letters. Perhaps you should stop emphasizing sentences which aren’t important.
A sentence can be important without being the complete rendition of one’s views on a topic: you’re being dishonest (again).
Seriously, if you spent as much mental effort on bringing yourself up to speed with core concepts as you do on misdirection and trying to be everyone’s schoolmarm, the community (for which you obviously purport to speak) would be better off.
I note that you didn’t bleat like a retarded sheep and nitpick the idea to which I was responding, namely that morality was about maximising global awesomeness (or some other such straight-line-to-tyranny). No demand for definitions of terms, no babble about how that won’t do for coding your make-believe AI, no gabble about expression.
And last but not least—given that you’ve already exhibited ‘bounded literacy’: what gives you the right to judge anybody?
I’m not going to demand that we compare academic transcripts—you don’t have a hope on that metric—just some indication apart from “I feel strongly about this” will suffice. Preferably one that doesn’t confuse the second person possessive with the second person (present tense) of the verb “to be”.
I suggest you not be so hasty to accuse people of dishonesty. Downvoted without comment from now on, since you seem incapable of doing anything other than insulting people and accusing them of various crimes.
I also find it bitterly amusing that someone who admitted to taking delight in trolling people presumes to even have an opinion about morality, let alone accuse others of dishonesty. And indeed you clearly don’t have an opinion about morality; you obviously only have opinions about politics and keep confusing the two concepts. All your babble about tyranny-tyranny-state-violence-whatever would still not be useful in helping a five-year-old learn why he should be nice to his sister or polite to his grandmother, or explain why our brain evaluates it morally better to make someone feel happy than to make them feel sad, all else being equal.
If you have a moral sense, instead of just political lectures, you’ve yet to display it at all.
Yeah, so I’ll just leave this here… (since in the best tradition of correct-line-ism, mention of ‘correct line’ cultism perpetrated by the morally-omniscient Aris Katsaris results in… ad hoc penalisation by the aforementioned Islamophobe and scared “China and Russia will divide and conquer Europe” irrational fearmonger).
Not only are you an economic ignoramus (evidenced by the fact that you had no idea what transitivity of preferences even MEANT until late December 2012) but you’re also as dishonest as the numbskull who is the front-man for Scientology.
If you can’t read English, then remedial language study is indicated: apart from that you’re just some dilettante who thinks that he doesn’t have to read the key literature in ANY discipline before waffling about it (“I haven’t read Coase”… “I haven’t read Rand”… “I haven’t read anything on existentialism”… “Can someone on this forum tell me if intransitive preferences implies irrationality?”).
You’re a living, breathing advertisement for Dunning-Kruger.
Wait—don’t tell me… you aren’t aware of their work. Google it.
Here’s the thing: if I was as dishonest as you are, I would get together 6 mates and drive your ‘net’ Karma to zero in two days. It is so stupidly easy that nobody who’s not a retard thinks it’s worth doing.
And the big problem you face is that I don’t give a toss what number my ‘karma’ winds up at: this is the internet.
I’ve been on the web for a decade longer than you (since the WANK hack, if that means anything to you, which I doubt): I know this stuff back to front. I’ve been dealing with bloviating self-regarding retards like you since you were in middle-school (or the Greek equivalent).
You do NOT want this war: you’re not up to it, as evidenced by the fact that you think that all you need to do outside of your narrow discipline (programming, no) is bloviate. Intellectual battles are not won or lost by resorting to stupid debating tactics: they are won by the people who do the groundwork in the relevant discipline. You’re a lightweight who does not read core material in disciplines on which you pontificate, which makes you sound like a pompous windbag anywhere other than this site.
You would be better off spending your time masturbating over Harry Potter (which is to literature what L Ron Hubbard is to theology) or hentai… and writing turgid pretentious self-absorbed fan fiction.
Ga Muti. (or Ka muti if you prefer a hard gamma).
You not only made a ludicrous attempt to supposedly shame me by googling previous stuff about me, but your attempt to do so is as much of a failure as everything else you’ve posted—it took me a min to figure out what the hell you were even referring to in regards to “transitive preferences”. You are referring to an Ornery forum discussion where someone else asked that question, and I answered them—not a question I asked others.
Your reading comprehension fails, your google-fu fails, etc, etc...
You also don’t seemingly see a discrepancy between your constant accusations of me supposedly being “dishonest” and yet how I openly admit my levels of ignorance whenever such ignorance may be relevant to a discussion?
Nor do you seemingly see a problem with so easily accusing me of such a serious moral crime as dishonesty, without the slightest shred of evidence. Is this what your moral sense entails, freely making slanderous accusations?
Really? You went straight for a baby right off the bat? A baby is an actor which is specifically not free to do as it wishes, for a large range of very sensible reasons—including but not limited to the fact that it is extremely reliant on third parties (parents or some other adult) to take care of it.
I’m not of the school that a baby is not self-owning, which is to say that it is rightly the property of its parents (but it is certainly not the property of uninvolved third parties, and most certainly not property of the State): I believe that babies have agency, but it is not full agency because babies do not have the capacity to rationally determine what will cause them harm.
Individuals with full agency should be permitted to self-harm—it is the ultimate expression of self-ownership (regardless of how squeamish we might be about it: it imposes psychic costs on others and is maybe self-centred, but to deny an individual free action on the basis that it might make those nearby feel a bit sad, is a perfect justification for not freeing slaves).
Babies are like retards (real retards, not internet retards [99% of whom are within epsilon of normal]): it is rational to deny them full liberty. Babies do not get to do as they will. (But let’s not let the State decide who is a retard or mentally ill—anybody familiar with the term ‘drapetomania’ will immediately see why).
Yes. Of course. It was the blatant flaw in your description of morality. Unless one addresses that one first, there’s hardly a need to discuss subtleties.
If you’re creating exceptions to your definition of morality for “sensible reasons”, you should hopefully also understand that these sensible reasons won’t be automatically understood by an AI unless they’re actually programmed in. Woe unto us if we think the AI will just automatically understand that when we say “letting individuals do what they will” we mean “individuals with the exception of babies and other people of mentally limited capacity, in whose case different rules apply, mostly having to do with preserving them from harm rather than letting them do whatever”.
In short your description of what morality entails isn’t sufficient, isn’t complete, because it relies on those unspoken and undescribed “sensible reasons”. Once the insufficiency was shown to you, you were forced to enhance your description of morality with ideas like “full agency” and the “capacity to rationally determine what will cause them harm”.
And then you conceded, of course, that it’s not just babies: other people with limited mental capacity also fall in that category. Your description of morality seems more and more insufficient to explain what we actually mean by morality.
So, do you want to try to redescribe “morality” to include all these details explicitly, instead of just going “except in cases where common sense applies”?
That is not that great of an idea. As Muga says, nearly any action imposes a cost on others. High-powered individuals cannot necessarily step lightly. Plus your idea is even more vulnerable to utility monsters than utilitarianism, since it only requires people with moderately unusual or nosy preferences (if you kiss your gay/lesbian lover in public, are you imposing costs on homophobes?), and it will lead to stupidly complex, arbitrary rules for determining what counts as imposing costs on others. Plus the goddamned coordination problems.
I’m not saying your criticisms are bad (I don’t actually understand them). I am just saying that non-initiation of force, or forms of it, is distinctly unworkable as a base rule no matter how good of a heuristic it is.
That might be what you imagine “my” idea to involve, but it isn’t.
There is a perfectly sensible, rational way to determine if people’s supposed hurt feelings impose actual costs: ask them to pay to ameliorate them. Dislike watching gay folks kiss? Pay them not to. (I dislike watching anybody kiss—that’s just me—but not enough to be prepared to pay to reduce the incidence of public displays of affection).
What’s that? There are folks who are genuinely harmed, but don’t have the budget to pay for amelioration? That’s too bad—and it’s certainly not a basis for permitting the creation of (or continued existence of) an entity whose purposes have—always and everywhere—been captured and perverted, and ruined every economic system in history.
And not for nothin’… it’s all fine and dandy to blithely declare that “high powered people cannot necessarily step lightly” as if that disposes of 500 years worth of academic literature criticising the theoretical basis for the State: at this point in time no State is raining death on your neighbourhood (but yours is probably using your taxes—plus debt written in your name—to rain death on others).
Let’s by all means have a discussion on the idea that the non-initiation of force is ‘unworkable’ - that’s the same line of reasoning that declared that without the Church holding a monopoly to furnish moral guidance, we would all descend to amoral barbarism. These days churches are voluntary (and Popes still live in palaces) - and violent crime is on a secular downtrend that has lasted the best part of a century. And so it will be when the State goes away.
Are you really saying that an action can be recognized as moral or immoral depending on whether other people are willing to pay money to stop it, or am I grossly misunderstanding you?
That would mean that the hiring of thugs to beat up other people who engage in e.g. “sinful behavior” would serve as proof (not just evidence, but effective proof) that the person doing the hiring is on the moral side, just because they’re willing to pay money to have such people beaten up.
Your description of morality is becoming more and more incoherent.
There are a large number of things whose utility is very subjective (free Justin Bieber CDs anyone?) and a small number of things that are of utility to almost everyone. These include health, education and money, which are just the sorts of things states tend to concern themselves with. It seems the problem has already been solved.
The problem has been ‘solved’ only to the extent that people accept 2nd-year Public Finance as it’s taught to 3rd-year economics students (which stops short of two of the most critical problems with the theory - bureaucratic capture/corruption, and war—and a third… the general equilibrium effects of a very large budget-insensitive actor in goods and factor markets).
The basic Pub Fi model is that the existence of publicness characteristics in some goods means that some markets (health, education, defence) are underexpanded, and some (pollution, nuisance) are over-expanded, due to social benefits and costs not being taken into account… which—on a purely utilitarian basis—are to be resolved by .gov… from there you learn that the diminishing marginal utility of money means that taxing income progressively imposes the lowest “excess burden” on society. Then you add up all the little Harberger (welfare) triangles and declare that the State serves a utility-optimising function.
HUGE problem (which I would have thought any rational individual would have spotted) is that the moment you introduce a ‘.gov’, you set on the table a giant pot of money and power. In democracies you then say “OK, this pot of money and power is open for competition: all you have to do is convince 30% of the voting public, which is roughly the proportion of the population who can’t read the instructions on a tin of beans (read any LISS/ALSS survey)… and if you lie your head off in doing so, no biggie because nobody will punish you for it.”
Who gets attracted by that set of incentives? Sociopaths… and they’re the wrong people to have in charge of the instrument that decides which things “are of utility to everyone” (and more to the point, decides how much of these ‘things’ will be produced… or more accurately how much will be spent on them).
Others’ education is not of utility to me, beyond basic literacy (which the State is really bad at teaching, especially if you look at value for money) - and yet States operate high schools and universities, the value of which is entirely captured by higher lifetime incomes (i.e., it’s a private benefit).
Others’ health is not of utility to me, except (perhaps) for downstream effects from vaccination and some minimal level of acute care - and yet States run hospitals with cardiac units (again, things with solely private benefits).
And State-furnished money is specifically and deliberately of lower utility year on year (it is not a store of constant value).
You seem to think that in the absence of a State, the things you mention will ‘go to zero’ - that’s not what “public goods” implies. It simply implies that the amount produced of those goods will be lower than a perfectly informed agent (with no power in or effect on factor markets) would choose—that is, that the level of production that actually occurs will be sub-optimal, not that it will be zero. And always in utils—i.e., based on the assumption that taking a util from a sickly child and giving it to Warren Buffett is net-social-utility-neutral.
Also… what do States do intertemporally? Due to the tendency of bureaucracies to grow, States always and everywhere grow outside of the bounds of their “defensible” spheres of action. Taxes rise, output quality falls (as usual for coercive monopolies), debts accumulate. Cronies are enriched.
And then there’s war: modern, industrial scale, baby-killing. All of those Harberger triangles that were accumulated by ‘optimally’ expanding the public goods are blown to smithereens by wasting them on cruise missiles and travelling bands of State sociopaths.
There is a vast literature on the optimality of furnishing all ‘critical’ State functions by competitive processes—courts, defence, policing etc—Rothbard, Hoppe and others in that space make it abundantly clear that the State is a net negative, even if you use the (ludicrously simplistic) utilitarian framework to analyse it.
So yeah… the problem has been solved: just not in the way you think it has.
Ignoring for a moment the fact that pretty much all actions impose “costs” on others via opportunity costs, and ignoring the fact that economists are not ethicists … this is intended simply as an intuition pump. If you want details and “metrics” then read the metaethics sequence.
This one’s for you, ArisKatsaris—the “correct-line-ometer” prevents me from responding directly to your response, so I’ma put it here.
I have checked what I wrote, and nowhere did I write that the (well understood) Coase-style arguments about how to ameliorate nuisances, had anything to do with morality. They are something that any half-decent second-year Economics student has to know, on pain of failing an important module in second-year Microeconomics: it would be as near to impossible as makes no odds, to get better than a credit for 2nd year Micro without having read and understood Coase. So if you think I made it up from whole cloth, I suggest you’ve missed a critical bit of theory.
So anyhow… if you want to go around interpolating things that aren’t there, well and good—it might pass muster in Sociology departments (assuming universities still have those), but it’s not going to advance the ball any.
As to ‘hiring thugs to beat up people who engage in’ [insert behaviour here]… well, that seems a perfect analysis of what the State does to people who disagree with it: since I advocate the non-aggression principle, I would certainly never support such a thing (but let’s say I did: at least I would not be extorting the money used to pay for it).
This is a funny place—similar to a Randian cult-centre with its correct-line “persentio ergo rectum” clique and salon-intellectualism.
Kratoklastes, your arguments are clumsy, incoherent, borderline unreadable. Your being downvoted has nothing to do with “correct lines” or not, since we have a goodly number of libertarians in here (and in fact this site has been accused of being a plutocrat’s libertarian conspiracy in the past), it has to do with your basic inability to form coherent arguments or to address the points that other people are making. And also your overall tone, which is constantly rude as if that would earn you points—it doesn’t.
And yet your argument DOES support it: you argued that someone can prove that a cost is imposed on them by being willing to pay money to stop such behaviour. You’ve argued that morality is about letting people do as they will as long as they aren’t imposing costs on others.
Can you really not see how these two statements fit together so that your argument ends up excusing all that state violence which you decry? The theocratic Iranian state, after all, is composed of people, who prove that sinners are imposing costs on them by being willing to pay money (e.g. morality police wages) to stop such sinful behavior.
That’s your argument, though you didn’t realize you made it—because you just never seem to realize the precise meaning and consequences of your words.
And as for your babble about Coase, I never mentioned Coase, I’ve never read Coase, and that whole paragraph is just a further example of your incoherence.
“Your [sic] being downvoted”… hilarious: you’re showing the world that you can’t write an English sentence—which is hilarious given your prior waffle about “the precise meaning and consequences of [] words”.
Pretend it was a typo (which just happened to be the “you’re/your” issue, which is second to “then/than”, with “loose/lose” in third, as a marker of a bad second-rate education).
Make sure you go back and cover your tracks: you can edit your comments to remove glaring indications of a lack of really fundamental literacy. Already screencapped it anyhow.
That’s worth an upvote, for the pleasure it has brought me today.
“Your” is indeed the correct form, as it modifies the gerund phrase “being downvoted.”
Awkward.
ArisKatsaris used the correct form of “your” here.
Also, you guys are filling the recent comments section with your flamewar. Please take it to a private conversation, as it has little value for the rest of us.
Oh please… what sad, sophomoric nonsense. “Down-votes” are for children: in my entire life on the internet (beginning in 1993) I’ve never down-voted anything—anywhere—because down-votes are for self-indulgent babies who are obsessed with having some minuscule, irrelevant punitive capacity. It’s the ultimate expression of weakness.
Key point: if you have never read Coase—a fundamental (arguably the fundamental) contribution to the literature on nuisance-abatement in economics and the law—then you’re starting from a handicap so great that you can’t even participate sensibly in a discussion of the concept, because (here’s the thing...) it starts with Coase. It would be like involving yourself in an argument about optimal control, then bridling at being expected to have heard of calculus.
You brought the Coase issue into play by implicitly asserting that the “willingness-to-pay-to-abate” idea was mine—showing that you were gapingly ignorant of a massive literature in Economics that bears directly on the point.
While we’re being middle-school debaters, I will point out that your paraphrase should properly have been “Letting people do as they will as long as they are not imposing costs on others” (don’t mis-paraphrase my paraphrase: it’s either sloppy or dishonest—or both).
But let’s look at that statement: what bit of that would preclude the right to hire gangs to do violence to a peaceful individual? The bit about imposing costs maybe?
And nonsense like “That’s your argument, but you don’t even know it” is simply ludicrous—it’s such a hackneyed device that it’s almost not worth responding to.
Let’s just say that right through to Masters level I had no difficulty in making clear what my arguments were (I dropped out of my PhD once my scholarship ran out), and nothing has changed in the interim. Maybe you’re just so much smarter than the folks who graded me, and thus have spotted flaws that they missed. All while never having had to stoop to read Coase. Astounding hubris.
Another thing: the people of Iran do not contribute willingly to the funding of their state. That was not even a straw man argument—it was more like the ashes of a particularly sad already-burned straw man, that was made by a kid from the short bus.
Now some Iranians might be perfectly willing to fund State terror (just as some Americans are happy to fund drone strikes on Yemeni children) - ask yourself what budget there would be for the ‘religious police’ in Iran if the payment of taxes was entirely voluntary. The whole thing about a State is that it specifically denies expression of individual preference on issues of importance to the ruling clique: war, state ideology, internal policing, and revenue-collection.
I am amused that you used Iran as the boogie-man du jour, given that Iran actually has no purpose-specific “religious police”. The Saudi mutaween are far more famous, and their brief is specifically and solely to enforce Shari’a. The Iranian government has VEVAK (internal security forces, who do not police religious issues) and the Basij—the Basij does some enforcement of dress codes, but apart from that they’re nowhere near the level of oppression as in Saudi Arabia, and do not exist specifically to enforce religious doctrine (unlike the mutaween).
Hey there ArisKatsaris… So we’ve moved back up here now—with Katsaris the Morally-Unimpeachable taking full advantage of the highly-biased comment-response system (which prevents people from responding to Katsaris’ gibberish directly unless they have sufficiently fellated the ruling clique here). And “downvoting without comment”—apart from being so babyish that it qualifies for child support - enables something of an attempt to control the dialogue.
Eventually we will get to the nub, which is that Katsaris the Morally-Unimpeachable thinks that the State is necessary (of course without ever having examined what his betters throughout history have thought about the issue—reading the literature is for lesser mortals): in other words, he has no understanding whatsoever of the dynamic consequences of the paradigm to which he subscribes.
Dishonesty takes many forms, young Aris: first and foremost is claiming expertise in a discipline in which you’re an ignoramus. Legally it’s referred to as “misleading and deceptive conduct” to attempt to pass yourself off as an expert in a field in which you have no training: compensation only happens if there had been a contract that relied on your economic expertise, of course… we can be thankful that’s not the case, however the overarching principle is that claiming to be an expert when you’re not is dishonest—the legal sanction is subsidiary to the moral wrong.
Secondly, it is dishonest to perform actions for reasons other than those that you give as justifications. Your Euro-fearmongering and Islamophobia (the Harris-Hitchens “We can bomb the brown folks coz some of them are evil” nonsense) mark you out as someone who has staunch political views, and those inform your decisions (your stated reasons are window-dressing—and hence dishonest).
You practice misdirection all the time. Again, dishonest.
You’re innumerate, too. That’s not a mark of dishonesty, it’s just a sign of someone who does not have the tools to be a decent analyst of anything.
We can do this for as long as you like: right up until you have to go stand in line for the next outburst of sub-moronic schlock from J K Rowling, if you like.
EDIT: some more stuff, just to clarify...
Here’s the thing: I don’t expect Aris “I Don’t Need to Read the Literature Before I Bloviate” Katsaris (the Morally-Unimpeachable) to have a sudden epiphany, renounce all the nonsense he believes, and behave like an adult.
What I expect to happen is that over time a small self-regarding clique will change their behaviour—because unless it changes, this site will be even more useless than it now is, and right now it’s pretty bad. Not Scientology bad, but close enough to be well outside any sensible definition of ‘rational’, and heading in the wrong direction.
Given you have enemies you hate deeply enough? Yes.
Having such enemies in the first place? Definitely not.
There are entire cultural systems of tracking prestige based around having such enemies; the vestiges of them survive today as modern “macho” culture. Having enemies to crush mercilessly, and then doing so, is an excellent way to signal power to third parties.
One could argue that that’s context-sensitive, but I think the best answer to that is mu.
So it seems to me that the positive responses to this post have largely been of the form “hey, this is a useful intuition pump!” and the negative responses to this post have largely been of the form “hey, this is a problematic theory of morality.” For what it’s worth, my response was in the former camp, so I’d like to say a little more in its defense.
One useful thing that using the word “awesome” instead of the word “moral” accomplishes is that it redefines the search space of moral decisions. The archetypal members of the category “moral decisions” involve decisions like whether to kill, whether to steal, or whether to lie. But using the word “awesome” makes it easier to realize that a much larger class of decisions can be thought of as moral decisions, such as what kind of career to aim for.
With the archetypal answers being “no”. Perhaps the word “morality” cues proscriptive, inhibitory, avoidance thoughts for you, while awesomeness cues prescriptive, excitatory, attractive ones.
A good point.
Awesome and moral clearly have overlap. How much?
There’s a humorous, satirical news story produced by The Onion, where the US Supreme Court rules that the death penalty is “totally badass”. And it is, even though badass-ness is not a criterion for deciding the death penalty’s legality.
Similarly, awesomeness makes me think of vengeance. Though some vengeance is disproportionate with the initial offense, and thus not so awesome, vengeance seems on the whole to have that aura of glorious achievement that you’d find at the climax of an action / adventure film. And yet that doesn’t really match my ideas of morality, though maybe I don’t feel strongly positively enough for the restoration of justice.
The idea that vengeance is awesome but not moral might be an artifact of looking at it from the victor’s side vs target’s side. So maybe we should distinguish between awesome experiences and awesome futures / histories / worlds.
But those were just the first distinctions between morality and awesomeness I thought of while reading. I’m probably missing a lot of stuff, since morality and awesomeness are both big, complicated things. They’re probably too big to think about all at once in detail, much less retrieve on a whim. Are there lists of moral and/or awesome stuff we can look at to better define their overlap?
Schwartz is a psychologist who came up with 10 factors of culturally universal values. I’d say the factors of power, achievement, pleasure, excitement, self-direction, and tradition sound like things you’d find in awesome worlds, while pleasure, universalism, benevolence, conformity, and security sound like things you’d find in worlds that are moral but not as awesome. Lovely and boring lives worth living. I included pleasure in both worlds, because that’s a hard one to skip on in valuable futures. I wonder how good of a weirdtopia someone could write that didn’t involve pleasure.
Anyway, that’s less haphazard, but still crude analysis. I mean, some tradition looks like narrative myths and impressive ceremonies, which are awesome, and some tradition looks like shaming people for being sexually abused, which is not awesome. So “tradition” doesn’t cut at the joints of awesome vs moral.
Is there a more fine grained list?
43 things is a popular website where people can list their goals, keep track of their progress, talk about why they failed, things like that. It’s probably biased toward far-mode endorsements, and misses out on a lot of aesthetics which aren’t neatly expressible as goal content, but it’s still an interesting source of data on morality and awesomeness. The developers of 43 things have a blog, where they do shallow statistical analysis, like listing top habit goals, but the lists are very short and have a lot of overlap.
“A Hierarchical Taxonomy of Human Goals” (Chulef+ 2001) lists 138 unique goals derived from psychological literature and 30 goal clusters.
If you look through the list, you’ll see a bunch of goals that start with “being”. Being ambitious, responsible, respected, etc. Also some appearance ones like “looking fit”. I think it’s fair to say that human goal content includes a fair bit of virtuousness, and that we could make a virtue theory of awesomeness just as much as a virtue theory of morality (though it might be too narrow a theory).
Sorting the big list turns out to be pretty hard, because the goals are a mix of awesomeness, boring morality, and other things. Like “Living close to family” initially sounds like a boring moral thing, but it sounds way cooler when they’re riding mechanical rocket dinosaurs with you and helping you take down the Dark Evil’s super weapon. Or even just “being a good parent”. That doesn’t sound as exciting as rocket dinosaurs, but neither can I quite bring myself to say that being a good parent is not totally awesome.
I did start sorting though. One thing that stood out is that awesome goals are more often about seeking and boring moral goals are more often about having. But that’s going off what I had when I accidentally closed the document unsaved, so small sample bias. I think I might have drifted back toward thinking about awesome experiences instead of awesome futures too. Or even just features of a high status life. And status clearly isn’t equivalent with morality. Oops.
In summary, I still really don’t know whether awesome futures are the same as moral futures, or whether awesome moral futures are the same as valuable ones.
--
This comment is stupid. Morality is usually used to describe actions, not experiences or world states or histories. Calling awesomeness the same as morality is a type mismatch. Also I put universalism in the boring category, even though I had just said that justice-y vengeance is awesome. And I said goals on 43 things are biased to far mode, which they’re not if you just look at them (neither are they near mode), and it doesn’t matter either way because I didn’t do anything with 43 things other than name drop it, like that stupid Onion sketch. And why did I bring up goal content at all? Goals are the products of valuation thought processes, not the constituents. We have psychology and neuroscience; we can just look at how awesomeness feelings work instead of imagining situations associated with goals and deciding, “hrm, yes, trench coats definitely sound awesome, I wonder what that tells me about morality”. And goals aren’t all that interesting for characterizing human values that aren’t moral ones, in that specific, social sense that Haidt talks about. Like I’m pretty sure not being horribly burned by fire is a common human value, and yet no one on 43 things wrote about it, and it’s only weakly implicit in Schwartz’ value taxonomy with the pleasure and security factors. And yet not burning people alive is probably a more important thing to ensure in the design of humanity’s future than making sure people can stay with their families or have high self esteem or have math parties.
Heinlein’s The Rolling Stones has a very elegant balance of home and adventure. The family lives in a space ship.
As badass as shooting a fish in a barrel. Which is to say, no, not really.
Not “badass” in the sense of jumping a motorcycle into a helicopter, bailing at the last moment, and landing safely in a rooftop swimming pool. But badass in the sense of atavistic, ceremonial, conducive to a kind of gravitas. I’m pretty sure there are more executions in movies than wasting-away-for-70-years-in-prison.
Metaethics is about coping with people disagreeing about what is and is not awesome.
It happens after all. Some people think that copying art so that more people can experience it is awesome. Others think that it’s so non-awesome that preventing the first group from doing it is awesome. Yet another group is indifferent to copying, but thinks the prevention is so non-awesome that preventing that is awesome. (This was the least mind-killing example I could think of. Please don’t derail the discussion arguing it. The point is that there are nontrivial numbers of human beings in all three camps.)
We also observe that the vast majority of humans agree on some questions of awesomeness. Being part of a loving family: awesome! Killing everyone you meet whose height in millimeters is a prime number: not awesome! Maybe this is just an aspect of being human: sharing so much DNA and history. Or maybe all these people are independently rediscovering some fundamental truth that no one can quite put their finger on, in much the same way that cultures around the world invented arithmetic long before Peano.
What can you do when you meet someone who disagrees with you and a lot is at stake? Well, there’s always force. Or if you have some ideas about awesomeness in common, you can argue based on those. Is there another option? In practice, people do change their idea of awesomeness after talking to other people.
The “people” who disagree don’t have to be separate beings. They can be subminds within a single human. The situation is pretty much the same (except you can’t really use force).
Metaethics is the art of trying to deal with this. It hasn’t gotten very far.
This is a good summary of the metaethics problem.
I disagree with your conclusion (that it hasn’t gotten very far). I think EY’s metaethics sequence handles it quite nicely. Specifically, joy in the merely good and the idea of morality as an abstract dynamic procedure encoded in human brains.
So is abortion right or wrong?
That’s applied ethics, not metaethics.
(I think it’s sad, but better than the alternatives.)
ಠ_ರೃ I disapprove of your example, sah!
ETA: Just joking with you I wanted to use that face.
We can use PURE WILLPOWER.
Great! This means that in order to develop an AI with a proper moral foundation, we just need to reduce the following statements of ethical guidance to predicate logic, and we’ll be all set:
Be excellent to each other.
Party on, dudes!
Is this the first time that movie’s ever been mentioned in the context of this site? Well done.
He does say that if you need more detailed knowledge you should read the metaethics sequence.
Thank you. For a short summary of the whole situation, this is fantastically non-confused and seems like a good intuition pump.
This idea is awesome.
I think “awesome” implies that something is extraordinary. I would hope you’d continue to enjoy parties with all your friends inside superintelligent starship-whales, but eventually you’d get used to them unless more awesomeness got added.
Whether everyone and every moment can be awesome—by their standards, not ours—is a worthwhile question. Even if the answer is ‘no’, how close can you get?
So “morality is awesome” implies that hedonic treadmills are part of morality, not problems? Not a big bullet to bite.
I’m not sure what you mean by hedonic treadmills not being a problem.
The article has caused me to realize that awesome has to be part of morality, but that too much of morality might be implicitly aimed at only achieving safety and comfort—with a smaller contingent saying that nothing is important but awesomeness—and that a well-conceived morality needs to have a good balance between the ordinary and the extraordinary.
I used to be someone who prioritized making the world as weird and interesting a place as possible. Then an incredibly important thing happened to me this year, at Burning Man:
Halfway through the week, a gigantic mechanical squid wandered by, spouting fire from its tentacles.
And I didn’t care. Because Burning Man is just uniformly weird all over the place and I had already (in 2 days) hedonic treadmilled on things-in-the-reference-class-of-giant-mechanical-squid-that-shoot-fire-from-their-tentacles.
(I had spent the previous 12 years of my life wishing rather specifically for a gigantic mechanical squid to wander by randomly. I was really pissed off when I didn’t care when it finally happened.)
“Giant parties in space whales” isn’t all that much different than “heaven involves lots of gold and niceness and nobody having to work ever again.” I’m near-certain that the ideal world is mostly ordinary (probably whatever form of ordinary can be most cheaply maintained) with punctuated moments of awesome that you can notice and appreciate, and then reminisce about after you return to normalcy.
Some of those punctuations should certainly involve giant space-whale parties. (And since giant space whale parties would probably blow my mind TOO much, you’d want fluctuations in what normalcy means, so you can have a period where normalcy is something closer to space whales, appreciating the space whales exactly enough when they arrive, but then have normalcy shift back to something simpler later on so you can also still appreciate the occasional space-sea-urchin as well.)
Hm.
So, there was a time when being cold at night and in the winter was pretty much the standard human experience in the part of the world I live in. Then we developed various technologies for insulating and heating, and now I take for granted that I can lounge around comfortably in my underwear in my living room during a snowstorm.
If I lived during that earlier period, and I shared your reasoning here, it seems to me I would conclude that the ideal world involved being uncomfortably cold throughout most of the winter. We would heat the house for parties, perhaps, and that would make parties awesome, and we could reminisce about that comfortable warmth after the party was over and we’d gone back to shivering in the cold under blankets. That way we could appreciate the warmth properly.
Have I understood your reasoning correctly, or have I missed something important?
Yes and no.
I deliberately wander around outside in the cold before I come in and drink hot chocolate. (In this case, strong cold is preferable, for somewhere between 30 minutes to 2 hours before additional cold stops making the experience nicer).
I don’t deliberately keep my house freezing in the winter, but when I’m in control of the temperature (not often, with roommates), I don’t turn the heat on until it’s actually interfering with my ability to do work. I know people who keep it even colder and they learn to live with it. I’m not sure what’s actually optimal—it may vary from person to person, but overall you probably aren’t actually benefiting yourself much if you keep your house in the 70s during winter.
Part of the key is variation, though. I also deliberately went to a giant party in the desert. It turns out that it is really hard to have fun in the desert because learning to properly hydrate yourself is hard work. But this was an interesting experience in its own right and yes, it was extremely nice to shower when I got back.
It’s probably valuable to have at least one element of your life be extremely “low quality” by modern western standards most of the time, and to vary which element that is.
(nods) I sympathize with that reasoning. Two things about it make me suspicious, though.
The first is that it seems to elide the difference between choosing to experience cold when doing so is nice, and not having such a choice. It seems to me that this difference is incredibly important.
The second is its calibration against “modern” standards.
I suspect that if I lived a hundred years ago I would similarly be sympathetic to the idea that it’s valuable to have at least one element of my life be extremely low quality by “modern” standards, and if I’m alive a hundred years from now I will similarly be sympathetic to it.
Which leads me to suspect that what’s going on here has more to do with variety being a valuable part of constructing an optimal environment than it does with ordinariness.
Hedonism is hard work. If you want to optimize your enjoyment of life, how hungry should you be before you eat?
Beats me.
If I actually wanted to answer this question, I would get into the habit of recording how much I’m enjoying my life along various dimensions I wanted to optimize my enjoyment of life along, then start varying how hungry I am before I eat and seeing how my enjoyment correlated with that over time.
I think the important missing piece here is that different drivers of happiness have different degrees of hedonic adjustment. Intuitively, I would expect simpler, evolutionarily older needs (approximately those lower on Maslow’s Pyramid) to adjust less.
This would imply that things like adequate food and sleep (I would guess that many of us are lacking in the latter), moderate temperatures, lack of sickness and injury, etc. should be maintained consistently, while things like mechanical squids should be more varied.
I’m not sure I understand what a “degree of hedonic adjustment” is, here. Also, I’m not sure how we got from talking about ideal worlds and awesomeness, to talking about happiness.
So, OK… being a little more concrete: “food” is very low on Maslow’s Pyramid, and “acceptance of facts” is very high. If I’ve understood you right, then in order to maximize happiness I should arrange my life so as to maintain a relatively consistent supply of food, but a highly variable supply of acceptance of facts.
Yes? No?
Odd. To me, getting completely used to the bizarre is one of the best part of living surrounded in awesomeness. Sure, breaking into a military base is kinda cool, but if you’re really awesome you do it twice without breaking a sweat and it takes a drugged chase through a minefield to faze you.
I came to a somewhat similar conclusion from watching movies with a lot of CGI. Even if the individual scenes are good, one after another gets dull. For that matter, Fantasia II didn’t work very well—for me, it was some sort of repetitive beauty overload.
Do you have a new priority, and if so, what is it?
I had other priorities at the time, they just shifted around a bit. Mostly focusing on effective altruism and humanist culture nowadays, with occasional random parties with Hide-and-seek-in-a-dark-apartment-building for novelty and fun.
I probably don’t do nearly enough exciting things right now, though.
I mean that once you get a magic pill that makes the thirtieth cake of a huge pile just as tasty as the first cake after three days of fasting, your reaction is to say “Boring!” and throw away the pill, not “Awesome.”.
I had a similar realization, but from a different angle. We don’t actually want to rid the world of clever supervillains, though it is preferable to letting supervillains do evil things. We want to maximize their cleverness-to-evil ratio. This is usually accomplished by making them fictional, but we can do better.
Heh, yes. The world needs more pranksters! Like these guys, for example.
Copyright cretins don’t understand the Internet; is this the one?
Yeah, pretty much.
I… approve of this for nonexperts and nonAI purposes. This might actually be pretty cool.
.
What do you mean?
e means that e hopes I write more half-baked posts in the middle of the night.
The latest seems to be doing well, too.
“Morality is awesome”, as a statement, scans like “consent is sexy” to me. Neither of these statements are true enough to be useful except as signalling or a personal goal (“I would like to find X thing I believe to be moral more awesome, so as to hack my brain to be more moral”).
In some cases of assessing morality/awesomeness or consent/sexiness correlation, one would sometimes have to lie about their awesomeness/sexiness preferences, and ignore those preferences in order to be a Perfectly Moral Good Individual who does not Like Evil Things.
It was secretly meant to be parsed the other way: “awesome is morality”. Sorry to confuse.
It’s not about signalling, it’s supposed to be an entirely personal thing.
It’s not about hacking your brain to find your current conception of morality more awesome either. It’s about flushing out your current conception of morality and rebuilding it from intuition without interference from Deep Wisdom or philosophical cached thoughts.
I assume the capitals are about signaling “goodness”. Sometimes one will have to lie about what is actually moral, in order to appear “moral”. The awesomeness basis is orthogonal to this, except that it seems to make the difference between what is actually good and “morality” more explicit.
I use Meaningful Initial Caps to communicate tone, but recognize that it’s nonstandard. Sorry for any confusion.
So as far as I can tell, you’re saying that “awesomeness” is a good basis for noticing what one’s brain currently considers moral, so it can then rebuild its definitions from there.
To extend the metaphor, “sexiness is (perceived by the intuitive parts of your brain, absent intervention from moralizing or abstract-cognition parts, as) consent” is a good thing to pay attention to, so you can know what that part of you actually cares about, which gives you new information that isn’t simply from choosing a side on the “Sexiness is about evopsych and golden ratios and trading meat for sex!” versus “Sexiness is about communication and queer theory praxis and bucking stereotypes!” battle.
What I’m curious about is:
What, then, do you rebuild your current conception of morality from? “Blowing up people, when I have vague evidence that they’re mooks of the Forces of Evil, by the dozens, is a bad idea, even though it seems awesome” seems like a philosophical cached thought to me. Do you think it’s something else?
Counterfactual terrorism—“but those mooks may not be mooks!”—isn’t a good tool for discerning actual bad ideas.
If I respond to “Consent is sexy!” by saying “But some of my brain doesn’t think that!”, noticing what those brainbits actually think, then change those brainbits to find sexy what I think of as “consent”, I’m not in a very different situation from the person who’s cheering blindly for consent being sexy. I just believe my premise more on the ground level, which will blind me to ways in which my preconceived notions of consent might suck.
In other words, both my intuitive models of awesomeness and my explicit models of morality might be lame in many invisible ways. What then?
I recognize the idiom (I’ve read most of c2 wiki, and other places where such is used), just unsure how to parse it in this case. The closest match of “Perfectly Moral Good Individual” is a noun emphasizing apparent nature, rather than true nature.
Or did you mean “ignore those preferences in order to be a Perfectly Moral Good Individual who does not Like Evil Things.” to be taken literally in the sense that you have to lie about something to be moral? That seems odd. Lie to who?
Yes, it’s a cached thought, but one that has a solid justification that is easy to port. I have no trouble with bringing those over. The ones the “switch to awesome” procedure targets are cached thoughts like “I am confused about morality”, or the various bits of Deep Wisdom that act as the explosive in the philosophical landmine.
(Though of course many people in this thread managed to port their confusion and standard antiwisdom as well.)
The fact that you were forced to explicitly import “this is a bad idea because of X and Y” shows that it is generally working.
Not sure what you are getting at here.
Metaethics is about how to decide how to decide if something is awesome.
Metaethics is describing the properties of the kind of theory that is capable of deciding if something is awesome.
This is why I claim to not know what it is… Everybody gets confused.
Is morality objective? Is morality universal? --> Metaethics.
When is lying wrong? When is stealing wrong? --> Ethics.
Meta-ethics is usually the process of thinking about awesomeness while getting confused by the facts that words feel like they have platonic essences, and that the architecture of the human goal system isn’t obvious and intuitive (things feel like they can be innately motivating). Good meta-ethics could be called dissolving those confusions. [Meta-ethics is best not practiced unless you’re confident you can get it right, since getting it wrong can lead you to absurd conclusions.]
IMHO, this equating of awesomeness with morality is very wrong. Take those soldiers who shot down a number of innocent civilians (check the video): it was pretty awesome for them, when it obviously isn’t awesome to others. Perhaps we have to respect some universally agreed-upon boundaries without giving exceptions.
I upvoted this post because it was clear, interesting, and relatively novel, but I’m concerned that it could tend to lead to what I’m going to call “narrative bias” even though I think that already means something.
Imagine someone who’s living a fairly mediocre life. Then, they get attacked—mugged or something. This isn’t fun for them, but they acquire a wicked keen scar, lots of support from their friends, and a Nemesis who gives them Purpose in Life. They spend a long time hunting their nemesis, acquiring skills to do so, etc. etc., and eventually there is a kickass showdown where the nemesis—fairly old by this point, wasn’t going to last long even absent violence—is taken down.
Or, for a simpler case: the death of Batman’s parents. Batman’s parents’ death was not particularly awesome, but Batman got really awesome as a result.
It is not moral to attack mediocre people or orphan impressionable rich children, regardless.
I dunno, maybe this is just me complaining about consequentialism-in-general again with a different vocabulary.
If it reliably resulted in more superheroes and Nobel-prize winners and such, I think it would be awesome (and moral) to traumatize kids.
If it’s not reliable, and only some crazy black swan, then not.
This does seem to be the substance of your example.
Agreed. Most people already agree that it is moral to force kids to go to school for years, which can be a traumatizing experience for some, and school is not even all that reliable at producing what it claims to want to produce, namely productive members of society.
Even if net awesomeness increases though, do awesome ends justify non-awesome means?
The point of having LW posts around is not to take their titles as axioms and work from there. My hardware, corrupted as it is, has no intrinsic interest in traumatizing children, so I don’t suspect my brain of doing something wrong when it tells me “if it were reliably determined that traumatizing children led to awesome outcome X, then we should traumatize children, especially considering we are in some sense already doing this.”
In other words, I think an argument against traumatizing children to make superheroes, if it were determined that this would actually work, is either also an argument against mandatory education or else has to explain why it isn’t suffering from status quo bias (why are we currently traumatizing children exactly the right amount?).
Edit: I’m not sure I said quite what I meant to say above. Let me say something different: the post you linked to is about how, when humans say things like “doing superficially bad thing X has awesome consequence Y, therefore we should do X” you should be skeptical because humans run on corrupted hardware which incentivizes them to justify certain kinds of superficially bad things. But what you’re being skeptical of is the premise “doing superficially bad thing X has awesome consequence Y,” or at least the implicit premise that it doesn’t have counterbalancing bad consequences. In this discussion nyan_sandwich and I are both taking this premise for granted.
(nods) I think so. Supposing that Bruce Wayne being Batman is a good thing, and supposing that his parents being killed was indispensable to him becoming Batman, then a consequentialist should endorse his parents having been killed. (Of course, we might ask why on earth we’re supposing those things, but that’s a different question.)
Disagree. P(parents killed | becoming like Batman) being high doesn’t imply that P(becoming like Batman | parents killed) is high.
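(A minimal numeric sketch of that asymmetry, with all probabilities invented purely for illustration; none of these numbers come from the discussion. Even if the killing were nearly necessary for the Batman outcome, the outcome can still be wildly unlikely given the killing.)

```python
# Hypothetical numbers only -- picked to illustrate the asymmetry, not estimated.
p_killed = 0.01                 # P(parents killed)
p_batman = 1e-8                 # P(someone becomes Batman-like)
p_killed_given_batman = 0.99    # assume the origin story is almost essential

# Bayes' rule: P(Batman | killed) = P(killed | Batman) * P(Batman) / P(killed)
p_batman_given_killed = p_killed_given_batman * p_batman / p_killed
print(p_batman_given_killed)    # ~1e-6: still astronomically unlikely
```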
I agree with your assertion, but I suspect we’re talking past each other, probably because I was cryptic.
Let me unpack a little, and see if you still disagree.
There’s 30-year-old Bruce over there, and we have established (somehow) that he is Batman, that this is a good thing, and that it would not have happened had his parents not been killed. (Further, we have established (somehow) that his parents’ continued survival would not have been an even better thing.)
And the question arises, was it a good thing that his parents were killed? (Not, “could we have known at the time that it was a good thing”, merely “was it, in retrospect, a good thing?”)
I’m saying a consequentialist answers “yes.”
If your disagreement still applies, then I haven’t followed your reasoning, and would appreciate it if you unpacked it for me.
As a consequentialist, I think the only good reason to judge past actions is to help make future decisions, so to me the question “was it a good thing that his parents were killed?” cashes out to “should we adopt a general policy of killing people’s parents?” and the answer is no. (I think Alicorn agrees with me.)
It seems to me like a bad idea to judge past actions on the basis of their observed results; this leaves you too susceptible to survivorship bias. Past actions should be judged on the basis of their expected results. If I adopt a bad investment strategy but end up making a lot of money anyway, that doesn’t imply that my investment strategy was a good idea.
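(To make the survivorship-bias point concrete, here is a toy simulation; the strategy, payoffs, and numbers are all made up for illustration. The strategy is bad in expectation, yet a visible minority of simulated investors end up ahead, and judging only those survivors by their observed results would wrongly call the strategy good.)

```python
import random

random.seed(0)

# Hypothetical losing strategy: stake 1 unit per bet, 5% chance of a 10x payoff.
# Expected value per bet is 0.05 * 10 - 1 = -0.5 units, i.e. a bad idea ex ante.
def bad_bet():
    return 10.0 if random.random() < 0.05 else 0.0

n_investors, n_bets = 10_000, 20
final = [sum(bad_bet() - 1.0 for _ in range(n_bets)) for _ in range(n_investors)]

print(f"mean outcome per investor: {sum(final) / n_investors:.1f} units")   # clearly negative
print(f"investors who came out ahead anyway: {sum(w > 0 for w in final)}")  # a nontrivial minority
```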
OK, that’s clear; thanks.
I of course agree that adopting a general policy of killing people’s parents without reference to their attributes is a bad idea. It would most likely have bad consequences, after all. (Also, it violates rules against killing, and it’s something virtuous people don’t do.)
I agree that for a consequentialist, the only good reason to judge past actions is to help make future decisions.
I disagree that the question “was it a good thing that his parents were killed?” cashes out to “should we adopt a general policy of killing people’s parents?” I would say, rather, that it cashes out to “should we adopt a general policy of killing people who are similar to Bruce Wayne’s parents at the moment of their death?” (“People’s parents” is one such set, but not the only one, and I see no reason to privilege it.)
And I would say the consequentialist’s answer is “yes, for some kinds of similarity; no, for others.” (Which kinds of similarity? Well, we may not know yet. That requires further study.)
My answer’s still no because of my first comment. The death of his parents is only one factor involved in Bruce Wayne’s becoming Batman. In Batman Begins, for example, another important factor is his training with the League of Shadows. The latter is not a predictable consequence of the former.
Ah, I see your point. Sure, that’s true.
This may be a minor nit, but… is this forum collectively anti-orgasmium, now?
Because being orgasmium is by definition more pleasant than not being orgasmium. Refusing to become orgasmium is a hedonistic utilitarian mistake, full stop.[1] (Well, that’s not actually true, since as a human you can make other people happier, and as orgasmium you presumably cannot. But it is at least on average a mistake to refuse to become orgasmium; I would argue that it is virtually always a mistake.)
[1] We’re all hedonistic utilitarians, right?
… no?
http://lesswrong.com/lw/lb/not_for_the_sake_of_happiness_alone/
Interesting stuff. Very interesting.
Do you buy it?
That article is arguing that it’s all right to value things that aren’t mental states over a net gain in mental utility.[1] If, for instance, you’re given the choice between feeling like you’ve made lots of scientific discoveries and actually making just a few scientific discoveries, it’s reasonable to prefer the latter.[2]
Well, that example doesn’t sound all that ridiculous.
But the logic that Eliezer is using is exactly the same logic that drives somebody who’s dying of a horrible disease to refuse antibiotics, because she wants to keep her body natural. And this choice is — well, it isn’t wrong, choices can’t be “wrong” — but it reflects a very fundamental sort of human bias. It’s misguided.
And I think that Eliezer’s argument is misguided, too. He can’t stand the idea that scientific discovery is only an instrument to increase happiness, so he makes it a terminal value just because he can. This is less horrible than the hippie who thinks that maintaining her “naturalness” is more important than avoiding a painful death, but it’s not much less dumb.
[1] Or a net gain in “happiness,” if we don’t mind using that word as a catchall for “whatever it is that makes good mental states good.”
[2] In this discussion we are, of course, ignoring external effects altogether. And we’re assuming that the person who gets to experience lots of scientific discoveries really is happier than the person who doesn’t, otherwise there’s nothing to debate. Let me note that in the real world, it is obviously possible to make yourself less happy by taking joy-inducing drugs — for instance if doing so devalues the rest of your life. This fact makes Eliezer’s stance seem a lot more reasonable than it actually is.
Very well, let’s back up Eliezer’s argument with some hard evidence. Fortunately, lukeprog has already written a brief review of the neuroscience on this topic. The verdict? Eliezer is right. People value things other than happiness and pleasure. The idea that pleasant feelings are the sole good is an illusion created by the fact that the signals for wanting something and getting pleasure from it are comingled on the same neurons.
So no, Eliezer is not misguided. On the contrary, the evidence is on his side. People really do value more things than just happiness. If you want more evidence, consider this thought experiment Alonzo Fyfe cooked up:
Damn but that’s a good example. Is it too long to submit to the Rationality Quotes thread?
You can argue that having values other than hedonistic utility is mistaken in certain cases. But that doesn’t imply that it’s mistaken in all cases.
Choices can be wrong, and that one is. The hippie is simply mistaken about the kinds of differences that exist between “natural” and “non-natural” things, and about how much she would care about those differences if she knew more chemistry and physics, and presumably if she were less mistaken in her expectations of what happens “after you die”.
As for relating this to Eliezer’s argument, a few examples of wrong non-subjective-happiness values are no demonstration that subjective happiness is the only human terminal value. Especially given the introspective and experimental evidence that people care about certain things that aren’t subjective happiness.
I see absolutely no reason that people shouldn’t be allowed to decide this. (Where I firmly draw the line is people making decisions for other people on this kind of basis.)
I’m not arguing that people shouldn’t decide that. I’m not arguing any kind of “should.”
I’m just saying, if you do decide that, you’re kind of dumb. And by analogy Eliezer was being kind of dumb in his article.
Okay. What do you mean by “dumb”?
In this case: letting bias and/or intellectual laziness dominate your decision-making process.
So if I wanted to respond to the person dying of a horrible disease who is refusing antibiotics, I might say something like “you are confused about what you actually value and about the meaning of the word ‘natural.’ If you understood more about science and medicine and successfully resolved the relevant confusions, you would no longer want to make this decision.” (I might also say something like “however, I respect your right to determine what kind of substances enter your body.”)
I suppose you want me to say that Eliezer is also confused about what he actually values, namely that he thinks he values science but he only values the ability of science to increase human happiness. (I don’t think he’s confused about the meaning of any of the relevant words.)
I disagree. One reason to value science, even from a purely hedonistic point of view, is that science corrects itself over time, and in particular gives you better ideas about how to be a hedonist over time. If you wanted to actually design a process that turned people into orgasmium, you’d have to science a lot, and at the end of all that sciencing there’s no guarantee that the process you’ve come up with is hedonistically optimal. Maybe you could increase the capacity of the orgasmium to experience happiness further if you’d scienced more. Once you turn everyone into orgasmium, nobody’s around to science anymore, so nobody’s around to find better processes for turning people into orgasmium (or, science forbid, find better ethical arguments against hedonistic utilitarianism).
In short, the capacity for self-improvement is lost, and that would be terrible regardless of what direction you’re trying to improve towards.
I surmise from your comments that you may not be aware that Eliezer’s written quite a bit on this matter; http://wiki.lesswrong.com/wiki/Complexity_of_value is a good summary/index (http://lesswrong.com/lw/l3/thou_art_godshatter/ is one of my favorites). There’s a lot of stuff in there that is relevant to your points.
However, you asked me what I think, so here it is...
The wording of your first post in this thread seems telling. You say that “Refusing to become orgasmium is a hedonistic utilitarian mistake, full stop.”
Do you want to become orgasmium?
Perhaps you do. In that case, I direct the question to myself, and my answer is no: I don’t want to become orgasmium.
That having been established, what could it mean to say that my judgment is a “mistake”? That seems to be a category error. One can’t be mistaken in wanting something. One can be mistaken about wanting something (“I thought I wanted X, but upon reflection and consideration of my mental state, it turns out I actually don’t want X”), or one can be mistaken about some property of the thing in question, which affects the preference (“I thought I wanted X, but then I found out more about X, and now I don’t want X”); but if you’re aware of all relevant facts about the way the world is, and you’re not mistaken about what your own mental states are, and you still want something… labeling that a “mistake” seems simply meaningless.
On to your analogy:
If someone wants to “keep her body natural”, then conditional on that even being a coherent desire[1], what’s wrong with it? If it harms other people somehow, then that’s a problem… otherwise, I see no issue. I don’t think it makes this person “kind of dumb” unless you mean that she’s actually got other values that are being harmed by this value, or is being irrational in some other ways; but values in and of themselves cannot be irrational.
This construal is incorrect. Say rather: Eliezer does not agree that scientific discovery is only an instrument to increase happiness. Eliezer isn’t making scientific discovery a terminal value; it is a terminal value for him. Terminal values are given.
Why are we doing that...? If it’s only about happiness, then external effects should be irrelevant. You shouldn’t need to ignore them; they shouldn’t affect your point.
[1] Coherence matters: the difference between your hypothetical hippie and Eliezer the potential-scientific-discoverer is that the hippie, upon reflection, would realize (or so we would like to hope) that “natural” is not a very meaningful category, that her body is almost certainly already “not natural” in at least some important sense, and that “keeping her body natural” is just not a state of affairs that can be described in any consistent and intuitively correct way, much less one that can be implemented. That, if anything, is what makes her preference “dumb”. There are no analogous failures of reasoning behind Eliezer’s preference to actually discover things instead of just pretend-discovering, or my preference to not become orgasmium.
I have never used the word “mistake” by itself. I did say that refusing to become orgasmium is a hedonistic utilitarian mistake, which is mathematically true, unless you disagree with me on the definition of “hedonistic utilitarian mistake” (= an action which demonstrably results in less hedonic utility than some other action) or of “orgasmium” (= a state of maximum personal hedonic utility).[1]
I point this out because I think you are quite right: it doesn’t make sense to tell somebody that they are mistaken in “wanting” something.
Indeed, I never argued that the dying hippie was mistaken. In fact I made exactly the same point that you’re making, when I said:
What I said was that she is misguided.
The argument I was trying to make was, look, this hippie is using some suspect reasoning to make her decisions, and Eliezer’s reasoning looks a lot like hers, so we should doubt Eliezer’s conclusions. There are two perfectly reasonable ways to refute this argument: you can (1) deny that the hippie’s reasoning is suspect, or (2) deny that Eliezer’s reasoning is similar to hers.
These are both perfectly fine things to do, since I never elaborated on either point. (You seem to be trying option 1.) My comment can only possibly convince people who feel instinctively that both of these points are true.
All that said, I think that I am meaningfully right — in the sense that, if we debated this forever, we would both end up much closer to my (current) view than to your (current) view. Maybe I’ll write an article about this stuff and see if I can make my case more strongly.
[1] Please note that I am ignoring the external effects of becoming orgasmium. If we take those into account, my statement stops being mathematically true.
I don’t think those are the only two ways to refute the argument. I can think of at least two more:
(3) Deny the third step of the argument’s structure — the “so we should doubt Eliezer’s conclusions” part. Analogical reasoning applied to surface features of arguments is not reliable. There’s really no substitute for actually examining an argument.
(4) Disagree that construing the hippie’s position as constituting any sort of “reasoning” that may or may not be “suspect” is a meaningful description of what’s going on in your hypothetical (or at least, the interesting aspect of what’s going on, the part we’re concerned with). The point I was making is this: what’s relevant in that scenario is that the hippie has “keeping her body natural” as a terminal value. If that’s a coherent value, then the rest of the reasoning (“and therefore I shouldn’t take this pill”) is trivial and of no interest to us. Now it may not be a coherent value, as I said; but if it is — well, arguing with terminal values is not a matter of poking holes in someone’s logic. Terminal values are given.
As for your other points:
It’s true, you didn’t say “mistake” on its own. What I am wondering is this: ok, refusing to become orgasmium fails to satisfy the mathematical requirements of hedonistic utilitarianism.
But why should anyone care about that?
I don’t mean this as a general, out-of-hand dismissal; I am asking, specifically, why such a requirement would override a person’s desires:
Person A: If you become orgasmium, you would feel more pleasure than you otherwise would.
Person B: But I don’t want to become orgasmium.
Person A: But if you want to feel as much pleasure as possible, then you should become orgasmium!
Person B: But… I don’t want to become orgasmium.
I see Person B’s position as being the final word on the matter (especially if, as you say, we’re ignoring external consequences). Person A may be entirely right — but so what? Why should that affect Person B’s judgments? Why should the mathematical requirements behind Person A’s framework have any relevance to Person B’s decisions? In other words, why should we be hedonistic utilitarians, if we don’t want to be?
(If we imagine the above argument continuing, it would develop that Person B doesn’t want to feel as much pleasure as possible; or, at the least, wants other things too, and even the pleasure thing he wants only given certain conditions; in other words, we’d arrive at conclusions along the lines outlined in the “Complexity of value” wiki entry.)
(As an aside, I’m still not sure why you’re ignoring external effects in your arguments.)
If I become orgasmium, then I would cease to exist, and the orgasmium, which is not me in any meaningful sense, will have more pleasure than I otherwise would have. But I don’t care about the pleasure of this orgasmium, and certainly would not pay my existence for it.
The difficulty here, of course, is that Person B is using a cached heuristic that outputs “no” for “become orgasmium”; and we cannot be certain that this heuristic is correct in this case. Just as Person A is using the (almost certainly flawed) heuristic “feel as much pleasure as possible”, which outputs “yes” for “become orgasmium”.
Why do you think so?
What do you mean by “correct”?
Edit: I think it would be useful for any participants in discussions like this to read Eliezer’s Three Worlds Collide. Not as fictional evidence, but as an examination of the issues, which I think it does quite well. A relevant quote, from chapter 4, “Interlude with the Confessor”:
Humans are not perfect reasoners.
[Edited for clarity.]
I give a decent probability to the optimal order of things containing absolutely zero pleasure. I assign a lower, but still significant, probability to it containing an infinite amount of pain in any given subjective interval.
Is this intended as a reply to my comment?
Reply to the entire thread, really.
Fair enough.
Is this intended as a reply to my comment?
… why? Humans definitely appear to want to avoid pain and enjoy pleasure. I suppose I can see pleasure being replaced with “better” emotions, but I’m really baffled regarding the pain. Is it to do with punishment? Challenge? Something I haven’t thought of?
Agreed, pretty much. I said significant probability, not big. I’m not good at translating anticipations into numbers, but no more than 5%. Mostly based on extreme outside view, as in “something I haven’t thought of”.
Oh, right. “Significance” is subjective, I guess. I assumed it meant, I don’t know, >10% or whatever.
No. Most of us are preferentists or similar. Some of us are not consequentialists at all.
For as long as I’ve been here, which admittedly isn’t all that long.
Here’s your problem.
I’m anti-orgasmium, but not necessarily anti-experience-machine. I’m approximately a median-preference utilitarian. (This is more descriptive than normative)
No thanks. Awesomeness is more complex than can be achieved with wireheading.
I can’t bring myself to see the creation of an awesomeness pill as the one problem of such huge complexity that even a superintelligent agent can’t solve it.
I have no doubt that you could make a pill that would convince someone that they were living an awesome life, complete with hallucinations of rocket-powered tyrannosaurs, and black leather lab coats.
The trouble is that merely hallucinating those things, or merely feeling awesome is not enough.
The average optimizer probably has no code for experiencing utility; it only evaluates the utility of actions under consideration. The concept of valuing (or even having) internal experience is particular to humans, and is in fact only one of the many things that we care about. Is there a good argument for why internal experience ought to be the only thing we care about? Why should we forget all the other things that we like and focus solely on internal experience (and possibly altruism)?
Can’t I simulate everything I care about? And if I can, why would I care about what is going on outside of the simulation, any more than I care now about a hypothetical asteroid on which the “true” purpose of the universe is written? Hell, if I can delete the fact from my memory that my utility function is being deceived, I’d gladly do so—yes, it will bring some momentous negative utility, but it would be a teensy bit greatly offset by the gains, especially stretched over a huge amount of time.
Now that I think about it...if, without an awesomeness pill, my decision would be to go and do battle in an eternal Valhalla where I polish my skills and have fun, and an awesomeness pill brings me that, except maybe better in some way I wouldn’t normally have thought of...what exactly is the problem here? The image of a brain with the utility slider moved to the max is disturbing, but I myself can avoid caring about that particular asteroid. An image of a universe tiled with brains storing infinite integers is disturbing; one of a universe tiled with humans riding rocket-powered tyrannosaurs is great—and yet, they’re one and the same; we just can’t intuitively penetrate the black box that is the brain storing the integer. I’d gladly tile the universe with awesome.
If I could take an awesomeness pill and be whisked off somewhere where my body would be taken care of indefinitely, leaving everything else as it is, maybe I would decline; probably I wouldn’t. Luckily, once awesomeness pills become available, there probably won’t be starving children, so that point seems moot.
[PS.] In any case, if my space fleet flies by some billboard saying that all this is an illusion, I’d probably smirk, I’d maybe blow it up with my rainbow lasers, and I’d definitely feel bad about all those other fellas whose space fleets are a bit less awesome and significantly more energy-consuming than mine (provided our AI is still limited by, at the very least, entropy; meaning limited in its ability to tile the world to infinity; if it can create the same amount of real giant robots as it can create awesome pills, it doesn’t matter which option is taken), all just because they’re bothered by silly billboards like this. If I’m allowed to have that knowledge and the resulting negative utility, that is.
[PPS.] I can’t imagine how an awesomeness pill would max my sliders for self-improvement, accomplishment, etc without actually giving me the illusion of doing those things. As in, I can imagine feeling intense pleasure; I can’t imagine feeling intense achievement separated from actually flying—or imagining that I’m flying—a spaceship—it wouldn’t feel as fulfilling, and it makes no sense that an awesomeness pill would separate them if it’s possible not to. It probably wouldn’t have me go through the roundabout process of doing all the stuff, and it probably would max my sliders even if I can’t imagine it, to an effect much different from the roundabout way, and by definition superior. As long as it doesn’t modify my utility function (as long as I value flying space ships), I don’t mind.
This is a key assumption. Sure, if I assume that the universe is such that no choice I make affects the chances that a child I care about will starve—and, more generally, if I assume that no choice I make affects the chances that people will gain good stuff or bad stuff—then sure, why not wirehead? It’s not like there’s anything useful I could be doing instead.
But some people would, in that scenario, object to the state of the world. Some people actually want to be able to affect the total amount of good and bad stuff that people get.
And, sure, the rest of us could get together and lie to them (e.g., by creating a simulation in which they believe that’s the case), though it’s not entirely clear why we ought to. We could also alter them (e.g., by removing their desire to actually do good) but it’s not clear why we ought to do that, either.
Do you mean to distinguish this from believing that you have flown a spaceship?
Don’t we have to do it (lying to people) because we value other people being happy? I’d rather trick them (or rather, let the AI do so without my knowledge) than have them spend a lot of time angsting about not being able to help anyone because everyone was already helped. (If there are people who can use your help, I’m not about to wirehead you though)
Yes. Thinking about simulating achievement got me confused about it. I can imagine intense pleasure or pain. I can’t imagine intense achievement; if I just got the surge of warmth I normally get, it would feel wrong, removed from flying a spaceship. Yet, that doesn’t mean that I don’t have an achievement slider to max; it just means I can’t imagine what maxing it indefinitely would feel like. Maxing the slider leading to hallucinations about performing activities related to achievement seems too roundabout—really, that’s the only thing I can say; it feels like it won’t work that way. Can the pill satisfy terminal values without making me think I satisfied them? I think this question shows that the sentence before it is just me being confused. Yet I can’t imagine how an awesomeness pill would feel, hence I can’t dispel this annoying confusion.
[EDIT] Maybe a pill that simply maxes the sliders would make me feel achievement, but without flying a spaceship, hence making it incomplete, hence forcing the AI to include a spaceship hallucinator. I think I am/was making it needlessly complicated. In any case, the general idea is that if we are all opposed to just feeling intense pleasure without all the other stuff we value, then a pill that gives us only intense pleasure is flawed and would not even be given as an option.
Regarding the first bit… well, we have a few basic choices:
- Change the world so that reality makes them happy
- Change them so that reality makes them happy
- Lie to them about reality, so that they’re happy
- Accept that they aren’t happy
If I’m understanding your scenario properly, we don’t want to do the first because it leaves more people worse off, and we don’t want to do the last because it leaves us worse off. (Why our valuing other people being happy should be more important than their valuing actually helping people, I don’t know, but I’ll accept that it is.)
But why, on your view, ought we lie to them, rather than change them?
I attach negative utility to getting my utility function changed—I wouldn’t change myself to maximize paperclips. I also attach negative utility to getting my memory modified—I don’t like the normal decay that is happening even now, but far worse is getting a large swath of my memory wiped. I also dislike being fed false information, but that is by far the least negative of the three, provided no negative consequences arise from the false belief. Hence, I’d prefer being fed false information to having my memory modified, and either of those to being made to stop caring about other people altogether. There is an especially big gap between the last one and the former two.
Thanks for summarizing my argument. I guess I need to work on expressing myself so I don’t force other people to work through my roundaboutness :)
Fair enough. If you have any insight into why your preferences rank in this way, I’d be interested, but I accept that they are what they are.
However, I’m now confused about your claim.
Are you saying that we ought to treat other people in accordance with your preferences of how to be treated (e.g., lied to in the present rather than having their values changed or their memories altered)? Or are you just talking about how you’d like us to treat you? Or are you assuming that other people have the same preferences you do?
For the preference ranking, I guess I can mathematically express it by saying that any priority change leads to me doing stuff that would be utility+ at the time, but utility- or utilityNeutral (and since I could be spending the time generating utility+ instead, even neutral is bad) now. For example, if I could change my utility function to eating babies, and babies were plentiful, this option would result in a huge source of utility+ after the change. Which doesn’t change the fact that it also means I’d eat a ton of babies, which makes the option a huge source of utility- currently; I wouldn’t want to do something that would lead to me eating a ton of babies. If I greatly valued generating as much utility+ for myself at any moment as possible, I would take the plunge; however, I look at the future, decide not to take what is currently utility- for me, and move on.

Or maybe I’m just making up excuses to refuse to take a momentary discomfort for eternal utility+; after all, I bet someone having the time of his life eating babies would laugh at me and have more fun than me. The inconsistency here is that I avoid the utility- choice when it comes to changing my terminal values, but I have no issue taking the utility- choice when I decide I want to be in a simulation. Guess I don’t value truth that much.

I find that changing my memories leads to similar results as changing my utility function, but on a much, much smaller scale—after all, they are what make up my beliefs, preferences, myself as a person. Changing them at all changes my belief system and preferences; but that’s happening all the time. Changing them on a large scale is significantly worse in terms of affecting my utility function—it can’t change my terminal values, so still far less bad than directly making me interested in eating babies, but still negative. Getting lied to is just bad, with no relation to the above two, and weakest in importance.
My gut says that I should treat others as I want them to treat me. Provided a simulation is a bit more awesome, or comparably awesome but more efficient, I’d rather take that than the real thing. Hence, I’d want to give others what I myself prefer (in terms of ways to achieve preferences) - not because they are certain to agree that being lied to is better than angsting about not helping people, but because my way is either better or worse than theirs, and I wouldn’t believe in my way unless I thought it better. Of course, I am also assuming that truth isn’t a terminal value to them. In the same way, since I don’t want my utility function changed, I’d rather not do it to them.
I don’t understand this. If your utility function is being deceived, then you don’t value the true state of affairs, right? Unless you value “my future self feeling utility” as a terminal value, and this outweighs the value of everything else …
No, this is more about deleting a tiny discomfort—say, the fact that I know that all of it is an illusion; I attach a big value to my memory and especially disagree with sweeping changes to it, but I’ll rely on the pill and thereby the AI to make the decision what shouldn’t be deleted because doing so would interfere with the fulfillment of my terminal values and what can be deleted because it brings negative utility that isn’t necessary.
Intellectually, I wouldn’t care whether I’m the only drugged brain in a world where everyone is flying real spaceships. I probably can’t fully deal with the intuition telling me I’m drugged though. It’s not highly important—just a passing discomfort when I think about the particular topic (passing and tiny, unless there are starving children). Whether it’s worth keeping around so I can feel in control and totally not drugged and imprisoned... I guess that depends on the circumstances.
So you’re saying that your utility function is fine with the world-as-it-is, but you don’t like the sensation of knowing you’re in a vat. Fair enough.
My first thought was that an awesomeness pill would be a pill that makes ordinary experience awesome. Things fall down. Reliably. That’s awesome!
And in fact, that’s a major element of popular science writing, though I don’t know how well it works.
Psychedelic drugs already exist...
One time my roommate ate shrooms, and then he spent about 2 hours repeatedly knocking over an orange juice jug, and then picking it up again. It was bizarre. He said “this is the best thing ever” and was pretty sincere. It looked pretty silly from the outside though.
I dunno, I feel like judgments of awesomeness are heavily path-dependent and vary a lot from person to person. I don’t hold out a lot of hope for the project of Coherent Extrapolated Volition, but I hold out even less for Coherent Extrapolated Awesomeness. So the vision of the future is people pushing back and forth, the chuunibyous trying to fill the world with dark magic rituals and the postmodernists wincing at their unawesome sincerity and trying to paint everything with as many layers of awesome irony as they can.
Also, from a personal perspective, I rather like quiet comfort, although I cannot really say it’s “awesome”. You can say, it doesn’t matter what I like, it only matters what’s awesome, but to hell with your fascist notions.
This is the unextrapolated awesomeness. I think we would tend to agree on much more of what is awesome if we extrapolated.
This is a serious bug. Non-exciting and comfortable can be awesome, even though the word doesn’t bring it to mind. Thanks.
Reference for those not keeping up with current anime.
Interesting rephrasing of morality...but would it still hold if I asked you to taboo “awesome”?
No.
If I taboo “awesome” directly, I’d miss something. (complexity of value)
The point of taboo is usually to remove a problematic concept that has too much attached confusion, or to look inside a black box.
The point of saying “Awesome” is actually the opposite: it was deliberately chosen for its lack of meaning (points 1 and 4), and to wrap up everything we know about morality (that we go insane if we look at it at the wrong angle) into a convenient black box that we don’t look inside, but which works anyway (points 2, 3, 5).
But again,
In other words “taboo awesome” is a redirect to the metaethics sequence.
This is interesting. Can we come up with a punchy name for good uses of “reverse tabooing”?
One reason I particularly like the choice of the word “awesome,” which is closely related to and maybe just a rephrasing of your first point, is that it is much less likely to trigger redirects to cached thoughts that sound deep. Moreover, since “awesome” is not itself a word that sounds deep, talking about morality using the language of awesomeness inoculates against the trying-to-sound-deep failure mode.
“blackboxing”?
As in “I blackboxed metaethics by using the word ‘awesome’”.
Nice, I wish I could blackbox in Newcomb’s problem (like Eliezer does).
How about “Plancheting”?
A ‘planchet’ is a blank coin, ready to be minted—which seems analogous to what’s being done with these words (and has the delightful parallelism of metaphor with “coining a phrase”).
The downside of this is that most people won’t know what “planchet” means. The advantage of “taboo” is that it’s already intuitive what is meant when you hear it.
That’s a good point.
“Reverse tabooing” seems like a fine phrase. It’s at least somewhat clear what it means, and it doesn’t come pre-loaded with distracting connotations. I think it would be difficult to improve upon it.
Isn’t that what “naming” means in the first place?
Heh, good point(s) there. I never thought removing meaning would actually make an argument clearer, but somehow it did.
Where did the idea come from that only consequentialists thought about consequences? If I’m a deontologist, and I think the rules include “Don’t murder,” I’m still allowed to notice that a common consequence of pointing a loaded gun at a person and pulling the trigger is “murder.”
Don’t consequentialists think that only consequences matter?
That’s the idea.
EDIT:
If you follow this line of reasoning as far as it goes, I think you find that there isn’t really any good reason to distinguish “I caused this” and “I failed to prevent this”. In other words, as soon as you allow some consequentialism into your moral philosophy, it takes over the whole thing. I think...
There’s something screwy going on in your reasoning. Imagine the following closing argument at a murder trial:
Edit 1/11: Because it isn’t clear: No reasonable deontologist would find this reasoning persuasive. nyan is at risk of strawmanning the opposing position—see our further conversation below.
This is a result of screwy reasoning within Deontology, not within nyan_sandwich’s post.
Sounds like a shoo-in for an insanity plea...
I think perhaps there’s something screwy going on with that person’s reasoning.
As a good consequentialist, I would not take such an argument seriously.
Neither should a good deontologist.
Ok, I wonder what we are even saying here.
I claim that consequentialism is correct because it tends to follow from some basic axioms like moral responsibility being contagious backwards through causality, which I accept.
I further claim that deontologists, if they accept such axioms (which you claim they do), degenerate into consequentialists given sufficient reflection. (To be precise, there exists some finite sequence of reflection such that a deontologist will become a consequentialist).
I will note that though consequentialism is a fine ideal theory, at some point you really do have to implement a procedure, which means in practice, all consequentialists will be deontologists. (Possibly following the degenerate rule “see the future and pick actions that maximize EU”, though they will usually have other rules like “Don’t kill anyone even if it’s the right thing to do”). However, their deontological procedure will be ultimately justified by consequentialist reasoning.
What do you think of that?
My main objection is that this further claim wasn’t really argued in the original point. It was simply assumed—and it’s just too controversial a claim to assume. The net effect of your assumption was an inflationary use of the term—if consequentialist means what you said, all the interesting disputants in moral philosophy are consequentialists, whether they realize it or not.
It might be the case that your proposition is correct, and asserted non-consequentialists are just confused. I was objecting to assuming this when it was irrelevant to your broader point about the advantages of the label “awesome” in discussing moral reasoning. The overall point you were trying to make is equally insightful whether your further assertion is true or not.
Agreed. This is usually called “rule utilitarianism” – the idea that, in practice, it actually conserves utils to just make a set of basic rules and follow them, rather than recalculating from scratch the utility of any given action each time you make a decision. Like, “don’t murder” is a pretty safe one, because it seems like in the vast majority of situations taking a life will have a negative utility. However, it’s still worth distinguishing this sharply from deontology, because if you ever did calculate and find a situation in which your rule resulted in lesser utility – like pushing the fat man in front of the train – you’d break the rule. The rule is an efficiency-approximation rather than a fundamental posit.
The point of rule utilitarianism isn’t only to save computational resources. It’s also that in any particular concrete situation we’re liable to have all sorts of non-moral motivations pulling at us, and those are liable to “leak” into whatever moral calculations we try to do and produce biased answers. Whereas if we work out ahead of time what our values are and turn them into sufficiently clear-cut rules (or procedures, or something), we don’t have that option. Hence “don’t kill anyone even if it’s the right thing to do”, as nyan_sandwich puts it—I think quoting someone else, maybe EY.
(A tangential remark, which you should feel free to ignore: The above may make it sound as if rule utilitarianism is only appropriate for those whose goal is to prioritize morality above absolutely everything else, and therefore for scarcely anyone. I think this is wrong, for two reasons. Firstly, the values you encode into those clear-cut rules don’t have to be only of the sort generally called “moral”. You can build into them a strong preference for your own welfare over others’, or whatever. Secondly, you always have the option of working out what your moral principles say you should do and then doing something else; but the rule-utilitarian approach makes it harder to do that while fooling yourself into thinking you aren’t.)
Isn’t that the awesomest goal? ,:-.
But the moment you allow sub-rules as exceptions to the general rules like in the quoted part above, you set the ground for Rule-consequentialism to collapse into Act-consequentialism via an unending chain of sub-rules. See Lyons, 1965.
Further, as a consequentialist, you have to think about the effects of accepting a decision-theory which lets you push the fat man onto the train tracks and what that means for the decision processes of other agents as well.
Let’s use examples to tease apart the difference.
A consequentialist says: “Death is bad. Person A could have donated their income and saved that child, but they didn’t. The consequence was death. Person B killed a child with a gunshot. The consequence was death. These two situations are equivalent.”
The deontologist says “It was not the duty of person A to save a stranger-child. However, it is the duty of every person not to murder children. Person B is worse than person A, because he did not do his duty.”
The virtue ethicist says—“By his actions, we deduce that Person A is either unaware of the good he could have done, or he lacks the willpower, or he lacks the goodness to save the child. We deduce that person B is dangerous, impulsive, and should be locked up.”
I think the crux of the divide is that virtue and deontological ethics are focused on evaluating whether an agent’s actions were right or wrong, whereas consequentialist ethics is focused on creating the most favorable final outcome.
Personally, I use virtue ethics for evaluating whether my or another’s action was right or wrong, and use consequentialism when deciding which action to take.
Only an extremely nearsighted consequentialist would say this. Whoever’s saying this is ignoring lots of other consequences of Person A and Person B’s behavior. First, Person A has to take an opportunity cost from donating their income to save children’s lives. Person B doesn’t take such an opportunity cost. Second, Person B pays substantial costs for killing children in some combination of possible jail time, lost status, lost allies, etc. Third, the fact that Person B decided to murder a child is strong evidence that Person B is dangerous, impulsive, and should be locked up in the sense that locking up Person B has the best consequences. Etc.
What’s the difference?
Truth be told at the end of the day those three moral systems are identical, if examined thoroughly enough. This type of discussion is only meaningful if you don’t think about it too hard...once the words get unpacked the whole thing dissolves.
But if we don’t think too hard about it, there is a difference. Which moral philosophy someone subscribes to describes what their “first instinct” is when it comes to moral questions.
From Wikipedia:
But—if the deontologist thinks hard enough, they will conclude that sometimes lying is okay if it fulfills their other duties. Maximizing duty fulfillment is equivalent to maximizing moral utility.
When a virtue ethicist is judging a person, they will take the intended consequences of an action into account. When a virtue ethicist is judging their own options, they are looking at the range of their own intended consequences...once again, maximizing moral utility.
And, as you just illustrated, the farsighted consequentialist will in fact take both intention and societal repercussions into account, mimicking the virtue and deontological ethicists, respectively.
It’s only when the resources available for thinking are in short supply that these distinctions are meaningful...for these three moral approaches, the starting points of thought are different. It’s a description of inner thought process. Conclusions of these different processes only converge after all variables are accounted for...and this doesn’t always happen with humans.
So in answer to your question, when planning my own actions I first begin by taking possible consequences into account, whereas when judging others I begin by taking intentions into account, and asking what it says about that person’s psychology. Given enough time and processing power, I could use any of these systems for these tasks and come to the same conclusions, but since I do not possess either, it does make a practical difference which strategy I use.
While my personal values tend to align with the traditional consequentialism you affirm (e.g. denial of the doctrine of double effect), note that caring solely about “consequences” (states of the four-dimensional spacetime worm that is the universe) does not exclude caring about right or wrong actions. The “means” as well as the “ends” are part of the worldstates you have preferences over, though non-timeless talk of “consequences” obscures this. So you’re far too quick to get the standard consequentialist norms out of your approach to morality.
Totally agree. When I figured this out, everything clicked into place.
I don’t think I’m doing anything hasty. We should care about possible histories of the universe, and nothing else. This follows from basic moral facts that are hard to disagree with.
In a strict sense, we care enough about who does what and how that it can’t be fully thrown out. In a more approximate sense, it gets utterly swamped by other concerns on the big questions (anything involving human life), so that you approach “classical” conseqentialism as your task becomes bigger.
Except, you know, the original literal meaning. The one that means “able to cause the experience of awe”.
Things that cause awe may also be awesome/excellent/great/cool but it isn’t the same thing.
Shhhhhh; that’s a basilisk.
I’ll leave it as is, and hope no one tries to maximize literal “awe-causingness”. I think the explanation and implication is robust enough to prevent any funny business like that.
F’rinstance.
You nailed it. This post is exactly what is needed to cut away all the bullshit that gets thrown in to morality and ethics.
Well… if you had replaced all instances of “awesome” in the post with “cool”, someone trying to maximize literally that would manipulate the climate so as to bring about a new ice age. :-)
Or (if it is the planet Earth that is to be cooled) would arrange to have Earth leave the orbit of Sol and then blow it up into tiny pieces.
Maximize total coolness: Stop stellar fusion everywhere!
Still, clarifying that you did not “mean” the dictionary definition of the word at the center of your piece would have been better. Yes, it’s obvious, but why leave easily closed holes open?
Did you retract this because they said it was a Basilisk?
Where I come from, it connotes “I’m an idiot”. Expressing such a high degree of approbation about anything is seen as a sign of mental feebleness.
Are you sure this is universally true, or just true of certain ways of such expression? I used to be fairly sparse in my praise but after practicing it I found that you can praise people both genuinely and in a high-status way. The best description I can come up with is that it involves expressing only gratitude/admiration without tinging it at all with jealousy/the desire to manipulate/insecurity; the problem with this description is that the jealousy/desire to manipulate/insecurity part is often largely subconscious, so the usefulness of this depends on how good one is at introspecting.
I didn’t even say it was universally true.
I meant universally true (across all ways of saying “awesome”), but restricted to where you come from.
You know, I’m not sure that I’d rather be turned into a whale than orgasmium. In fact, I’m not really sure that I’d rather be an unmodified human than be turned into orgasmium, but I don’t lean towards orgasmium as the most awesome thing I could possibly be.
I think I’d be sad to turn into orgasmium without having been a whale first. :(
Not necessarily. If I tell a story of how I went white water rafting, and the person I’m talking to tells me that what I did was “awesome,” is he or she really thinking of the consequences of my white water rafting? Probably not. Instead, he or she probably thought very little before declaring the white water rafting awesome. That’s an inherent problem with using awesome with morality. Awesome is usually used without thought. If you determine morality based on awesomeness, then you are moralizing without thinking at all, which can often be a problem.
To say that something’s ‘consequentialist’ doesn’t have to mean that it’s literally forward-looking about each item under consideration. Like any other ethical theory, consequentialism can look back at an event and determine whether it was good/awesome. If you going white-water rafting was a good/awesome consequence, then your decision to go white-water rafting and the conditions of the universe that let you do so were good/awesome.
That misses my point. When people say awesome, they don’t think back on the consequences or look forward to consequences. People say awesome without thinking about it AT ALL.
OK, let’s say you’re right, and people say “awesome” without thinking at all. I imagine Nyan_Sandwich would view that as a feature of the word, rather than as a bug. The point of using “awesome” in moral discourse is precisely to bypass conscious thought (which a quick review of formal philosophy suggests is highly misleading) and access common-sense intuitions.
I think it’s fair to be concerned that people are mistaken about what is awesome, in the sense that (a) they can’t accurately predict ex ante what states of the world they will wind up approving of, or in the sense that (b) what you think is awesome significantly diverges from what I (and perhaps from what a supermajority of people) think is awesome, or in the sense that (c) it shouldn’t matter what people approve of, because the ‘right’ thing to do is something else entirely that doesn’t depend on what people approve of.
But merely to point out that saying “awesome” involves no conscious thought is not a very strong objection. Why should we always have to use conscious thought when we make moral judgments?
Those are both good points. I view it as a bug because I feel like too much ethical thought bypasses conscious thought to ill effect. This can range from people not thinking about the ethics of homosexuality because their pastor tells them it’s a sin, to not thinking about the ethics of invading a country because people believe it is responsible for an attack of some kind, whether it is or not. However, Nyan_Sandwich’s ethics of awesome does appear to bypass such problems, to an extent. It’s hardly perfect, but it appears like it would do its job better than many other ethical systems in place today.
I should note that it wasn’t ever intended to be a very strong objection. As a matter of fact, the original objection wasn’t to the conclusions made, but to the path taken to get to them. If an argument for a conclusion I agree with is faulty, I usually attempt to point out the faults in the argument so that the argument can be better.
Also, I apologize for taking so long to respond. Life (and Minecraft playing) interfered with my checking LessWrong, and I’m not yet used to checking it regularly, as I’m new here.
OK, so how else might we get people to gate-check the troublesome, philosophical, misleading parts of their moral intuitions that would have fewer undesirable side effects? I tend to agree with you that it’s good when people pause to reflect on consequences—but then when they evaluate those consequences I want them to just consult their gut feeling, as it were. Sooner or later the train of conscious reasoning had better dead-end in an intuitively held preference, or it’s spectacularly unlikely to fulfill anyone’s intuitively held preferences. (I, of course, intuitively prefer that such preferences be fulfilled.)
How do we prompt that kind of behavior? How can we get people to turn the logical brain on for consequentialism but off for normative ethics?
Am I to understand that you’re suggesting that we apply awesomeness to the consequences, and not the actions? Because that would be different from what I thought was being implied by saying “‘Awesome’ is implicitly consequentialist.” What I took that to mean is that, when one looks at an action, and decides whether or not it is awesome, the person is determining whether or not the consequences are something that they find desirable. That is distinct from looking at consequences and determining whether or not the consequences are awesome. That requires one to ALREADY be looking at things consequentially.
I think that, after thinking about it, when people use the term “awesome” they use it differently depending on how they view the world. If someone is already a consequentialist, that person will look at things consequentially when using the word awesome. If someone is already a deontologist, that person will look at the fulfillment of duties when using the word awesome. This is just a hypothesis, and I’m not very certain that it’s true, at the moment.
I’m not entirely sure how to prompt that sort of behavior, to be honest.
I meant that we should be looking at the awesomeness of outcomes and not actions, and that “awesome” is more effective at prompting this behavior than “good”. It looks like you get it, if I understand you correctly.
I find that somewhat implausible. If they are a hardcore explicit deontologist who, against the spirit of this article, has attempted to import their previous moral beliefs/confusions into their interpretation of “awesomism”, then yeah. For random folks who intuitively lean towards deontology for “good”, I think “awesome” is still going to be substantially more consequentialist. I would expect variation, though.
I wonder how you could test this. Maybe next year’s survey could have some scenarios that ask for an awesomeness ranking, and some other scenarios that ask for a goodness ranking, and some more with a rightness ranking. Then we could see how people’s intuitions vary with whether they claim to be deontologist or consequentialist, and with prompting wording. This could put the claims in the OP here on a more solid footing than “this works for me”.
Oh! That does make sense. I can see your point with that.
Possibly. I’m honestly not sure which hypothesis would be more correct, at the moment. Testing it would probably be a good idea, if we had the resources to do it. (Do we have the resources for that? I wouldn’t expect it, but weirder things have happened.)
I don’t think that would work. People here tend to be more consequentialist than I’ve seen from people not from here, so we’d probably not be able to see as much of a difference. Plus, the people here are hardly what I’d call normal and are more homogeneous than a more standard set of people. To effectively test that, we’d have to conduct that survey with a more random group of people. I mean, that survey would work, but the sample should be different than the contributors of LessWrong.
If the number of deontologists isn’t big enough to power our inference, the stats should tell us this. There are some though.
And I think going outside LW is unnecessary. This essay is hardly aimed at people-in-general.
That’s true. Perhaps we could sort them by what their results with “good” show us about which normative ethical theory they follow, then compare the results of each of the groupings between “good” and “awesome”. That would show us the results without consequentialists acting as white noise.
Good point, though it would be interesting to see if it could be applied to people outside of LW.
I like the word “awesome” a lot, but, as a particularly useful word in English, I have noticed it becoming very overused of late.
Awesome decomposes to full of awe, or inspiring of awe. Wondersome, full of wonder or inspiring of wonder, seems like it would be similarly useful.
Can anyone coin relevant neologisms?
The canonical English word is “Wonderful”, not “Wondersome”.
Note that “Awful” is also a word.
Yes, as is “Wondrous”. But “Wondersome” isn’t, usually.
I so wanted to come up with an objection or a counter-argument, after all, the whole premise is silly on the face of it. But instead I only recall an old commercial for something, which goes something like “Don’t fight awesome. It will only make it awesomer!”. Can’t find a link, though.
What does that mean? I can remove the thing about how it is for my meetup, I guess.
Thanks for catching the spelling error. I’ll move it to main.
happyness → happiness
(The content of this article is awesome. Spelling mistakes are not awesome...?)
Awesome summary, thanks!
Good post, but I can easily imagine awesome ways to starve hundreds of children.
“Awesome” to me means impressive and exciting in a pleasant manner. You seem to use it to mean desirable. If morality just means desirability, then there’s no reason to use the word morality. I think that for morality to have any use, it has to be a component of desirability, but not interchangeable with it.
You are right that it shouldn’t directly be that which is desirable. I guess there is a bug in the OP explanation in that “awesome” does not automatically feel like it should be outside yourself.
The excitingness connotation is a bug.
There’s a question in OkCupid that asks “In some sense, wouldn’t nuclear war be exciting?” which [I immediately answered no and rated everyone who said yes as completely undateable] I think falls into this same class of bug, but I can’t quite put my finger on how to describe it.
In some sense, it probably would, it’s just a sense that doesn’t have any weight to speak of in deciding whether a nuclear war is a good idea. Even reliably settled arguments are not one-sided; there are usually considerations aligned against even the most obviously right decisions, and denying the existence or correctness of such considerations damages one’s epistemic rationality.
I agree, but I’m really confused about how the creators of that website intended for the question to help in deciding whether a user should date a certain individual. I wouldn’t be able to tell if they answered “yes” because “Yay! Explosions!” and completely disregarded human deaths, or if they were saying “Indeed, there exists a sense in which nuclear war would create more excitement than a lack of one.”
I feel like “excitement” carries a positive connotation, particularly in American culture, which makes me uneasy about any “yes” answers. :(
I think this is a stock trick personality tests use: give test-takers a question where the denotation & connotation conflict; see how each test-taker resolves the conflict; people who resolve it in the same way (i.e. give the same answer) are presumably more similar in personality/weltanschauung than people who resolve it differently.
This seemed obvious to me. The problem is the lack of “meta” options; where’s the hidden checkbox for people who saw all six possible chains of reasoning, analyzed each of them, have probabilistic answers on four of those, along with an objection to the premises of the fifth and want to scream at the sixth for its stupidity? (bogus example)
Some of us don’t like limiting ourselves to only one possible interpretation of a statement or question. Some of us consider at least four different interpretations by default as a matter of convenience, and only then afterwards settle on the one most likely to have been “intended” within context.
This behavior is the one I prefer, not the behavior of automatically resolving to one specific preferred interpretation without noticing the others. The particular example question, like so many others on that site, provides no means of distinguishing between these behaviors, other than a very time-consuming reading of all the comments (which also requires time investment from the question-answerer by writing a comment in explanation, but this in turn requires a specific response, which partly defeats the point of going meta).
Other websites sometimes sidestep the issue entirely by first testing for traits that have these effects and often outright rejecting those (potential members) that would “question” the questions, thereby pre-filtering members for compatibility with their testing methodologies.
It seems to me that people who argue with questionnaires might have a good bit in common with each other, and likewise for people who don’t argue with questionnaires.
A crude approach would be to just match up people by the number of questions they argue with and the amount they write. It would be more sophisticated to just let people see each other’s comments on the questionnaire.
You pick the answer among the first four for which your probabilistic answer is highest, mark all of the first four answers as acceptable in a potential date, and explain your reasoning in the comment section.
Oh, no, I think you misunderstand what parts of the question-problem I was talking about. To better characterize the bogus example, let’s flesh it out a bit:
Q: Which is healthier?
( ) Bleggs
( ) Rubes
( ) Both
( ) Neither
Now obviously, the first four chains of reasoning go as follow:
In most specific cases, presented an arbitrary thought-experiment-style choice of being handed a blegg or a rube, neither is good. Owning either a Blegg or a Rube will make you less physically healthy. So among the four options, “Neither” is clearly better. This is pretty certain, though some crack scientists do claim conclusive evidence that owning both at the same time can be healthy. But I don’t put much faith in their suspicious results.
“But!”, screams the more logically-minded, “the question isn’t about which of the four choices presented is better—it’s clear that the fourth option is intended to mean ‘neither bleggs nor rubes are healthier’, not that you should pick neither. So the thought experiment implied means you have to pick one of the two, and in that case Bleggs are clearly marginally better!” Okay, fine. So Bleggs are most likely healthier if you have to choose one of the two—they’re unlikely to be equally unhealthy or healthy, after all.
But let’s take a step back for a moment. If you look at the grand scheme of things, at a macro scale, Rubes do reduce the total amount of Bleggs and Rubes, because each Rube will destroy at least five Bleggs. So in the grand scheme of things, having Rubes is healthier than having Bleggs, if we can’t attack the source! Clearly, both of the previous chains of reasoning are too narrow-minded and don’t think of the big picture. On a large scale, the Rubes are indeed healthier-per-unit than the Bleggs. Probably.
Ah, but what if it is implied that this is an all-or-nothing paradigm, and what if others interpret it this way? Then, obviously, the complete absence of both Bleggs and Rubes would be a Very Bad Thing™, since we require Bleggs and Rubes to produce Tormogluts, a necessary component of modern human prosperity! Thus, both are (probably) healthier than only having one or the other (and obviously better than neither).
...
On the other hand, Bleggs and Rubes are unnatural, unsustainable in the long term, and we will soon need to research new ways to produce Tormogluts. Most people who see you advocating for them will automatically match you as The Enemy, so you should pick “Neither”, even though that’s not what the question implies. But this is a shitty situation, and if someone reading my answer to this question interprets it this way, I don’t care to befriend them anyway. So I reject this answer.
And let’s not even think of what the Kurgle fanatics have to say about this question. The horror.
Assuming all of the above went through your mind in a few seconds very rapidly when you first read the question… what answer do you choose? Do you also put a preference filter for other people’s answers? Just choosing the higher or most confident probability from the above isn’t going to cut it if this question matters to you a lot.
I used to pick the “least bad” answer in such cases, but then I decided to clear all my previous answers, and now when I see a question to which the answer I wish I could give is “Mu” or “ADBOC” or “Taboo $word” or “Avada Ked--[oh right, new censorship policy, sorry]”, I just skip it.
Good idea, although it’s still useful to be able to choose in case you need to fill in all the questions.
looks baffled
Avada Kedavra is the trigger phrase for a death spell. In this context it implies the urge to respond to a stupid, misleading, or perhaps disingenuous question with the application of power, violence, or killing rather than compliance with the form of the question. Since it isn’t specific or in reference to any actual people, this doesn’t technically violate the new censorship policy, but it is still close enough that I laughed at Army’s joke!
I’d guess that “people who write match questions on OkCupid” is specific enough that actually advocating violence against them would be against the spirit of the policy. (I can’t imagine someone actually doing that, or imagine someone imagining someone doing that, but I’ve lost count of the times I’ve underestimated the validity of Poe’s Law.)
Ah, right. That is pretty funny, actually.
Yep, that’s the reasoning I followed in the earlier comment. A person who saw all six possible chains would decide the question wasn’t useful and would refuse to answer it, hopefully. ^_^
It’s up to the users: you have to both provide your own answer and decide which answers you would consider acceptable in a potential match (and specify how much of a big deal it would be for a potential match to pick a different answer). If you want to provide your answer but don’t want to discriminate potential matches based on their answers, you can mark all possible answers as acceptable or equivalently mark the question as irrelevant. (And many of the questions are written by users of the site, rather than by its creators; I don’t remember whether the one about nuclear war is.)
The matching algorithm is described here. (Its unBayesianity makes me cringe—the rarer a particular answer to a particular question is, the larger the effect of someone picking that answer ought to be—but still.)
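For concreteness, here is a rough sketch of the kind of calculation being described, as I understand the published explanation; the importance point values and function names are my own guesses for illustration, not the site’s actual code:

    from math import sqrt

    # Rough sketch of an OkCupid-style match percentage. The importance point
    # values below are assumptions for illustration, not the site's exact numbers.
    IMPORTANCE_POINTS = {"irrelevant": 0, "a little": 1, "somewhat": 10, "very": 50, "mandatory": 250}

    def satisfaction(my_prefs, their_answers):
        """How well their answers satisfy my stated preferences, as a 0..1 score."""
        earned = possible = 0
        for question, (acceptable_answers, importance) in my_prefs.items():
            points = IMPORTANCE_POINTS[importance]
            possible += points
            if their_answers.get(question) in acceptable_answers:
                earned += points
        return earned / possible if possible else 0.0

    def match_percent(a_prefs, a_answers, b_prefs, b_answers):
        # Geometric mean of the two one-way satisfaction scores. Note that a rare
        # answer earns exactly as many points as a common one, which is the
        # "unBayesianity" complained about above.
        return 100 * sqrt(satisfaction(a_prefs, b_answers) * satisfaction(b_prefs, a_answers))

(If I recall correctly, the real algorithm also subtracts a margin of error based on how many questions the two people have answered in common, but that detail doesn’t change the point.)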
I wish you could condition on whether the user ignored the question or not, but I don’t think you can. Also, I’m pretty sure the nuclear war one wasn’t a user question.
That’s the point: you should (in particular) be comfortable with entertaining arguments for horrible things that carry positive connotations (just don’t get carried away :-). The correctness of these arguments won’t in general depend on whether their connotations align with those of the decisions reached upon considering all relevant arguments.
You’re saying a lot of technically correct things that don’t seem to be engaging what I’m saying. =/
Yes, I agree that there is some value in entertaining “yes, during a nuclear war, there may be some more (positively) exciting things than in peacetime.” This is something to take into account when deciding whether or not you should go to nuclear war.
Meanwhile, if you’re trying to use the question to gauge the moral compass of the answering person, the “nuclear war is great and fun thing!” answer is not readily distinguishable from the “I am carefully entertaining the argument for a horrible thing” answer. Which is similar to the way “awesomeness” sometimes leads to … awesome starvation schemes?
I think that using factual beliefs to signal something other than knowledge about the world is a bad idea. It encourages lying to yourself and others.
Still, in questions which have one correct answer (e.g. “Which is bigger, the earth or the sun” or “STALE is to STEAL as 89475 is to...”), I only mark the correct answer and “I don’t know” (if it’s there) as acceptable, and I mark the question as “Mandatory” if the “I don’t know” answer is available. It’s OK to be ignorant, but it’s not OK to not admit it.
That’s not even the worst one: “Which is worse: starving children or abused animals?” with possible answers “Starving children”, “Abused animals”, “Neither, both are good” and “Neither, they are equally bad.” I’m curious to know whether there’s actually somebody who picks “Neither, both are good.”
I saw someone who answered that way, but it must have been as a joke. Not a good thing to do for matching purposes...
Making jokes in data that’s going to have statistics done on it by computers is one of my pet peeves. For example, I’ve seen obviously sarcastic book reviews on Amazon.com where the number of stars matched the letter of the review rather than its spirit. (And IIRC, Yvain mentioned how people answered stuff like “over 9000” in the LW survey.)
Oh yeah! I remember that one now that you mention it! I wish I knew how they came up with these.
I think, personally, I would answer “Both are bad, but I’m not going to bother quantifying which is worse until I am confronted with an actual situation in which I have to.” Which is definitely not the same as “both are equally bad.” Bah!
“There is nothing so exhilarating in all the world as being shot at with no result.”
Attributed, in various forms, to Winston Churchill. What war is, is intense. Soldiers who have seen combat duty often miss it in peacetime, or in civilian life. In Britain after the Second World War, even many civilians found the peace a bit of a letdown.
So yes, in a very ordinary sense, nuclear war would be exciting, especially if you survive it.
Imagine you have the superpower: Movie Hero. You are guaranteed to escape from all situations, however dire, and whatever extremity of privation and suffering you may have to go through (but you will have to go through it) in the process of clawing your way to the Happy Ending. You also get a chance to play a pivotal role in whatever world crisis forces itself on you. How would you then feel about seeing the world slide towards imminent nuclear war?
Pretty miserable.
Unfortunately, I think I would prefer not to go on dates with people who spend too much time imagining that they are Movie Heroes. A little bit is okay! =P
Wouldn’t the failure to acknowledge all the excitement nuclear war would cause be an example of the horns effect?
I can understand answering no for emotional or political reasons, but rating the epistemically correct answer as undateable? That’s… a good reason for me to answer such questions honestly, actually.
So they have a mechanism for you to write an explanatory comment, right? But they don’t allow you to filter on the existence of an explanatory comment, which would allow someone to explain their thought process—which I think is really necessary, because “exciting” does strongly connote “good idea” the way “awesome” does. In which case, I would expect a person trying to avoid the horns effect to just refuse to answer the question, on the grounds that it’s misleading as a moral-compass gauge: answering “yes” might get them filtered out, since you can’t condition on the existence of an explanation. So I expected most “yes” answers to be from generally unaware people. I don’t think that question was intended for rational arguments for or against nuclear war; I think it was intended for … morality. I admit “completely undateable” is an exaggeration, but I think I decided engaging that question was a red flag for immaturity.
But that’s why I’m really confused about why that question was there in the first place: it doesn’t distinguish those two groups of people—the ones that are thinking really, really carefully and the ones that aren’t thinking at all. It’s bad for morality!
What senses of ‘exciting’ do you think exist? Why wouldn’t you date someone who thinks that interesting times (in the Chinese sense) are exciting?
What’s the Chinese sense? o.O
This is a reference to an alleged Chinese proverb.
Well, it might be exciting for people fighting it from within their anti-radiation shelters, but it’d be such a drag for people having their friends and family killed. The total excitingness I’m pretty sure would be negative.
That’s why I thought it was a useless question. Because it’s not asking for the overall total excitingness of nuclear war. It’s asking if there exists a type of excitingness in which nuclear war comes out net positive. Which, probably yes, but it’s just not very relevant to anything. =P
I dunno—it has the very nice (IMO) side effect of automagically importing Fun Theory into ethics.
They would be awesome to you but awful to the children. Back to the problem of inter-personal utility comparisons…
Great! I really like this.
According to Leibniz, this is the most awesome of all possible worlds.
Falsified by diarrhea. Next!
(This is not a good characterization of Leibniz’s actual conceptual system, for what it’s worth; the arguments that this is the “best of all possible worlds” are quite technical and come from the sort of intuitions that would later inspire algorithmic information theory; certainly neither blind optimism nor psychologically contingent enthusiasm about life’s bounties were motivating the arguments. Crucially, “best” or similar, unlike “awesome”, is potentially philosophically simple (in the sense of algorithmic information theory), which is necessary for Leibniz’s arguments to go through. (This comment is directed more at the general readership than the author of the comment I’m replying to.))
My recollection of Leibniz’s view is dim but I recollect that the essence of it is that the perfection of the world is a consequence of the perfection of God. It would reflect poorly on the Omnipotence, Omniscience, Benevolence & Supreme Awesomeness &c of the Deity and Designer if he bashed out some second-rate less than perfectly good (or indeed merely averagely awesome) world. For the benefit of the general readership, the book to read on this is Candide by Voltaire. You will never see rationalists in quite the same way again… :-)
Link to Candide
I think this comment reinforces Will_Newsome’s point. The textbook Rhetoric, Logic, and Argumentation: A Guide for Student Writers by Magedah Shabo (quite correctly) uses Voltaire’s Candide as the very first example of a straw man fallacy on page 95.
Nice catch!
All the more reason to read Candide I would say…
Personally, I read Candide as a parody and a satire, not anything that pretends for one millisecond to be a rational argument.
It’s an extended riff on the meme “the best of all possible worlds” and it’s a lot of fun.
I don’t agree with Leibniz, but I do find his “best of all possible worlds” concept really useful for talking about what utilitarians try to do.
I don’t either, but I find “the best of all possible worlds” concept very interesting, along with the related notion that God could not possibly create a world that was anything other than “the most awesome of all possible worlds”—given the predicates traditionally ascribed to God.
You can take this as a reductio ad absurdum of the notion of God, as many do. But presumably the task of friendly AI (at its most benevolent) must be to perform (or figure out) what actions should be taken to promote awesomeness?
More extravagantly, the task of friendly AI is to ‘build God.’
(And get Her right this time :-)
I tend to use the word fun.
I’m sorry, this is all I could think about the whole time reading your post: http://www.youtube.com/watch?v=3b1iwLIMmRQ
Seriously though, this terminology has been employed by Starglider in SL4 circles for ages. His explanation of a positive Singularity would be something like, “Everything gets really, really awesome really really fast.” I do think it’s a great word, despite the negative connotations.
That’s good to know.
In a world of hardship and mediocrity (hopefully not yours), even when implementing “degree of awesomeness” on a scale, it may be a bit of a stretch going “Hey, this cubicle job is more awesome than that cubicle job! … twitching smile, furtive glances around”, or even “Awesome, another ration of rice from USAID, that means I may survive yet another day! :-))”
What if I’m constructed such that my sense of “awesome” involves something that Babyeaters would blanch at?
(If you need a mental image, let’s say… infinities of skinless, eyeless, limbless children being perpetually raped to death and then resuscitated while their parents watch on, with needles injected into everyone’s brains to maximally amplify the negative aspects of their experiences and to permanently strip them of the capacity to become numb to it)
Mental image not required.
That doesn’t sound awesome.
If you thought it was, I’d have to conclude that you were wrong.
Also,
To you. But it might to someone else. Morality is the difficult problem of how groups of humans or other entities interact with each other so as to realise their own preferences without infringing on others’ preferences. Just declaring or pretending that everyone has the same preference doesn’t solve or even address the problem of morality. It is a non-starter. I know it is supposed to be naive, but it is too naive.
Then which one of us is right just comes down to an appeal to force, doesn’t it? (and all those various kinds of meta-force).
I.e., if I can incarnate my Diabolus-1 Antifriendly AI and grow it up to conquer the universe before you can get your Friendly AI up to speed, I win.
This one is actually really subtle, and I forget the solution, and it’s in the metaethics sequence somewhere (look for pebblesorters), but the punchline is that the outcome sucks.
So yes, you and your Diabolus-1 “win”, but the outcome still sucks.
Sure, but… okay, I’m going to go concrete here.
I suffered a lot of abuse as a child; as a result, sometimes my mind enters a state where its adopted optimization process is “maximize suck”. In this state, I tend to be MORE rational about my goals than I am when I’m in a more ‘positive’ state.
So I don’t have to stretch very far to imagine a situation where the outcome sucking—MAXIMALLY sucking—is the damned POINT. Because fuck you (and fuck me too).
So the outcome still sucks, if you are not maximizing actual awesomeness.
Not necessarily. Plenty of people think the Saw films are awesome. Plenty of people on 4chan think that posting flashing images to epileptic support boards is awesome, and pushing developmentally disabled children to commit suicide and then harassing their parents forever with pictures of the suicide is awesome.
They will, in fact, use “awesome” explicitly to describe what they’re doing.
I thought the first Saw film was awesome. It was a cool gory story about making the most of life. It’s fiction, so nobody actually got hurt and there is no secondary consideration of awesomeness there.
Some people think that the prospect of making disabled kids commit suicide is awesome; fewer people think that actually doing so is awesome. I don’t think that people who actually do so are awesome.
I think that’s a relatively standard use of “awesome”.
Much for the same reasons that people can be mistaken about their own desires, people can be mistaken about what they would actually consider awesome if they were to engage in an accurate modeling of all the facts. E.g. People who post flashing images to epileptic boards or suicide pictures to battered parents are either 1) failing to truly envision the potential results of their actions and consequently overvaluing the immediate minor awesomeness of the irony of the post or whatever vs. the distant, unseen, major anti-awesomeness of seizures/suicides, or 2) they’re actually socio- or psychopaths. Given the infrequency of real sociopathy, it’s safe to assume a lot of the former happens, especially over the impersonal, empathy-sapping environment of the Internet.
The answer you are referring to is probably the utilitarian one, that you morally-should maximise everyone’s preferences, not just your own. But that’s already going well beyond the naive “awesomeness” theory presented above.
This is assuming, of course, that “awesome” is subjective. Maybe if we had a universal scale of awesomeness...
You guys are thinking too hard about this.
Either don’t think about it, and maximize awesome (of which there is only one).
Or read the metaethics sequence, where you will realize that you need to maximize awesome (of which there is only one).
Look, here’s the problem with that whole line of reasoning, right in the joy of the merely good:
First, our esteemed host says:
And then, he says:
But then suddenly, at the end, he flips around and says:
Speaking as a living child who has been dragged ONTO train tracks, I don’t buy it. Domination and infliction of misery are just as “awesome” as altruism and sharing joy.
I guess the crux of my point is, if you don’t think that the weak are contemptible and deserve to suffer, what are you gonna do about it? Because just trying to convince people that it’s less “awesome” than giving them all free spaceships is going to get you ignored or shot, depending on how much the big boys think you’ll interfere with them inflicting suffering for the lulz.
This doesn’t seem to be true over the long haul—somehow, the average behavior of the big boys has become less cruel. Part of this is punishment, but even getting punishment into place takes convincing people, some of them high status, that letting the people in charge do what they please isn’t the most awesome alternative.
Alternatively, maybe the cruelest never get convinced, it’s just people have been gradually solving the coordination problem for those who don’t want cruelty.
At least, at the levels most people operate at. Things tend to get better from the top down; for the bottom-dwellers, things are still pretty desperate and terrifying.
I agree, but I think your initial general claim of no hope for improvement was too strong.
Let me put it this way: I would be willing to bet my entire net worth, if it weren’t negative, that if some kind of uplifting Singularity happens, my broke ass gets left behind because I won’t have anything to invest into it and I won’t have any way to signal that I’m worth more than my raw materials.
You’re wrong.
By the point of the singularity no human has any instrumental value. Everything any human can do, a nanotech robot AI can do better. No one will be able to signal usefulness or have anything to invest; we will all be instrumentally worthless.
If the singularity goes well at all, though, humanity will get its shit together and save everyone anyways, because people are intrinsically valuable. There will be no concern for the cost of maintaining or uplifting people, because it will be trivially small next to the sheer power we would have, and the value of saving a friend.
Don’t assume that everyone else will stay uncaring, once they have the capacity to care. We would save you, along with everyone else.
Downvotes for being unreasonably dramatic.
Rather than downvoting, how about trying to explain why “caring” is a universal value to someone who’s never experienced “caring”? How about trying to explain why, in all the design-space of posthuman optimization processes, I should bet that the one that gets picked is the one where “caring” applies to my sorry ass?
We have enough resources to feed and shelter the world right now, and we don’t. So saying that “once we have the resources to care, we will” seems like the sort of BS that our esteemed host warns us about—the assumption that just because something is all-powerful and all-wise, it will be all-good.
I grew up worshipping a Calvinist dick of a deity, so pull the other one.
And another thing:
It’s all well and good for YOU to claim that people are intrinsically valuable, you probably have enough resources to avoid getting spit on and lectured about “bootstraps” when you say it. Some of us aren’t so lucky.
If whatever it is that gets to do the deciding is evaluating people based on their use as raw materials, things have gone horribly wrong. In fact, that’s basically the exact definition of “horribly wrong” that seems to be in common use around here. As a corollary, there’s a lot that’s wrong with the current state of affairs.
It’s a tricky point, I expect humans (in their present form) will also have insignificant terminal value, compared to other things that could be created. The question of whether the contemporary humans will remain in some form depends on how bad it is to discard the old humans, compared to how much value gets lost to inefficiency by keeping (and improving) the old humans. Given how much value could be created de novo, even a slight inefficiency might be more significant than any terminal value the present humanity contributes. (Discarding humans won’t be a wrong outcome if it happens, because it will only happen if it turns out to be a better outcome, assuming a FAI.)
But there is no evidence that the real world works on the Awesomeness theory.
Even if we did, something non-awesome would STILL be able to “win” if it had enough resources and was well-optimized. At which point, why isn’t its “non-awesome” idea more important than your idea of “awesome”?
(Yes, this is the old Is-Ought thing; I’m still not convinced that it’s a fallacy. I think I might be a nihilist at heart.)
If you taboo “important” you might discover you don’t know what you’re talking about.