Good point. I guess the only way to counter these odd scenarios is to point out that everyone’s utility function is different, and then the question is simply whether the responder wants to self-modify (or would be happier in the long run doing so) even after hearing some rationalist arguments to clarify their intuitions. The question of self-modification is a little hard to grasp, but at least it avoids all these far-fetched situations.
We’ve been over this: either “good aims” and “aims you should have” imply some kind of objective value judgment, which is incoherent, or they merely imply ways to achieve my final aims more efficiently, and we are back to my claim above, since that is included under the umbrella of “guiding my actions.”
Eliezer’s point in that post was that there are more and less natural ways to “carve reality at the joints”: however much we might say that a definition is just a matter of preference, there are useful definitions and less useful ones. The conceptual analysis lukeprog is talking about does call for the rationalist taboo, in my opinion, but simply arguing about which definition is more useful, as Eliezer does (if we limit conceptual analysis to that), does not.
Within 20 seconds of arguing about the definition of ‘desire’, someone will say, “Screw it. Taboo ‘desire’ so we can argue about facts and anticipations, not definitions.”
If you don’t have the patience to do philosophy, or you don’t think it’s of any value, by all means do something else: argue about facts and anticipations, whatever precisely that may involve. Just don’t think that in doing this latter thing you’ll address the question philosophy is interested in, or that you’ve said anything at all so far to show philosophy isn’t worth doing.
You’re tacitly defining philosophy as an endeavor that “doesn’t involve facts or anticipations,” that is, as something not worth doing in the most literal sense. Such “philosophy” would be a field defined to be useless for guiding one’s actions. Anything that is useless for guiding my actions is, well, useless.
What I don’t understand is how either A or B is supposed to be you in the sense of being the same person as current you while the other is not?
A will probably call A the real you, and B will probably call B the real you. Other people might find them both the same as the current you, but might take sides on the labeling issue later if A or B does something they like or don’t like. It’d surely be most useful to call both A and B “the same person as current you” in the beginning, at least, because they’d both be extremely similar to the current you. A might change more than B as time goes on, leading some to prefer identifying B as the “real” you (possibly right away, to dissipate the weirdness of it all), but it’s all a matter of preference in labels. After all, even now the you that is reading this post is not the same as the you of 5 minutes ago. English simply isn’t well-equipped to deal with the situation where a person can have multiple future selves (at least, not yet).
I think he’s just pointing out that all you have to do is change the scenario slightly and then my objection doesn’t work.
Still, I’m a little curious about how someone’s ability to state a large number succinctly makes a difference. I mean, suppose the biggest number the mugger knew how to say was 12, and they didn’t know about multiplication, exponents, up arrow notation, etc. They just chose 12 because it was the biggest number they could think of or knew how to express (whether they were bluffing totally or were actually going to torture 3^^^3 people). Should I take a mugger more seriously just because they know how to communicate big numbers to me?
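For anyone unfamiliar with the notation being referenced, here is a minimal Python sketch of Knuth’s up-arrow notation, assuming the standard recursive definition; it only evaluates tiny arguments, since 3^^^3 itself is far beyond anything computable:

```python
# A sketch of Knuth's up-arrow notation (assumed standard definition),
# evaluated only for tiny arguments -- 3^^^3 itself is not computable.
def up_arrow(a, n, b):
    """a followed by n up-arrows, then b; n = 1 is plain exponentiation."""
    if n == 1:
        return a ** b
    result = a
    for _ in range(b - 1):
        result = up_arrow(a, n - 1, result)
    return result

print(up_arrow(3, 1, 3))  # 3^3  = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
# 3^^^3 = 3^^(3^^3): a power tower of 3s more than seven trillion levels high.
```

The point of the notation is just that a few extra symbols buy astronomically more magnitude, which is exactly the ability the comment is questioning the relevance of.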
Newcomb’s Problem is silly. It’s only controversial because it’s dressed up in wooey vagueness. In the end it’s just a simple probability question and I’m surprised it’s even taken seriously here. To see why, keep your eyes on the bolded text:
Omega has been correct on each of 100 observed occasions so far—everyone [on each of 100 observed occasions] who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars.
What can we anticipate from the bolded part? The only actionable belief we have at this point is that 100 out of 100 times, one-boxing made the one-boxer rich. The details that the boxes were placed by Omega and that Omega is a “superintelligence” add nothing. They merely confuse the matter by slipping in the vague connotation that Omega could be omniscient or something.
In fact, this Omega character is superfluous; the belief that the boxes were placed by Omega doesn’t pay rent any differently than the belief that the boxes just appeared at random in 100 locations so far. If we are to anticipate anything different knowing it was Omega’s doing, on what grounds? It could only be because we were distracted by vague notions about what Omega might be able to do or predict.
The following seemingly critical detail is just more misdirection and adds nothing either:
And the twist is that Omega has put a million dollars in box B iff Omega has predicted that you will take only box B.
I anticipate nothing differently whether this part is included or not, because nothing concrete is implied about Omega’s predictive powers—only “superintelligence from another galaxy,” which certainly sounds awe-inspiring but doesn’t tell me anything really useful (how hard is predicting my actions, and how super is “super”?).
The only detail that pays any rent is the one above in bold. Eliezer is right that one-boxing wins, but all you need to figure that out is Bayes.
EDIT: Spelling
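To make the “all you need is Bayes” claim concrete, here is a rough sketch of the expected-value comparison. The uniform prior (Laplace’s rule of succession) and the treatment of each group as 100 observed cases are my illustrative assumptions, not part of the original problem statement, which doesn’t give a prior or say how the 100 observations split between one-boxers and two-boxers.

```python
# A rough sketch of the expected-value comparison, assuming a uniform prior
# (Laplace's rule of succession). Treating each group as 100 observed cases
# is an illustrative simplification, not part of the original problem.

def laplace(successes, trials):
    """Posterior mean of a probability under a uniform prior."""
    return (successes + 1) / (trials + 2)

p_full_if_one_box = laplace(100, 100)  # every observed one-boxer found $1,000,000
p_full_if_two_box = laplace(0, 100)    # every observed two-boxer found box B empty

ev_one_box = p_full_if_one_box * 1_000_000
ev_two_box = p_full_if_two_box * 1_000_000 + 1_000

print(f"EV(one-box) ~ ${ev_one_box:,.0f}")  # about $990,196
print(f"EV(two-box) ~ ${ev_two_box:,.0f}")  # about $10,804
```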
The problem with basing decisions on events with a probability of 1-in-3^^^^^3 is that you’re neglecting to take into account all kinds of possibilities with much higher (though still tiny) probabilities.
Especially the probability that the means by which you learned of these probabilities is unreliable, which is probably not even very tiny. (How tiny is the probability that you, the reader of this comment, are actually dreaming right now?)
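As a toy illustration of that point (all numbers here are made-up stand-ins, and the work is done in log space because 1/3^^^^^3 cannot be represented directly):

```python
# Toy numbers only: 1/3^^^^^3 cannot be represented, so work in log10 space
# with a stand-in exponent that is already absurdly generous to it.
log10_p_claimed_event = -1000   # stand-in for "1 in 3^^^^^3" (really far smaller)
log10_p_misinformed   = -12     # one-in-a-trillion chance the source/setup is wrong

# How many orders of magnitude the "my information is unreliable" hypothesis
# outweighs the claimed event by:
print(log10_p_misinformed - log10_p_claimed_event)  # 988
```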
That’s presumably why he said “my.”
Also, how far do you look away?
I think my response to lukeprog above answers this in a way, but it’s more just a question of what we mean by “help me decide.” I’m not against people helping me be less wrong about the actual content of the territory. I’m just against people helping me decide how to emotionally respond to it, provided we are both already not wrong about the territory itself.
If I am happy because I have plenty of food (in the map), but I actually don’t (in the territory), I’d certainly like to be informed of that. It’s just that I can handle the transition from happy to “oh shit!” all by myself, thank you very much.
In other words, my suspicion of anyone calling themselves an Empathetic Metaethicist is that they’re going to try to slide in their own approved brand of ethics through the back door. This is also a worry I have about CEV. Hopefully future posts will alleviate this concern.
And since we are humans, it helps to retrain our emotions: “Relinquish the emotion which rests upon a mistaken belief, and seek to feel fully that emotion which fits the facts.”
I’d rather call this “self-help” than “meta-ethics.” Why self-help? Because...
But it’s a standard LW perspective to assert that there is a territory, and some maps of (parts of) it are right and others are wrong.
...even if my emotions are “wrong,” why should I care? In this case, the answer can only be that it will help me derive more satisfaction out of life if I get it “right”, which seems to fall squarely under the purview of self-help.
Of course we can draw the lines between meta-ethics and self-help in various ways, but there is so much baggage in the label “ethics” that I’d prefer to get away from it as soon as possible.
it’s true that you should do what you should even if you have no idea what “should” means.
How do I go about interpreting that statement if I have no idea what “should” means?
Upvoted for lucidity, but Empathetic Metaethics sounds more like the whole rest of LessWrong than metaethics specifically.
If there are supposed to be any additional connotations to Empathetic Metaethics, that would make me very wary. I am wary of the connotation that I need someone to help me decide whether my feelings align with the Truth. I always assumed this site is called LessWrong because it generally tries not to drive readers toward any particular conclusion, but simply away from misguided ones, so they can make their own decisions unencumbered by bias and confusion.
Austere-san may come off as a little callous, but Empathetic-san comes off as a meddler. I’d still rather just be a friendly Mr. Austere supplemented with other LW concepts, especially from the Human’s Guide to Words sequence. After all, if it is just confusion and bias getting in the way, all there is to do is sweep those errors away. Any additional offer of “help” in deciding what it is “right” for me to feel would make my Spidey sense tingle pretty hard.
Usage. Dave interprets a sign from Jenny as referring to something, then he tries using the same sign to refer to the same thing, and if that usage of the sign is easily understood it tends to spread like that. The dictionary definition just records the common usages that have developed in the population.
For instance, how does the alien know what Takahiro means when he extends his index finger toward earth in this Japanese commercial? The alien just assumes it means he can find more chocolate bars on planet earth. If the alien gets to earth and finds more chocolate, he(?) is probably going to decide that his interpretation of the sign is at least somewhat reliable, and update for future interactions with humans.
About thinking without words?
When I was 10 years old I had a habit of talking to myself. Gradually my self-talk got more and more non-standard, to the point where it would be impossible for others to understand, as I realized I didn’t need to clarify the thoughts I was trying to convey to myself. I would understand them anyway. I started using made-up words for certain concepts, just as a memory aid. Eventually words became exclusively a memory aid, something to help my short-term memory stay on track, and I would go for minutes at a time without ever using any words in my thought processes.
I think the reason I started narrating my thoughts again is that I found it really hard to communicate with people due to the habits I had built up during all those conversations with myself. I would forget to put in context, use words in unusual ways, and otherwise fail to consider how lost the listener might be. You can have great ideas, but if you can’t communicate them they don’t count for anything socially—that is the message from society. So I think there is effectively some social pressure to use natural languages (English, etc.) in your thought processes, obscuring the fact that it can all happen more efficiently with minimal verbal interference. I think words can be a strong corrupting influence on the thought process in general, the short argument being that they are designed for the notoriously limited and linear process of mouth-to-ear communication. There is a lot more I could say about that, if anyone is interested.
The only problem is, part of the meaning of the post is its context, and sometimes the author’s identity provides context. Like when multiple people are having a discussion and someone says, “As I wrote above...” or something. They could just link everything, but it’d be best if the anti-kibitzer assigned random names or numbers to each commenter in a given thread—or something like that. That way you’d at least be able to follow a discussion. Or does it already do that?
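A hypothetical sketch of that per-thread pseudonym idea, in Python; the function and names here are purely illustrative, not the actual anti-kibitzer script:

```python
import hashlib

# Hypothetical sketch: the same commenter gets the same placeholder within one
# thread, but labels don't carry across threads. Function and names are
# illustrative, not the actual anti-kibitzer script.
def pseudonym(author, thread_id, n_labels=1000):
    digest = hashlib.sha256(f"{thread_id}:{author}".encode()).hexdigest()
    return f"Commenter #{int(digest, 16) % n_labels}"

print(pseudonym("alice", "newcombs-problem"))  # e.g. "Commenter #427"
print(pseudonym("alice", "newcombs-problem"))  # same label within this thread
print(pseudonym("alice", "another-thread"))    # (almost certainly) a different label
```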
Assuming I haven’t totally lost track of context here, I think I am saying that moral language works for persuasion (partially as Dark Arts), but is not really suitable for intellectual discourse.
The thought of mentioning other reasons why to punish (such as to make people behave more to your liking) did cross my mind, but I thought it was obvious enough. In fact, there are still other reasons to punish. Someone might reply to your post, “You are thinking like an engineer. I am thinking like a social animal. I want to know when I should punish: I want to use my understanding of social dynamics to make people respect me more. I want to know what it signals about me when I punish someone.”
As I said here, there are a lot of different reasons to use moral language (most of them sort of dark-arts-ish, which is why I guess that post was downvoted), and likewise there are a lot of different reasons to punish.
Isn’t Pascal’s mugging just this?
I’d just walk away. Why should I care? If I thought about it for so long that I had some lingering qualms, and I got mugged like that a lot, I’d self-modify just to enjoy the rest of my life more.
As an aside, I don’t think people really care that much about other people dying unless they have some way to connect to it. Someone probably was murdered while you were reading this comment. Is it going to keep you up? On the other hand, people can cry all night about a video game character dying. It’s all subjective.