It seems to me that one needs to place a large amount of trust in one’s future self to implement such a strategy. It also requires that you be able to predict your future self’s utility function. If you have a difficult time predicting what you will want and how you will feel, it becomes difficult to calculate the utility of any given precommitment. For example, I would be unconvinced that deciding to eat a donut now means that I will eat a donut every day, or that not eating a donut now means I will not eat a donut every day. Knowing that I want a donut now and will be satisfied with that seems like an immediate win, while I do not know that I will be fat later. To me this seems like trading a definite win for a definite loss + potential bigger win. Also, it is not clear that there wouldn’t be other effects. Not eating the donut now might make me dissatisfied and want to eat twice as much later in the day to compensate. If I knew exactly what the effects of action EAT DONUT vs NOT EAT DONUT were (including mental duress, alternative pitfalls to avoid, etc.), then I would be better able to pick a strategy. The more predictable you are, the more you can plan a strategy that makes sense in the long term. In the absence of this information, most of us just ‘wing it’ and do what seems best at the given moment. It would seem that deciding to be a TDT agent is deciding to always be predictable in certain ways. But that also requires trusting that your future self will want to stick to that decision.
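The “definite win vs. definite loss + potential bigger win” trade can be made concrete with made-up numbers. Everything here is hypothetical, purely for illustration: the utilities and the probability that today’s choice actually binds future behavior are invented, and the real difficulty is that you don’t know that probability.

```python
# Hypothetical utilities for the donut dilemma described above.
# All numbers are invented for illustration; the point is the structure:
# eating now is a certain small win, while abstaining is a certain small
# loss plus an uncertain larger payoff that only materializes if today's
# choice really does predict every future choice.

def expected_utility(outcomes):
    """Expected utility of a list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Eating the donut: a definite immediate win.
eat_now = expected_utility([(1.0, 1.0)])

# Skipping it: a definite immediate loss, plus a big long-term win
# only if today's choice binds future-you (probability unknown -- the
# 0.2 below is a pure guess, which is exactly the problem).
p_binds_future = 0.2
skip_now = expected_utility([(1.0, -0.5), (p_binds_future, 5.0)])

print(f"eat now:  {eat_now:.2f}")
print(f"skip now: {skip_now:.2f}")
```

With these particular guesses, eating wins; nudge `p_binds_future` up past 0.3 and skipping wins instead, which is the sense in which the whole strategy hinges on how predictable you are to yourself.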
LauraABJ
I know that feeling, but I don’t know how conscious it is. Basically, when the outcome matters in a real immediate way and is heavily dependent on my actions, I get calm and go into ‘I must do what needs to be done’ mode. When my car lost traction in the rain and spun on the highway, I probably saved my life by reasoning how to best get control of it, pumping the brake, and getting it into a clearing away from other vehicles/trees, all within a time frame that was under a minute. Immediately afterwards the thoughts running through my head were not ‘Oh fuck, I could have died!’ but ‘How could I have handled that better?’ and ‘Oh fuck, I think the car is trashed.’ It was only after I climbed out of the car that I realized I was physically shaking.
Likewise, when a man collapsed at synagogue after most people had left (there were only 6 of us), and hit his head on the table, leaving a not unimpressive pool of blood on the floor, I immediately went over to him, checked his vitals, and declared that someone should call an ambulance. The other people just stood around looking dumbfounded, and it turned out the problem was that no one had a cell-phone on Saturday, so I called and was already giving the address by the time the man’s friend realized there was something wrong and began screaming.
Doing these things did not feel like a choice. They were the necessary next action and so I did them. Period. I don’t know how to describe that. “Emergency Programming”?
Ok, folding a fitted sheet is really fucking hard! I don’t think that deserves to be on that list, since it really makes no difference whatsoever in life whether you properly fold a fitted sheet or just kinda bundle it up and stuff it away. Not being able to deposit a check, mail a letter, or read a bus schedule, on the other hand, can get you in trouble when you actually need to. Here’s to not caring about linen care!
That’s kind of my point—it is a utility calculation, not some mystical ur-problem. TDT-type problems occur all the time in real life, but they tend not to involve ‘perfect’ predictors, but rather other flawed agents. The decision to cooperate or not cooperate is thus dependent on the calculated utility of doing so.
“I think this is different from the traditional Newcomb’s problem in that by the time you know there’s a problem, it’s certainly too late to change anything. With Newcomb’s you can pre-commit to one-boxing if you’ve heard about the problem beforehand.”
Agreed. It would be like opening the first box, finding the million dollars, and then having someone explain Newcomb’s problem to you as you consider whether or not to open the second. My thought would be, “Ha! Omega was WRONG!!!! ” laughing as I dove into the second box.
edit: Because there was no contract made between TDT agents before the first box was opened, there seems to be no reason to honor that contract, which was only drawn up afterwards.
Ok, so as I understand timeless decision theory, one wants to honor the precommitments that one would have made if the outcome actually depended on the answer, regardless of whether or not the outcome actually depends on the answer. The reason for this seems to be that behaving as a timeless decision agent makes your behavior predictable to other timeless decision theoretical agents (including your future selves), and therefore big wins can be had all around for all, especially when trying to predict your own future behavior.
So, if you buy the idea that there are multiple universes, and multiple instantiations of this problem, and you somehow care about the results in these other universes, and your actions indicate probabilistically how other instantiations of your predicted self will act, then by all means, One Box on problem #1.
However, if you do NOT care about other universes, and believe this is in fact a single instantiation, and you are not totally freaked out by the idea of disobeying the desires of the creator just revealed to you (or actually get some pleasure out of the idea), then please Two Box. You as you are in this universe will NOT unexist if you do so. You know that going into it. So, calculate the utility you gain from getting a million dollars this one time vs the utility you lose from being an imperfect timeless decision theoretical agent. Sure, there’s some loss, but at a high enough payout, it becomes a worthy trade.
I think Newcomb’s problem would be more interesting if the 1st box contained 1⁄2 million and the 2nd box contained 1 million, and Omega was only right, say, 75% of the time… See how fast answers start changing. What if Omega thought you were a dirty two-boxer and put no money in box B? Then you would be screwed if you one-boxed! Try telling your wife that you made the correct ‘timeless decision theoretical’ answer when you come home with nothing.
This is a truly excellent post. You bring the problem that we are dealing with into a completely graspable inferential distance and set up a mental model that essentially asks us to think like an AI and succeeds. I haven’t read anything that has made me feel the urgency of the problem as much as this has in a really long time...
Yes.
This is true. We were (and are) in the same social group, so I didn’t need to go out of my way for repeated interaction. Had I met him once and he failed to pick up my signals, then NO, we would NOT be together now… This reminds me of a conversation I had with Silas, in which he asked me, “How many dates until....?” And I stared at him for a moment and said, “What makes you think there would be a second if the first didn’t go so well?”
Self help usually fails because people are terrible at identifying what their actual problems are. Even when they are told! (Ahh, sweet, sweet denial.) As a regular member of the (increasingly successful) OB-NYC meetup, I have witnessed a great deal of ‘rationalist therapy,’ and frequently we end up talking about something completely different from what the person originally asked for therapy for (myself included). The outside view of other people (preferably rationalists) is required to move forward on the vast majority of problems. We should also not underestimate the importance of social support and social accountability in general as positive motivating factors. Another reason that self-help might fail is that the people reading these particular techniques are trying to help themselves by themselves. I really hope others from this site take the initiative in forming supportive groups, like the one we have running in NYC.
You are very unusual. I love nerds too, and am currently in an amazing relationship with one, but even I have my limits. He needed to pursue me or I wouldn’t have bothered. I was quite explicitly testing, and once he realized the game was on, he exceeded expectations. But yeah, there were a couple of months there when I thought, ‘To hell with this! If he’s not going to make a move at this point, he can’t know what he’s doing, and he certainly won’t be any good at the business...’
Are you intending to do this online or meet in person? If you are actually meeting, what city is this taking place in? Thanks.
I agree that these virtue ethics may help some people with their instrumental rationality. In general I have noticed a trend at lesswrong in which popular modes of thinking are first shunned as being irrational and not based on truth, only to be readopted later as being more functional for achieving one’s stated goals. I think this process is important, because it allows one to rationally evaluate which ‘irrational’ models lead to the best outcome.
It seems that one way society tries to avoid the issue of ‘preemptive imprisonment’ is by making correlated behaviors crimes. For example, a major reason marijuana was made illegal was to give authorities an excuse to check the immigration status of laborers.
Dear Tech Support, Might I suggest that the entire Silas-Alicorn debate be moved to some meta-section. It has taken over the comments section of an instrumentally useful post, and may be preventing topical discussion.
I have always been curious about the effects of mass-death on human genetics. Is large scale death from plague, war, or natural-disaster likely to have much effect on the genetics of cognitive architecture, or are outcomes generally too random? Is there evidence for what traits are selected for by these events?
Most people commenting seem to be involved in science and technology (myself included), with a few in business. Are there any artists or people doing something entirely different out there?
To answer the main question, I am an MD/PhD student in neurobiology.
Aww, this made my night! Welcome to all!
Sure, one can always look at the positive aspects of reality, and many materialists have even tried to put a positive spin on the inevitability of death without an afterlife. But it should not be surprising that what is real is not always what is most beautiful. There are a panoply of reasons not to believe things that are not true, but greater aesthetic value does not seem to be one of them. There is an aesthetic value in the idea of ‘The Truth,’ but I would not say that this outweighs all of the ways in which fantasy can be appealing for most people. And the ‘fantasies’ of which I am speaking are not completely random untruths, like “Hey, I’m gonna believe in Hobbits, because that would be cool!”, but rather ideas that spring from the natural emotional experiences of humanity. They feel correct. Even if they are not.
Thank you for saying this outright. I was appalled by Scott’s lack of epistemic rigor and how irresponsible he was at using his widely-read platform and trust as a physician to fool people into thinking cutting out a major organ has very little risk. Maybe he really did just fool himself, but I don’t think that is an excuse when your whole deal is being the guy with good epistemics who looks at medical research. A comment he made later about guilting 40,000 randomly selected Americans into donating indicates clearly that he has an Agenda. He does not have your best interests at heart at all and thinks this is obligatory and not supererogatory. If people understand that there are large, not necessarily quantified risks here, and still want to donate, then go right ahead. I think you are right that this is more about purifying themselves through self-sacrifice than it is about actually improving the world, but hey, a lot of people seem to report that donation has long-term improved their mental well-being. What I object to is minimizing the risks and guilting people who second-guess the BS data. People really need to go into this with their eyes open and make this choice for themselves. I don’t think Scott’s article is written in good faith.
And what effect would receiving this letter have on any of your patients Mr. Scott? Do you think they will be better off? Or should we convince suicidal people to stick around because they have so many useful organs? Hey, don’t feel like you’re a burden on your parents—they might need your kidney one day!