Well, now he has another reason not to change his mind. Seems unwise, even if he’s right about everything.
What does “an action that harms another agent” mean? For instance, if I threaten not to give you a chicken unless you give me $5, does “I don’t give you a chicken” count as “a course of action that harms another agent”? Or does it have to be an active course, rather than an act of omission?
It’s not blackmail unless, given that I don’t give you $5, you would be worse off, CDT-wise, not giving me the chicken than giving me the chicken. Which is to say, you really want to give me the chicken, but you’re threatening to withhold it because you think you can make $5 out of it. If I were a Don’t-give-$5-bot, or just broke, you would have no reason to threaten to withhold the chicken. If you don’t want to give me the chicken, but are willing to do so if I give you $5, that’s just normal trade.
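Here’s a minimal sketch of that test, with the payoff numbers and function names invented purely for illustration: it’s blackmail iff, given that no $5 is paid, the threatener does worse by withholding the chicken than by handing it over anyway.

```python
# Toy CDT-style payoffs to the chicken-holder; the numbers are invented.
# Blackmail test: given that no $5 is paid, is carrying out the threat
# (withholding the chicken) worse for the threatener than giving it anyway?

def is_blackmail(payoff_withhold_unpaid, payoff_give_unpaid):
    """True iff the threat is costly to execute and so exists only
    to extract payment, rather than reflecting a genuine preference."""
    return payoff_withhold_unpaid < payoff_give_unpaid

# Blackmail: I'd really rather hand over the chicken (3 > 1), but I
# threaten to withhold it because I think I can make $5 out of it.
print(is_blackmail(payoff_withhold_unpaid=1, payoff_give_unpaid=3))  # True

# Ordinary trade: I genuinely prefer keeping the chicken unless paid (2 > 1).
print(is_blackmail(payoff_withhold_unpaid=2, payoff_give_unpaid=1))  # False
```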
“Wanna see something cool?”
Bob Dylan’s new album (“Tempest”) is perfect. At the time of posting, you can listen to it free on the iTunes Store. I suggest you do so.
On another note, I’m currently listening to all the Miles Davis studio recordings and assembling my own best-of list. It’ll probably be complete by next month, and I’ll be happy to share the playlist with anyone who’s interested.
Thomas Bergersen is just wonderful. Also, I’ve been listening to a lot of Miles Davis (I’m always listening to a lot of Miles Davis, but I haven’t posted in one of these threads before). I especially recommend In a Silent Way.
Murakami is still the only currently living master of magical realism.
Salman Rushdie. Salman Rushdie Salman Rushdie Salman Rushdie. Salman Rushdie.
If you haven’t read much other Italo Calvino, “Invisible Cities” is really, really, really great.
I have to say, as a more-or-less lifelong fan of Oscar Wilde (first read “The Happy Prince” when I was eight or nine), that the ending to Earnest is especially weak. I like the way he builds his house of cards in that play, and I like the dialogue, but (and I think I probably speak for a lot of Wilde fans here) the way he knocks the cards down really isn’t all that clever or funny. For a smarter Wilde play, see “A Woman of No Importance”, although his best works are his children’s stories, “The Picture of Dorian Gray”, and “The Ballad of Reading Gaol” (although it is not, in fact, the case that “Every man kills the thing he loves”).
(Also I should mention that I recently reread “The Code of the Woosters” and laughed myself inside-out.)
You sure about this?
Nope, not sure at all.
I don’t think that question’s going to give you the information you want—when in the last couple thousand years, if Jews had wanted to stone apostates to death, would they have been able to do it? The diasporan condition doesn’t really allow it. I think Christianity really is the canonical example of the withering away of religiosity—and that happened through a succession of internal revolutions (“In Praise of Folly”, Lutheranism, the English Reformation, etc.) which themselves happened for a variety of reasons, not all pure or based in rationality (Henry VIII’s split with Rome, for example), but which had the effect of demystifying the church and thereby shrinking the domain of its influence. I think. Although it’s hard to interpret the Enlightenment as a movement internal to Christianity, so this only gets you so far, I suppose.
I agree with pretty much everything you’ve said here, except:
You only cooperate if you expect your opponent to cooperate if he expects you to cooperate ad nauseam.
You don’t actually need to continue this chain—if you’re playing against any opponent which cooperates iff you cooperate, then you want to cooperate, even if that opponent would also cooperate against someone who cooperated no matter what. So your statement is also true without the “ad nauseam” (provided the opponent would defect if you defected).
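A quick sketch of that point, using conventional Prisoner’s Dilemma payoffs (the numbers are my own, purely illustrative): against a cooperate-iff-you-cooperate opponent, cooperating simply wins, with no infinite regress required.

```python
# One-shot Prisoner's Dilemma with conventional payoffs (illustrative):
# mutual cooperation 3, mutual defection 1, sucker 0, temptation 5.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def mirror_opponent(my_move):
    """An opponent which cooperates iff I cooperate."""
    return 'C' if my_move == 'C' else 'D'

for my_move in ('C', 'D'):
    print(my_move, PAYOFF[(my_move, mirror_opponent(my_move))])
# C -> 3, D -> 1: cooperation wins outright; no need to recurse on
# "he expects me to expect him to expect..."
```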
What sort of examples can you bring up of custom marital contracts that would make people scream in horror? My guess is that people would generally feel queasy about allowing legal enforcement of what looks like slavish or abusive relationships. I think this would be a genuine cause for concern, not because I don’t think that people should be able to enter whatever relationships please them in principle, but because in practice I’m concerned about people being coerced into signing contracts harmful to themselves. Not sure where I’d draw the line exactly; this is probably a Hard Problem.
Remember that “enforcing contracts” could mean two things. It could mean that the government steps in and makes the parties do what they said they would—it keeps whipping them until they follow through. It could also mean punishing a party who breaches the contract for the damage done to the other party. For example, in a world in which prostitution is legal, X proposes to pay Y for sex. Y accepts. X hands over the money. Y refuses to have sex with X. The horrific version of this is that the government comes in and “enforces” the contract… by holding down Y and, well, yeah. The alternative is that the government comes in, sees that Y has taken money from X by fraud, and punishes Y the same way it would punish any other thief. The second option is, I think, both more intuitive and less massively disturbing.
Thank you. I had expected the bottom to drop out of it somehow.
EDIT: Although come to think of it, I’m not sure the objections presented in that paper are so deadly after all if you take TDT-like considerations into account (i.e. there would not be a difference between “kill 1 person, prevent 1000 mutilations” + “kill 1 person, prevent 1000 mutilations” and “kill 2 people, prevent 2000 mutilations”). Will have to think on it some more.
Can anyone explain what goes wrong if you say something like, “The utility of my terminal values increases asymptotically (with diminishing marginal utility), and u(Torture) approaches a much higher asymptote than u(Dust speck)” (or indeed whether it goes wrong at all)?
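For concreteness, here is one way the suggestion could be cashed out (the functional form and the saturation constants are my own invention): give both disutilities an exponential approach to a bound, with torture’s bound far higher than the dust-speck bound.

```python
import math

def u_specks(n):
    """Total disutility of n dust specks; approaches an asymptote of 10."""
    return 10.0 * (1 - math.exp(-n / 1e9))

def u_torture(n):
    """Total disutility of n instances of torture; asymptote of 1e6."""
    return 1e6 * (1 - math.exp(-n / 10.0))

# Even an absurd number of specks never crosses one torture:
print(u_specks(10**100))  # ~10.0, pinned at the asymptote
print(u_torture(1))       # ~95162.6
```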
That last sentence didn’t make sense to me when I first looked at this. Think you must mean “worse”, not “better”.
This variation of the problem was invented in the follow-up post (I think it was called “Sneaky strategies for TDT” or something like that):
Omega tells you that earlier he flipped a coin. If the coin came down heads, he simulated a CDT agent facing this problem; if it came down tails, he simulated a TDT agent facing this problem. In either case, if the simulated agent one-boxed, there is $1000000 in Box B; if it two-boxed, Box B is empty. In this case TDT still one-boxes (a 50% chance of $1000000 dominates a 100% chance of $1000), and CDT still two-boxes (because that’s what CDT does). Even though both agents have an equal chance of being simulated, CDT out-performs TDT (average payoffs of 501000 vs. 500000): CDT takes advantage of TDT’s prudence, and TDT suffers for CDT’s lack of it. Notice also that TDT cannot do better by behaving like CDT (both would then get payoffs of 1000). This shows that the class of problems we’re concerned with is not so much “fair” vs. “unfair”, but more like “those problems on which the best I can do is not necessarily the best anyone can do”. We can call it “fairness” if we want, but it’s not as though Omega is discriminating against TDT in this case.
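The averages can be checked directly; the dollar amounts come from the problem statement above, while the function and variable names are just illustrative.

```python
BOX_A, BOX_B = 1000, 1000000  # amounts from the problem statement

def average_payoff(agent):
    total = 0
    for simulated in ('CDT', 'TDT'):     # Omega's fair coin
        b_full = (simulated == 'TDT')    # only the simulated TDT one-boxes
        if agent == 'TDT':               # the real TDT agent one-boxes
            total += BOX_B if b_full else 0
        else:                            # the real CDT agent two-boxes
            total += BOX_A + (BOX_B if b_full else 0)
    return total / 2

print(average_payoff('TDT'))  # 500000.0
print(average_payoff('CDT'))  # 501000.0
```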
Wait a minute, what exactly do you mean by “you”? TDT, or “any agent whatsoever”? If it’s TDT alone, why? If I read you correctly, you already agree that it’s not because Omega said “running TDT” instead of “running WTF-DT”. If it’s “any agent whatsoever”, then are you really sure the simulated and real problems aren’t actually the same? (I’m sure they aren’t, but, just checking.)
Well, no, this would be my disagreement: it’s precisely because Omega told you that the simulated agent is running TDT that only a TDT agent could be the simulation; the simulated and real problems are, for all intents and purposes, identical (Omega doesn’t actually need to put a reward in the simulated boxes, because he doesn’t need to reward the simulated agent, but both problems appear exactly the same to the simulated and the real TDT agents).
Well, in the problem you present here TDT would two-box, but you’ve avoided the hard part of the problem from the OP, in which there is no way to tell whether you’re in the simulation or not (or at least there is no way for the simulated you to tell) unless you’re running some algorithm other than TDT.
“Father figure” seems to me to permit either position, “father” not so much. It’s always troublesome when someone declares that you can only be properly impartial by agreeing with them.
Shakespeare is good tho.