Is that “I’m sure I’ll manage somehow” because thinking about the problem is uncomfortable, so your brain is creating a comforting excuse, or because you’ve known yourself to solve similarly difficult problems in the past? I ask because you don’t want to get burnt later if it’s the former, and I’ve personally had experiences (read: problems) with a long-distance relationship.
Two TDT players seem to me to have three plausible outcomes. This comes from my admittedly inexperienced intuitions, not much rigorous math. The first two plausible points that occurred to me are: 1) both players choose C,Y with certainty, or 2) they sit at exactly the equilibrium for p1, giving him an expected payout of 3 and p2 an expected payout of 0.5. Both of these improve on the global utility payout of 3 that’s gotten if p1 just chooses A (giving 6 and 3.5, respectively), which is a positive thing, right?
The argument that supports these possibilities isn’t unfamiliar to TDT. p2 does not expect to be given a choice, except in the cases where p1 is using TDT; therefore she has the choice of Y, with a payout of 0, or of never having been given a chance to choose at all. Both of these possibilities have no payout, so p2 is neutral about which choice to make, and choosing Y makes some sense. Alternatively, p1 has to choose between A for a sure 3 or C for a 0.5 chance of 6, which have the same expected payout. C, however, gives p2 0.5 more utility than she’d otherwise get, so it makes some sense for p1 to pick C.
Alternatively, and what occurred to me last, both agents may have some way to split their “profit” over Classical Decision Theory equally: for however much more utility than 3 p1 gets, p2 gets the same amount. This payoff point (p1 - 3 = p2) does exist, but I’m not sure where it is without doing more math. Is this a well-formulated game-theoretic concept? I don’t know, but it fits my idea of “fairness” and the kind of point two well-formulated agents should converge on.
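For concreteness, here’s a tiny Python check of the arithmetic above. The original game’s full payoff matrix isn’t reproduced in this comment, so only the numbers quoted here are used, and the 0.5 chance of p2 playing Y at the equilibrium is an assumption carried over from the “0.5 chance of 6” figure.

```python
# Quick arithmetic check of the payouts quoted above. Only numbers named in
# the comment are used; p_yes is the assumed chance that p2 plays Y at the
# equilibrium, inferred from the "0.5 chance of 6" figure.

p_yes = 0.5          # assumed probability that p2 plays Y at the equilibrium

ev_A = 3.0           # p1's sure payout for choosing A
ev_C = p_yes * 6.0   # p1's expected payout for choosing C

print(f"p1: A gives {ev_A}, C gives {ev_C}")  # equal, so p1 is indifferent

total_if_A = 3.0         # global utility when p1 just takes A
total_at_eq = 3.0 + 0.5  # p1's 3 plus p2's expected 0.5 at the equilibrium
total_at_CY = 6.0        # global utility when both commit to C,Y

print(f"global utility: A={total_if_A}, equilibrium={total_at_eq}, C/Y={total_at_CY}")
```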
Where did the brain diagram on the front page go? Is it just an error on my end, or has that actually been removed from the site?
Here’s something that might net you successes of a similar kind: rejection therapy. It’s much less official than it might immediately sound, since it’s an entirely self-imposed challenge. You may find yourself asking for, and getting, a lot more than you realized you could, as well as becoming more comfortable doing it.
It’s easy to forget that first-person shooters already have plenty of game modes with strictly defined victory conditions that make matches fun to play and watch. Team Fortress 2 has Arena mode, which seems to have very appropriate win conditions for a war game: teams must either eliminate each other or capture a central point that unlocks after a set amount of time. It discourages fleeing and hiding when your team is mostly eliminated, because you’ll lose anyway once the point unlocks. In general, all you should need is one clearly defined goal for at least one team, and a time limit.
I really like HPMoR-style war games for a rationalist sport myself, using Airsoft or paintball, whichever turns out safest. One fun possibility is real-life Trouble in Terrorist Town, which has the fun rationalist twist of trying to identify who the traitor(s) might be. I find the game incredibly fun to watch, which is a positive for a sport. Larger-scale games with fewer betrayal mechanics might be more fun/interesting as well. “War games” is a very versatile specification for a sport, but it still allows for consistent rules within specific tournaments or leagues.
While I hate to say this, the numbers are much less important than the explanation of what they mean. I thought “lifetime of debt” and then made up costs that sounded realistic-ish. The world-building is probably pretty inconsistent. That is a Bad Author thing to do, but it’s super common in the majority of popular stories (I’m looking at you, Galleons from Harry Potter).
This is correct. Betting, as a policy, helps distinguish between Orin(correct) and Orin(wrong), but it’s really only useful for eliminating Orin(spy), because it’s a novel method that the King expects spies to be unprepared for, and one that’s easily investigated if circumvented.
Imagine: if Orin is wrong and yet mysteriously has all his debts repaid and his shop repurchased shortly after being punished, some eyebrows would be raised.
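To make the “eliminating Orin(spy)” claim concrete, here’s a hedged Bayesian sketch. Every prior and likelihood below is made up purely for illustration; only the three hypotheses (correct, spy, mistaken) come from the comments.

```python
# A sketch of the update the comment gestures at: how willingness to bet
# shifts the King's beliefs over Orin being correct, a spy, or mistaken.
# All numbers here are invented for illustration only.

priors = {"correct": 0.3, "spy": 0.4, "mistaken": 0.3}  # assumed priors

# Assumed likelihoods of Orin staking his own livelihood on the prediction
# under each hypothesis; the idea is that a spy is the least likely to do so.
p_bet_given = {"correct": 0.8, "spy": 0.1, "mistaken": 0.5}

unnormalized = {h: priors[h] * p_bet_given[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: v / total for h, v in unnormalized.items()}

for h, p in posterior.items():
    print(f"P({h} | bets) = {p:.2f}")
# With these made-up numbers, most of the mass on "spy" drains away, while
# "correct" vs. "mistaken" shifts only modestly -- matching the comment.
```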
Gwern, I happen to agree with most of what you’ve said, had this been written in regard to x-risks. It is in fact irrelevant to UFAI; it was mostly an exercise in a) practicing writing, and b) working through some intuitions about betting/prediction markets. I wrote it for LW because I assumed it would be enjoyed, but not really learned from (hence Discussion, not Main). A re-write would explore more thoroughly and explicitly the difference between Orin being correct, a spy, or mistaken, and how his bet changes those probabilities.
I suppose it makes an ok-ish example of “people take their money more seriously than their beliefs, and betting helps fix that,” which I think is an important lesson in general.
Fixed, thank you. I’d hate to think the King turned himself into a waterway.
A Story of Kings and Spies
The Wayback Machine, of course. Thanks.
I suppose I lose a rationalist point for failing to use a tool I’m already familiar with to solve my problem.
Connected to this: A preemptive favor is more likely to result in later requests (even if larger than the initial favor) being fulfilled, but the end result may or may not be a more positive opinion of you. The abstract of this paper seems to indicate increased liking of a stranger that does this, but paywalls and general laziness prevent me from getting a more comprehensive idea of what can happen.
This is a nice comment. It’s a useful frame of reference, and I especially like it because it jibes well with the intuitions I’ve developed since I started studying Economics, and probably with my identity as a Neat Person and someone who enjoys experiences over things.
As a general point, the “off topic” complaint is used too much to shut down what I think would be valuable contributions to the site. If we’re only ever allowed to talk about rationality, but never to demonstrate ourselves using it, then we’ve made a community-crafting mistake.
For everyone seeing this in the “recent comments” section, does there exist a record of this comment? It seems like it was really useful and it’s a shame it was completely nuked.
It’s clear that LessWrong disagrees with you, but in the spirit of challenging my assumptions I’m asking you for any substantive sources that support your claim.
Or less substantively, where did you hear/why do you believe that?
I think something that comes close to what you’re trying to do is Rational Poker, which uses poker to train your bias-overcoming skills. Specifically, the “This is what 5% feels like” exercise. The idea is that you combine your explicitly calculated chance of winning a hand with your gut feeling enough times that they begin matching up; you’ll eventually understand what 5%, 10%, 50%, or 90% actually feel like from the inside. Unfortunately, I’m uncertain how well this gut feeling generalizes, so using it to determine probabilities outside the areas you’ve trained it on may be less successful.
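For what it’s worth, here’s a minimal sketch of the bookkeeping that exercise implies, under the assumption that you log your gut estimate and the outcome of each hand and then bucket by estimate. The data and names are purely illustrative.

```python
# Minimal calibration-tracking sketch for the "This is what 5% feels like"
# exercise: log (gut_estimate, won) for each hand, then check whether hands
# that "felt like 5%" actually win about 5% of the time.

from collections import defaultdict

# Illustrative logged hands; in practice these come from real play.
hands = [(0.05, False), (0.05, False), (0.05, True), (0.10, False),
         (0.50, True), (0.50, False), (0.90, True), (0.90, True)]

buckets = defaultdict(list)
for gut, won in hands:
    buckets[gut].append(won)

for gut in sorted(buckets):
    outcomes = buckets[gut]
    actual = sum(outcomes) / len(outcomes)
    print(f"felt like {gut:.0%}: won {actual:.0%} of {len(outcomes)} hands")
```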
This is beautiful and really useful seeming. I’m happy it exists, so thanks for making it.
I have reduced my discomfort with talking to anyone about anything, especially requests for help or information. Instead of avoiding talking to people like my parents, teachers, or businesses due to some odd, misplaced anxiety, I’ve successfully started noticing this pattern and purposefully overcoming it.
Examples include:
Talking to my family about wanting to move to Australia, instead of putting it off until the last minute.
Calling several local banks and asking about shadowing opportunities, something unfamiliar to both me and the banks.
It’s not incredible, but it feels nice to be making progress in the right direction.
I’ve not personally finished my own arrangements, but I’ll likely be using whole life insurance of some kind. I do know that Rudi Hoffman is an agent well recommended by people who’ve gone the insurance route, so talking to him will likely give you a much better idea of what choices people make (a small warning: his site is not the prettiest thing). You could also contact the people recommended on Alcor’s Insurance Agents page, if you so desire.