What’s wrong with just using this algorithm to establish ratios between bets, then scaling up to meet whichever limit is hit first?
In your example, it’d be scaled up to 5.12 against 25.
Neither probability should be below 50%: you take the probability that your opinion is the right one, not the probability that the proposition is true or false.
In your example B would be betting against his beliefs, thus the negative result.
The right calculation: A = 0.6, B = 0.7
A pays: (A ^ 2 - (1 - B) ^ 2) * 25 = (0.36 - 0.09) * 25 = 6.75
B pays: (B ^ 2 - (1 - A) ^ 2) * 25 = (0.49 - 0.16) * 25 = 8.25
Edit:
actually, it’s sufficient that A and B sum to over 1. Since you can always negate the condition, the right calculation here is:
A = 0.4
B = 0.7
A pays: (A ^ 2 - (1 - B) ^ 2) * 25 = (0.16 - 0.09) * 25 = 1.75
B pays: (B ^ 2 - (1 - A) ^ 2) * 25 = (0.49 - 0.36) * 25 = 3.25
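For anyone who wants to check the arithmetic, here is a minimal sketch of the payment rule used in both calculations above (the function name and signature are mine, not from the original write-up; `limit` is the 25 from the example):

```python
def payment(p_self: float, p_other: float, limit: float) -> float:
    """One bettor's stake: (own probability)^2 - (1 - other's probability)^2, times the limit."""
    return (p_self ** 2 - (1 - p_other) ** 2) * limit

# First calculation (A = 0.6, B = 0.7):
print(payment(0.6, 0.7, 25))  # A pays ~6.75
print(payment(0.7, 0.6, 25))  # B pays ~8.25

# Corrected calculation after negating the condition (A = 0.4, B = 0.7):
print(payment(0.4, 0.7, 25))  # A pays ~1.75
print(payment(0.7, 0.4, 25))  # B pays ~3.25
```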
Also, apparently I can’t use the retract button the way I wanted to use it.
Well, it’s none of anyone else’s business, so I don’t see how other people being there is relevant.
If you mean it in the sense of “don’t settle for someone who isn’t going to help you with kids, no matter how good a match you otherwise are”… “never settle” is a brag.
The guy doesn’t want children, but he doesn’t mind having children with the woman as long as it’s not too bothersome for him. The woman either really wants children, in which case this arrangement is to her benefit, or does not want children that badly, in which case they don’t have children.
Huh!
Now I’m even more confused. How can my answer be useful if they don’t know how I interpret the question? Esp. since my answers are pretty much opposite depending on the interpretation...
My bad for not finding that comment. I skimmed through the thread, but didn’t see it.
I’m confused by the CFAR questions, in particular the last four. Are they using “you” to mean ‘the person filling out this survey’, or the general “you”, as in ‘any person’? “You can always change basic things about the kind of person you are” sounds like the general you. “You are a certain kind of person, and there’s not much that can be done either way to really change that” sounds like the specific you.
Help?
I think if you asked those people, they would also say yes, that they are thinking about ways of solving their problems.
Not necessarily. They might say it’s too big to solve, or “it’s not really a big deal” when it obviously is, or that it’s not their responsibility to solve, or any of a multitude of other excuses that validate not changing.
That does sound like a good idea. Browsing the google groups, the next occasion seems to be the CZE outing on 1. Sep. (http://lesswrong.com/r/discussion/lw/ie0/meetup_comfort_zone_expansion_outing_london/).
Are there requirements other than ‘find this thread’?
But there is a significant difference between taking a medical formula under a doctor’s supervision and mixing up the most common nutrition ingredients and claiming the result is a cure-all-be-all food. Didn’t the guy forget to include iron in his first mixture?
Another ‘Soylent’ equivalent I know of is Sustagen Hospital Formula.
As a 1.4999999999999 boxer (i.e. take a quantum randomness source producing 0 or 1, take both boxes on 0, one box on 1, and one box if something else happens), I don’t think scenario C is convincing.
The crucial property of B is that as your thoughts change, the contents of the box change. The causal link goes forward in time. Thus the right decision is to take one box, as by the act of taking one box you make it contain the money.
In C, however, there is no such causal link. The oracle either put money in both boxes or it did not. Your decision now cannot possibly affect that state. So you cannot base your decision in C on its similarity to B.
A good reason to one box, in my opinion, is that before you encounter the boxes it is clearly preferable to commit to one boxing. This is of course not compatible with taking two boxes when you find them (because the oracle seems to be perfect). So it is rational to make yourself the kind of person that takes one box (because you know this brings you the best benefit, short of using the randomness trick).
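To see where the 1.4999999999999 in the name above comes from, a quick sketch with a made-up failure probability (eps is purely an assumption for illustration):

```python
# The {0, 1} quantum source decides; with tiny probability eps it reads
# "something else", which also defaults to one-boxing.
eps = 1e-13
p_two_box = 0.5 * (1 - eps)   # source reads 0 -> take both boxes
p_one_box = 1 - p_two_box     # source reads 1, or anything odd -> one box
print(2 * p_two_box + 1 * p_one_box)  # ~1.4999999999999 boxes on average
```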
It’s from Terry Pratchett’s Discworld series. http://en.wikipedia.org/wiki/Havelock_Vetinari
Lord Vetinari also has a strange clock in his waiting-room. While it does keep completely accurate time overall, it sometimes ticks and tocks out of sync (example: “tick, tock… ticktocktick, tock...”) and occasionally misses a tick or tock altogether, which has the net effect of turning one’s brain “into a sort of porridge”. (Feet of Clay, Going Postal).
I think you underestimate the power of the GHD. If Hermione really believed she had to kill Draco or he would, for example, murder every student in Hogwarts the next day, I’m pretty sure she would cold-bloodedly kill him.
Spells that extract the history of spells cast with a wand are canon, afaik (or was that just the most recent spell?)
I would expect they were cast on Hermione’s wand and the usage was confirmed.
Lack of sabotage is obviously evidence for a fifth column trying to lull the government, given that the fifth column exists, since the opposite, sabotage occurring, would be very strong evidence against that.
However, lack of sabotage is still much stronger evidence that the fifth column does not exist.
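A toy Bayes update makes both points visible at once; every number below is a made-up assumption, not from the original discussion:

```python
p_column = 0.5           # prior: a fifth column exists
p_quiet_if_column = 0.7  # a column trying to lull the government holds its fire
p_quiet_if_none = 1.0    # no fifth column -> certainly no sabotage

# Posterior that a fifth column exists, given that no sabotage was observed:
posterior = (p_quiet_if_column * p_column) / (
    p_quiet_if_column * p_column + p_quiet_if_none * (1 - p_column)
)
print(posterior)  # ~0.41, i.e. lack of sabotage lowered the 0.5 prior
```

Within the fifth-column hypothesis, silence is exactly what a lulling column predicts, yet the overall update still moves towards the column not existing.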
Oh, I see. You probably already understood that, but I’ll write it up for anyone else who didn’t initially grok the process (like me).
Intuitively, the original algorithm incentivises people to post their true estimates by scaling the opponent’s investment with your declared odds, so that it doesn’t pay for you to artificially lower your estimate. The possible wins will be much lower, disproportionately to your investment, if you underestimate your odds. Conversely, the possible losses will not be covered by increased wins if you overestimate your chances.
It does not work if you scale the bets. Say A believes he wins the bet half the time, while B believes he wins 90% of the time. Assume B is honest and both players set their limit at 1 (for ease of calculation).

With A declaring 50%, the investments would be:

A pays: (0.5 ^ 2 - (1 - 0.9) ^ 2) * 1 = 0.25 - 0.01 = 0.24
B pays: (0.9 ^ 2 - (1 - 0.5) ^ 2) * 1 = 0.81 - 0.25 = 0.56

With the original amount calculation, that gives A the expected value

E(A) = 0.5 * 0.56 - 0.5 * 0.24 = 0.16

Whereas with scaled up bets A puts in 0.24 / 0.56 ≈ 0.43 while B gives 1:

E’(A) = 0.5 * 1 - 0.5 * 0.43 ≈ 0.29

With A declaring 20%, the numbers are:

A pays: (0.2 ^ 2 - (1 - 0.9) ^ 2) * 1 = 0.04 - 0.01 = 0.03
B pays: (0.9 ^ 2 - (1 - 0.2) ^ 2) * 1 = 0.81 - 0.64 = 0.17
E(A) = 0.5 * 0.17 - 0.5 * 0.03 = 0.07

While with scaled bets (B = 1, A = 0.03 / 0.17 ≈ 0.18):

E’(A) = 0.5 * 1 - 0.5 * 0.18 ≈ 0.41

Note how E(A) goes down if A lies, but E’(A) goes way up.
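Here is a sketch of that comparison, assuming the winner takes the whole pot and the scaling raises both stakes until the larger one hits the limit of 1; the function names and the pot convention are my assumptions, not part of the original algorithm’s write-up:

```python
def investments(a_declared: float, b_declared: float) -> tuple[float, float]:
    # Stakes from the formula above, with limit 1.
    a = a_declared ** 2 - (1 - b_declared) ** 2
    b = b_declared ** 2 - (1 - a_declared) ** 2
    return a, b

def expected_value_for_a(a_pays: float, b_pays: float, a_true: float = 0.5) -> float:
    # A wins B's stake with probability a_true, loses his own stake otherwise.
    return a_true * b_pays - (1 - a_true) * a_pays

for declared in (0.5, 0.2):                # honest, then lying
    a_pays, b_pays = investments(declared, 0.9)
    scale = 1 / max(a_pays, b_pays)        # scale up until a limit is hit
    print(declared,
          round(expected_value_for_a(a_pays, b_pays), 3),
          round(expected_value_for_a(a_pays * scale, b_pays * scale), 3))
# 0.5 -> E = 0.16, E' ≈ 0.286; 0.2 -> E = 0.07, E' ≈ 0.412
```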