My apologies, I meant a more general “you”, as in “the person who uses this phrase”. It was not directed at you specifically, just the generic you, and you are certainly not the you I meant for “you” to refer to.
Fair enough- I should have chosen a clearer example.
My complaint is that it is either a euphemism for autistic (in which case, just say autistic- if that feels “squicky”, re-evaluate your statement), or it is so vague as to lose all meaning- someone with bipolar disorder is non-neurotypical, but is no more likely to have made these than anyone else.
If you do mean specifically autistic, you may want to broaden your understanding of autism. Autism is not one standard presentation- it can present in many, many ways, including many that would not produce this type of image. The images are indicative of a poor grasp of humor and a poor grasp of the original subject matter, but I do not see an autistic person being any more likely to create them than the general population.
If that’s the case, then I stand by my original point, if not to its extreme conclusion.
Ah- I read the preview version, I think that bit was added later. Thanks :)
Wow- that is former MTG Pro Zvi, one of the best innovators in the game during his time. Awesome to see him involved in something like this.
The biggest horror aspect for me (also from the original) was that (rot13) nal aba-uhzna vagryyvtrapr unf ab punapr. Aba-uhzna vagryyvtrag yvsr trgf znqr vagb pbzchgebavhz, gb srrq gur rire tebjvat cbal fcurer. Vg vf gur gbgny trabpvqr bs rirel aba-uhzna enpr.
I think that is fighting the hypothetical.
That’s possible, but I am not sure how I am fighting it in this case. Leave Omega in place- why do we assume equal probability of Omega guessing incorrectly or correctly, when the hypothetical states he has guessed correctly each previous time? If we are not assuming that, why does CDT treat each option as equally likely, and then proceed to open two boxes?
I realize that decision theory is about a general approach to solving problems- my question is, why are we not including the probability based on past performance in our general approach to solving problems, or if we are, why are we not doing so in this case?
I made a comment earlier this week on a thread discussing the Lifespan Dilemma, and how it appears to untangle it somewhat. I had intended to see if it helped clarify other similar issues, but haven’t done so yet. I would be interested in feedback- it seems possible that I have completely misapplied it in this case.
If in Newcomb’s problem you replace Omega with James Randi, suddenly everyone is a one-boxer, as we assume there is some sleight of hand involved to make the money appear in the box after we have made the choice. I am starting to wonder if Newcomb’s problem is just simple map and territory- do we have sufficient evidence to believe that under any circumstance where someone two-boxes, they will receive less money than a one-boxer? If we table the question of how it is happening, and focus only on the testable probability of whether Randi/Omega is consistently accurate, we can draw conclusions about whether we live in a universe where one-boxing is profitable. Eventually, we may even discover the how, and also the source of all the money that Omega/Randi is handing out, and win. Until then, like all other natural laws that we know but don’t yet understand, we can still make accurate predictions.
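As a rough sketch of what folding the track record into the decision could look like- assuming a rule-of-succession estimate of the predictor’s accuracy (my choice of estimator, not part of the problem) and the usual $1,000,000 / $1,000 box contents:

```python
def expected_values(correct, total):
    p = (correct + 1) / (total + 2)        # Laplace's rule of succession
    one_box = p * 1_000_000                # opaque box is full iff the prediction was right
    two_box = (1 - p) * 1_000_000 + 1_000  # opaque box is full iff the prediction was wrong
    return p, one_box, two_box

for n in (1, 10, 100):                     # n correct predictions out of n observed
    p, one_box, two_box = expected_values(n, n)
    print(f"{n:>3} correct: p~{p:.3f}  one-box ${one_box:,.0f}  two-box ${two_box:,.0f}")
```

The break-even accuracy is only about 50.05%, so even a short perfect track record makes one-boxing the better bet in expectation- which is the sense in which we can table the how and still make accurate predictions.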
Interestingly, I discovered the Lifespan Dilemma due to this post. While not facing a total breakdown of my ability to do anything else, it did consume an inordinate amount of my thought process.
The question looks like an optimal betting problem- you have a limited resource, and need to get the most return from it. According to the Kelly criterion, the optimal fraction of your total bankroll to risk is f* = p - (1-p)/b, where p is the probability of success and b is the net return per unit risked. The interesting thing here is that for very large values of b, the fraction of bankroll to be risked is almost exactly the probability of winning. Assuming a bankroll of 100 units and a 20 percent chance of success, you should bet the same amount whether b = 1 million or b = 1 trillion: 20 units.
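A minimal sketch of that convergence (the helper and variable names are mine, assuming b is the net return per unit risked):

```python
def kelly_fraction(p, b):
    # Standard Kelly fraction f* = p - (1 - p)/b; clamp at 0 for negative-edge bets.
    return max(0.0, p - (1 - p) / b)

p = 0.20
for b in (10, 1_000_000, 1_000_000_000_000):
    bet = kelly_fraction(p, b) * 100  # units out of a 100-unit bankroll
    print(f"b = {b:>16,}: bet {bet:.5f} units")
```

No matter how large b gets, the recommended bet creeps up toward 20 units and never past it.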
Eager to apply this to the problem at hand, I decided to plug in the numbers. I then realized I didn’t know what the bankroll was in this situation. My first thought was that the bankroll was the expected time left- the percent chance of success times the time gained if successful. I think this is the mode of thinking that leads down the garden path- every time you increase your lifespan-if-successful, it feels like you have more units to bet with, which means you are willing to spend more on longer odds.
Not satisfied, I attempted to re-frame the question in terms of money. Stated that way: I have $100, and in 2 hours I will either have $0 or $1 million, with an 80% chance of winning. I could trade my 80% chance at $1 million for a 79% chance of winning $1 trillion. So, now that we are in money, where is my bankroll?
I believe that is the trick- in this question, you are already all in. You have already bet 100% of your bankroll for an 80% chance of winning- in 2 hours, you will know the outcome of your bet. For extremely high values of b, you should have bet only 80% of your bankroll- you are already underwater. Here is the key point- changing the value of b does not change what you should have bet, or even your actual bet- that’s locked in. All you can change is the probability, and you can only make it worse. From this perspective, you should accept no offer that lowers your probability of winning.
My apologies if you felt I was handing out condemnation- it was not my intent at all. As I said, I did not think the reaction I had was the reaction you were aiming for. While upon consideration I don’t think there is any real harm in the OkCupid posting, I was in no way attempting to say we shouldn’t talk about it. I was simply noting that if persuasion is what you are after, there may be a better approach that does not trigger the squick feeling. It is also possible that I am a statistical anomaly in this (although I would say the number of upvotes I have received is probably evidence to the contrary), and I need to re-calibrate somewhere. In any case, it seems I too need to work on my delivery, as my intended message was not accurately received. I was in competitive debate for many years, and have long since separated my dislike of an argument from my feelings about the person making it- one of my faults is that I fall into the typical mind fallacy of assuming everyone else does the same, and then am surprised when someone sees my evaluation of what they are saying as a reflection on them as a person rather than as an evaluation of their argument.
From my perspective, if you are in a place of prestige and you want to avoid damage to your image, hiding your quirks maximizes the chance that they will be discovered in a way that precludes you from controlling how the information is released. If image malpractice is the issue, keeping this out in the open is an inoculation against a more damaging future revelation. The trade-off is that you may lose some credibility up front. Given EY’s eschewing of the “normal” routes to academic success, and the profound strangeness that some of the ideas we take for granted have at first blush to anyone who hasn’t read the Sequences, I don’t think OkCupid is doing much damage.
Finally, I noticed when I first read this that the article gave me the squicks. In trying to compare the feeling to a known quantity, I realized it was analogous to when my religious parents would scandalously tell me of a couple who were “shacking up”. Someone sharing pseudo-private information in a way that does not explicitly make a value judgement certainly makes one implicitly. I rather doubt that was your intention; however, you might want to be aware of the reaction, if it was not intended.
tl;dr: EY’s just this guy, you know?
Eliezer Yudkowsky is what acausal sex feels like from the inside.
Inside Eliezer Yudkowsky’s pineal gland is not an immortal soul, but counterfactual hugging.
So- does the whole problem go away if, instead of trying to deduce what FairBot is going to do with Masquerade, we assume that FairBot is going to assess it as if Masquerade = the current mask? By ignoring the existence of Masquerade in our deduction, we both solve the Gödelian inconsistency and simultaneously ensure that another AI can easily determine that we will be executing exactly the mask we choose.
Masquerade deduces the outcomes of each of its masks, ignoring its own existence, and chooses the best outcome. FairBot follows the exact same process, determines which mask Masquerade is going to use, and then uses that outcome to make its own decision, as if Masquerade were whatever mask it ends up as. I assume Masquerade would check whether it is running against itself, and automatically cooperate if it is, without running the deduction, which would be the other case for avoiding the loop.
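A toy sketch of the structure I mean, with the caveat that nothing here is from the original post: the provability-logic reasoning is replaced by a depth-limited simulation whose base case optimistically cooperates (a crude stand-in for the Löbian proof search), and the bot names and payoffs are purely illustrative.

```python
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def cooperate_bot(opponent, depth):
    return "C"

def defect_bot(opponent, depth):
    return "D"

def fair_bot(opponent, depth):
    # Cooperate iff the opponent, assessed as whatever it presents itself as,
    # would cooperate with FairBot. The depth limit stands in for proof search.
    if depth <= 0:
        return "C"  # optimistic base case, the stand-in for Löb's theorem
    return "C" if opponent(fair_bot, depth - 1) == "C" else "D"

MASKS = [cooperate_bot, defect_bot, fair_bot]

def masquerade(opponent, depth=10):
    # Deduce the outcome of each mask against the opponent, ignoring
    # Masquerade's own existence, then act as the best-scoring mask.
    best = max(MASKS, key=lambda mask: PAYOFF[(mask(opponent, depth),
                                               opponent(mask, depth))])
    return best(opponent, depth)

print(masquerade(fair_bot))    # "C": ends up acting as a cooperative mask
print(masquerade(defect_bot))  # "D": defects against DefectBot
```

Because this Masquerade literally acts as its chosen mask, simulating Masquerade and simulating the mask give the same answer, which is roughly the assumption I am proposing FairBot make.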
What happens if the masks are spawned as subprocesses that are not “aware” of the higher-level process monitoring them? The higher-level process can kill off the subprocesses and spawn new ones as it sees fit, but the mask processes themselves retain the integrity needed for a FairBot to cooperate with itself.
Ahh, that wonderfully embarrassing moment when you realize your small group has been calling Crocker’s rules by the wrong name for almost a year.
A rationalist who doesn’t consider the effects of tone when attempting to effect a change in someone’s thinking is not dealing in reality. There is a reason Crocker’s Rules have to be asked for and agreed to, even among rationalists- we are not built to automatically separate tone from content, and there are times when even the most thoughtful of us are personally vulnerable to a harsh tone. We tend to simplify to “two Bayesians updating on evidence”, but in reality, we have to consider the best way to transmit the message, as well as the outcome of that transmission. Human language is not tightly controlled code- when a change in tone is equivalent to a change in meaning, ignoring tone is the same as ignoring all the parentheses in code.
This is a very fine line to walk, especially in Magic. Finding the places where you could have made better decisions, while understanding which decisions you could not have made better with the information you had at the time, is not an easy task- although at my skill level, it is generally easier to assume I made a poor decision and find it.
So, you no box on Newcomb’s Problem? :)