Hey everyone, I just voted, and so I can see the correct answer. The average is 19.2, so you should choose 17%!
jeremysalwen
Perhaps I am just contrarian in nature, but I took issue with several parts of her reasoning.
“What you’re saying is tantamount to saying that you want to fuck me. So why shouldn’t I react with revulsion precisely as though you’d said the latter?”
The real question is why should she react with revulsion if he said he wanted to fuck her? The revulsion is a response to the tone of the message, not to the implications one can draw from it. After all, she can conclude with >75% certainty that any male wants to fuck her. Why doesn’t she show revulsion simply upon discovering that someone is male? Or even upon finding out that the world population is larger than previously thought, because that implies that there are more men who want to fuck her? Clearly she is smart enough to have resolved this paradox on her own, and posing it to him in this situation is simply being verbally aggressive.
“For my face is merely a reflection of my intellect. I can no more leave fingernails unchewed when I contemplate the nature of rationality than grin convincingly when miserable.”
She seems to be claiming that her confrontational behavior and unsocial values are inseparable from rationality. Perhaps this is only so clearly false to me because I frequent lesswrong.
“If it was electromagnetism, then even the slightest instability would cause the middle sections to fly out and plummet to the ground… By the end of class, it wasn’t only sapphire donut-holes that had broken loose in my mind and fallen into a new equilibrium. I never was bat-mitzvahed.”
This seems to show an incredible lack of creativity (or dare I say it, intelligence), that she would be unable to come up with a plausible way in which an engineer (never mind a supernatural deity) could fix a piece of rock to appear to be floating in the hole in a secure way. It’s also incredible that she would not catch onto the whole paradox of omnipotence long before this, a paradox with a lot more substance.
“The eventual outcome would most likely be a compromise, dependent, for instance, on whether the computations needed to conceal one’s rationality are inherently harder than those needed to detect such concealment.”
Whoah, whoah, since when did cheating and catching it become a race of computation? Maybe an arms race of finding and concealing evidence, but when does computational complexity enter the picture? Second of all, the whole section about the Darwinian arms race makes the (extremely common) mistake of conflating evolutionary “goals” and individual desires. There is a difference between an action being evolutionarily advantageous, and an individual wanting to do it. Never mind the whole confusion about the nature of an individual human’s goals (see http://lesswrong.com/lw/6ha/the_blueminimizing_robot/).
One side point is that the way she presents it (“Emotions are the mechanisms by which reason, when it pays to do so, cripples itself”) is essentially presenting the situation as Newcomb’s Paradox, and claiming that emotions are the solution, since her idea of “rationality” can’t solve it on its own.
“By contrast, Type-1 thinking is concerned with the truth about which beliefs are most advantageous to hold.”
But wait… the example given is not about which beliefs are most advantageous to hold… it’s about which beliefs it’s most advantageous to act like you hold. In fact, if you examine all of the further Type-X levels, you realize that they all collapse down to the same level. Suppose there is a button in front of you that you can press (or not press). How could it be beneficial to believe that you should push the button, but not beneficial to push the button? Barring, of course, supercomputer Omegas which can read your mind. You’re not a computer. You can’t get a core dump of your mind which will show a clearly structured hierarchy of thoughts. To the outside world, there’s no distinction between your different levels of recursive thoughts.
I suppose this bothered me a lot more before I realized this was a piece of fiction and that the writer was a paranoid schizophrenic (the former applying to most of the rest of what I am saying).
“Ah, yet is not dancing merely a vertical expression of a horizontal desire?”
No, certainly not merely. Too bad Elliot lacked the opportunity (and probably the quickness of tongue) to respond.
“But perplexities abound: can I reason that the number of humans who will live after me is probably not much greater than the number who have lived before, and that therefore, taking population growth into account, humanity faces imminent extinction?...”
Because I am overly negative in this post, I thought I’d point out the above section, which I found especially interesting.
But the whole “Flowers for Algernon” ending seemed a bit extreme...and out of place.
No, you can only get an answer up to the limit imposed by the fact that the coastline is actually composed of atoms. The fact that a coastline looks like a fractal is misleading: it makes us forget that, just like everything else, it’s fundamentally discrete.
This has always bugged me as a case of especially sloppy extrapolation.
You’re right, if the opponent is a TDT agent. I was assuming that the opponent was simply a prediction => mixed strategy mapper. (In fact, I always thought that a 51% one-box / 49% two-box strategy would game the system, assuming that Omega just predicts whichever outcome is most likely.)
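A minimal sketch of the 51/49 idea, assuming (as in my parenthetical, not as a claim about how Omega actually works) that Omega simply predicts whichever action is more likely and fills the opaque box accordingly. The payoff values are the standard ones from the thought experiment; the function name is mine:

```python
def newcomb_ev(p_one_box, big=1_000_000, small=1_000):
    """Expected payoff of a mixed strategy in Newcomb's problem, under the
    assumed naive-Omega model: Omega predicts the more likely action."""
    # Omega fills the opaque box iff one-boxing is the more likely action.
    box_filled = big if p_one_box > 0.5 else 0
    # One-boxing takes only the opaque box; two-boxing adds the small box.
    return p_one_box * box_filled + (1 - p_one_box) * (box_filled + small)

print(newcomb_ev(1.0))   # pure one-boxing
print(newcomb_ev(0.51))  # the 51/49 strategy nets ~$490 more under this Omega model
print(newcomb_ev(0.49))  # tip below 50% one-boxing and the big box is empty
```

Under this (naive) opponent model the 51/49 strategy strictly beats pure one-boxing, which is exactly why the opponent model matters: a TDT Omega wouldn’t be exploitable this way.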
If the opponent is a TDT agent, then it becomes more complex, as in the OP. Just as above, you have to take the argmax over all possible y->x mappings, instead of simply taking the argmax over all outputs.
Putting it in that perspective, essentially in this case we are adding all possible mixed strategies to the space of possible outputs. Hmmm… That’s a somewhat better way of putting it than everything else I said.
In any case, two TDT agents will both note that the program which cooperates 100% iff the opponent cooperates 100% dominates all other mixed strategies against such an opponent.
So to answer the original question: Yes, it will defect against blind mixed strategies. No, it will not necessarily defect against simple (prediction => mixed strategy) mappers. N/A against another TDT agent, as neither will ever play a mixed strategy, so asking whether it would cooperate with a mixed-strategy TDT agent is counterfactual.
EDIT: Thinking some more, I realize that TDT agents will consider this sort of 99% rigging against each other — and will find that it is better than the cooperate-IFF strategy. However, this is where the “sanity check” becomes important. The TDT agent will realize that although such a pure agent would do better against a TDT opponent, the opponent knows that you are a TDT agent as well, and thus will not fall for the trap.
Out of this I’ve reached two conclusions:
1. The sanity check outlined above is not broad enough: it only sanity-checks the best agents, whereas even if the best possible agent fails the sanity check, there could still be an improvement over the Nash equilibrium which passes.
2. Eliezer’s previous claim that a TDT agent will never regret being a TDT agent given full information is wrong (hey, I thought it was right too). Either it gives in to a pure 99% rigger or it does not. If it does, then it regrets not being able to 99%-rig another TDT agent. If it does not, then it regrets not being a simple hard-coded cooperator against a 99% rigger. This could probably be formalized a bit more, but I’m wondering if Eliezer et al. have considered this?
EDIT2: I realize I was a bit confused before. Feeling a bit stupid. Eliezer never claimed that a TDT agent won’t regret being a TDT agent (which is obviously possible, just consider a clique-bot opponent), but that a TDT agent will never regret being given information.
Well, it certainly will defect against any mixed strategy that is hard coded into the opponent’s source code. On the other hand, if the mixed strategy the opponent plays is dependent on what it predicts the TDT agent will play, then the TDT agent will figure out which outcome has a higher expected utility:
(I defect, Opponent runs “defection predicted” mixed strategy)
(I cooperate, Opponent runs “cooperation predicted” mixed strategy)
Of course, this is still simplifying things a bit, since it assumes that the opponent can perfectly predict one’s strategy, and it also rules out the possibility of the TDT agent using a mixed strategy himself.
Thus the actual computation is more like
ArgMax( Sum( ExpectedUtility(S, T) * P(T|S) ) )
where the ArgMax is over S, all possible mixed strategies for the TDT agent;
the Sum is over T, all possible mixed strategies for the opponent;
and P(T|S) is the probability that the opponent will play T given that we choose to play S (so this is essentially an estimate of the opponent’s predictive power).
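The ArgMax computation above can be sketched concretely. This is a toy model, not anything from the original posts: I assume illustrative Prisoner’s Dilemma payoffs, a coarse grid of cooperation probabilities standing in for “all possible mixed strategies,” and a perfect-predictor opponent whose strategy mirrors mine (so P(T|S) collapses to 1 when T = S):

```python
def expected_utility(p, q):
    # Illustrative PD payoffs (assumed): CC=3, DD=1, DC=5, CD=0.
    # p and q are my and the opponent's cooperation probabilities.
    return 3*p*q + 5*(1-p)*q + 0*p*(1-q) + 1*(1-p)*(1-q)

def predict_prob(t, s):
    # A perfect predictor: the opponent's mixed strategy T mirrors mine exactly,
    # i.e. P(T|S) = 1 if T == S, else 0.
    return 1.0 if t == s else 0.0

def best_strategy(strategies, eu, pred):
    # ArgMax over my strategies S of: Sum over opponent strategies T of EU(S,T) * P(T|S).
    return max(strategies, key=lambda s: sum(eu(s, t) * pred(t, s) for t in strategies))

strategies = [0.0, 0.25, 0.5, 0.75, 1.0]
print(best_strategy(strategies, expected_utility, predict_prob))  # 1.0
```

Against the mirror, the score of strategy p is just EU(p, p), which is maximized at full cooperation — whereas against a fixed (blind) opponent strategy the same ArgMax picks full defection, matching the answer above.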
Okay, I completely understand that the Heisenberg Uncertainty principle is simply the manifestation of the fact that observations are fundamentally interactions.
However, I never thought of the uncertainty principle as the part of quantum mechanics that causes some interpretations to treat observers as special. I was always under the impression that it was quantum entanglement… I’m trying to imagine how a purely wave-function based interpretation of quantum entanglement would behave… what is the “interaction” that localizes the spin wavefunction, and why does it seem to act across distances faster than light? Please, someone help me out here.
Er, this is assuming that the information revealed is not intentionally misleading, correct? Because certainly you could give a TDT agent an extra option which would be rational to take on the basis of the information available to the agent, but which would still be rigged to be worse than all other options.
Or in other words, the TDT agent can never be aware of such a situation.
Isn’t this an invalid comparison? If The Nation were writing for an audience of readers who only read The Nation, wouldn’t it change what it prints? The point is that these publications are fundamentally part of a discussion.
Imagine if I thought there were fewer insects on earth than you did, and we had a discussion. If you compare the naive person who reads only my lines with the naive person who reads only your lines, your person ends up better off, because on the whole there are indeed a very large number of insects on earth. This will be the case regardless of who actually has the more accurate estimate of the number of insects. The point is that my lines will all present evidence that insects are less numerous, in an attempt to get you to adjust your estimate downward, and your lines will do the exact opposite. However, that says nothing about who has a better model of the situation.
Here: http://lesswrong.com/lw/ua/the_level_above_mine/
I was going to go through quote by quote, but I realized I would be quoting the entire thing.
Basically:
A) You imply that you have enough brainpower to consider yourself to be approaching Jaynes’s level (the approaching alluded to in several instances).
B) You were surprised to discover you were not the smartest person Marcello knew (or, if you consider “surprised” too strong a word, compare your reaction to that of the merely very smart people I know, who would certainly not respond with “Darn”).
C) Upon hearing someone was smarter than you, the first thing you thought of was how to demonstrate that you were smarter than them.
D) You say that not being a genius like Jaynes and Conway is a “possibility” you must “confess” to.
E) You frame in equally probable terms the possibility that the only thing separating you from genius is that you didn’t study quite enough math as a kid.
So basically, yes, you don’t explicitly say “I am a mathematical genius”, but you certainly position yourself as hanging out on the fringes of this “genius” concept. Maybe I’ll call it “Schrödinger’s Genius”.
Please ignore that this is my first post and it seems hostile. I am a moderate-time lurker and this is the first time that I felt I had relevant information that was not already mentioned.
Or maybe that’s what I want you to think I’d say...