Laura: In a comment marked as general I do not expect to find a sharply asymmetric statement about a barely (if at all) asymmetric issue.
Frank_Hirsch
[Laura ABJ:] While I think I have insight into why a lot of men might FAIL with women, that doesn’t mean I get THEM...
You are using highly loaded and sexist language. Why is it only the men who fail with the women? Canst thou not share in the failure, because thou art so obviously superior?
Q:
Did Sarah understand Mike? She could articulate important differences, but seemed unable to act accordingly, to accept his actions, to communicate her needs to him, or even to understand why V-Day went sour.
A:
Sarah and Mike seem to be in exactly the same position. Either they learn it or they learn to live with it. Or not.
Q:
Question #2: How far does understanding need to go? Some understanding of differences is helpful, but only when it’s followed by acceptance of the differences. That’s an attitude rather than an exercise in logic.
A:
This is even stranger than #1. Sorry, does not compute.
Well, I wonder how gender is actually defined, if six have been claimed.
Can you give a line on the model which is used?
My very rough first model allows for (2+n)(2+m)·2·2·2 combinations. That’s at least 32 for the corner cases alone. I say if it’s worth doing, then it’s worth doing right.
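A quick toy enumeration of what I mean (the axes are purely my own guess, just to make the arithmetic concrete, not a claim about the model actually under discussion):

```python
from itertools import product

# Hypothetical axes for a gender model: chromosomal sex with n rare
# karyotypes beyond the two common ones, anatomy with m intersex
# variants beyond the two common forms, plus three further binary
# axes (say identity, attraction, presentation).
n, m = 0, 0  # zero extra corner cases on the first two axes
chromosomes = [f"chromo_{i}" for i in range(2 + n)]
anatomy = [f"anatomy_{i}" for i in range(2 + m)]
binary_axes = [[0, 1]] * 3

combinations = list(product(chromosomes, anatomy, *binary_axes))
print(len(combinations))  # (2+n)*(2+m)*2*2*2 -> 32 when n = m = 0
```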
Frank, Demonstrated instances of illusory free-will don’t seem to me to be harder or easier to get rid of than the many other demonstrated illusory cognitive experiences. So I don’t see anything exceptional about them in that regard.
HA, I do. It is a concept I suspect we are genetically biased to hold, an outgrowth of the distinction between subject (has a will) and object (has none). Why are we biased to do so? Because, largely, it works very well as a pattern for explanations about the world. We are built to explain the world using stories, and these stories need actors. Even when you are convinced that choice does not exist, you’ll still be bound to make use of that concept, if only for practical reasons. The best you can do is try to separate the “free” from the “choice” in an attempt to avoid the flawed connotation. But we have trouble conceptualising choice if it’s not free; because then, how could it be a choice? All that said, I seem to remember someone saying something like: “Having established that there is no such thing as a free will, the practical thing to do is to go on and pretend there was.”
HA: How come you think I defend any “non-illusory human capacity to make choices”? I am just wondering why the illusion seems so hard to get rid of. Did I fail so miserably at making my point clear?
If your mind contains the causal model that has “Determinism” as the cause of both the “Past” and the “Future”, then you will start saying things like, “But it was determined before the dawn of time that the water would spill—so not dropping the glass would have made no difference”.
Nobody could be that screwed up! Not dropping the glass would not have been an option. =)
About all that free-will stuff: The whole “free will” hypothesis may be so deeply rooted in our heads because the explanatory framework—identifying agents with beliefs about the world, objectives, and the “will” to change the world according to those beliefs and objectives—just works so remarkably well. Much like Newton’s theory of gravity: In terms of the ratio of predictive_accuracy_in_standard_situations to operational_complexity, Newton’s gravity kicks donkey. So does Free Will (TM). But that don’t mean it’s true.
steven: Too much D&D? I prefer chaotic neutral… Hail Eris! All hail Discordia! =)
[Eliezer says:] And if you’re planning to play the lottery, don’t think you might win this time. A vanishingly small fraction of you wins, every time.
I think this is, strictly speaking, not true. A more extreme example: While I was recently talking with a friend, he asserted that “in one of the future worlds, I might jump up in a minute and run out onto the street, screaming loudly!”. I said: “Yes, maybe, but only if you are already strongly predisposed to do so. MWI means that every possible future exists, not every arbitrary imaginable future.” Although your assertion in the case of the lottery is much weaker, I don’t believe it’s strictly true.
The Taxi anecdote is ultra-geeky—I like that! ;-)
Also, once again I accidentally commented on Eliezer’s last entry, silly me!
[Unknown wrote:] [...] you should update your opinion [to] a greater probability [...] that the person holds an unreasonable opinion in the matter. But [also to] a greater probability [...] that you are wrong.
In principle, yes. But I see exceptions.
[Unknown wrote:] For example, since Eliezer was surprised to hear of Dennett’s opinion, he should assign a greater probability than before to the possibility that human-level AI will not be developed within the foreseeable future. Likewise, to take the more extreme case, assuming that he was surprised at Aumann’s religion, he should assign a greater probability to the Jewish religion, even if only to a slight degree.
Well, admittedly, the Dennett quote depresses me a bit. If I were in Eliezer’s shoes, I’d probably also choose to defend my stance—you can’t dedicate your life to something with just half a heart!
About Aumann’s religion: That’s one of the cases where I refuse to adjust my assigned probability one iota. His belief about religion is the result of his prior alone. So is mine, but it is my considered opinion that my prior is better! =)
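As a toy illustration of the kind of update Unknown has in mind (all numbers invented purely for the sake of the example):

```python
# Toy Bayes update: how much should a respected person's disagreement
# shift my credence in some hypothesis H? Numbers are made up.
p_h = 0.9                     # my prior credence that H is true
p_dis_given_h = 0.2           # P(they disagree | H true)
p_dis_given_not_h = 0.8       # P(they disagree | H false)

# Total probability of observing the disagreement:
p_dis = p_dis_given_h * p_h + p_dis_given_not_h * (1 - p_h)

# Posterior credence in H after hearing them disagree:
posterior = p_dis_given_h * p_h / p_dis
print(round(posterior, 3))    # 0.692 -- a real shift toward "I am wrong"
```

Of course, the whole dispute is about the likelihoods: if, as with Aumann’s religion, I judge the disagreement to be driven by the other person’s prior rather than by evidence, then P(they disagree | H true) ≈ P(they disagree | H false), and the update is negligible.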
Also, if I may digress a bit, I am sceptical about Robin’s hypothesis that humans in general update too little from other people’s beliefs. My first intuition was that the opposite is the case (because of premature convergence and resistance to paradigm shifts). After some second thoughts, I believe the amount is probably just about right. Why? 1) Taking other people’s beliefs as evidence is an evolved trait, and so, probably, is the approximate amount. 2) Evolution is smarter than I (and Robin, I presume).
Unknown: Well, maybe yeah, but so what? It’s just practically impossible to completely re-evaluate every belief you hold whenever someone asserts it to be wrong. That has nothing at all to do with “overconfidence”, but everything to do with sanity. The time to re-evaluate a belief is when someone gives a plausible argument against the belief itself, not just an assertion that it is wrong. For example, whenever someone argues anything based on the assumption of a personal god, I dismiss it out of hand without thinking twice—sometimes I do not even take the time to hear them out! Why should I, when I know it’s gonna be a waste of time? Overconfidence? No, sanity!
Nick:
I thought the assumption was that the SI is too S to get any ideas about world domination?
Makes me think:
Wouldn’t it be rather recommendable if, instead of heading straight for a (risky) AGI, we worked on (safe) SIs and then had them solve the problem of Friendly AGI?
botogol:
Eliezer (and Robin), this series is very interesting and all, but… aren’t you writing this on the wrong blog?
I have the impression Eliezer writes blog entries in much the same way I read Wikipedia: Slowly working from A to B in a grandiose excess of detours… =)
Wow, good teaser for sure! /me is quivering with anticipation ^_^
Caledonian:
One of the very many problems with today’s world is that, instead of confronting the root issues that underlie disagreement, people simply split into groups and sustain themselves on intragroup consensus. [...] That is an extraordinarily bad way to overcome bias.
I disagree. What do we have to gain from bringing everyone in line with our own beliefs? While it is arguably a good thing to exchange our points of view, and how we rationalise them, there will always be issues where the agreed evidence is just not strong enough to refute all but one way of looking at things. I believe that sometimes you really do have to agree to disagree (unless all participants espouse Bayesianism, that is) and move on to more fertile pastures. And even if all participants in a discussion claim to be rationalists, sometimes you’ll either have to agree that someone is wrong (without agreeing on who it is, naturally) or waste time you could have spent on more promising endeavours.
Will Pearson [about tiny robots replacing neurons]: “I find this physically implausible.”
Um, well, I can see it would be quite hard. But that doesn’t really matter for a thought experiment. To ask what it would be like to ride on a light beam is about as physically implausible as it gets, but it seems to have produced a few rather interesting insights.
[Warning: Here be sarcasm] No! Please let’s spend more time discussing dubious non-disprovable hypotheses! There’s only a gazillion more to go, then we’ll have convinced everyone!
You mean like the distinction between competence and performance?