[Link] RSA Animate: extremely entertaining LW-relevant cartoons
It’s a brilliant idea: a lecture by a cool modern thinker, illustrated by word-by-word doodles on a whiteboard. Excellent at pulling you along the train of thought and absolutely disallowing boredom.
The lectures’ content is pretty great too, although there’s a definite left-wing, populist bent that’s exploiting today’s post-crisis hot-button issues (they got Zizek, for god’s sake) - some might not like it. Regardless, it’s all very amusing and enlightening. It’s been linked to before in a comment or two, but it deserves a headline.
You can start here: http://www.youtube.com/watch?v=dFs9WO2B8uI&feature=relmfu (But they’re all worth watching!)
It started out well, but I was cringing when I got to the part where he states that rationality tells people to defect in a Prisoner’s Dilemma.
Pardon my ignorance in asking this stupid question:
Isn’t the mainstream position among decision theorists that defecting is the optimal strategy in a one-shot prisoner’s dilemma game? Sure, we can say something like “a TDT agent will cooperate with another TDT agent.” But even a TDT agent will defect if it doesn’t know the identity of the counterparty. And TDT isn’t completely formalized yet, right?
In short, I’m not sure of the value of snarking a popularizer of decision theory for using the word “rationality” differently than this community uses the word, particularly when our usage is (unfortunately) not the consensus definition in the relevant field.
Depending on the payoff scale, a TDT agent will cooperate if it believes that the other agent has some (high enough) chance of being a TDT agent. In other words, raise the sanity waterline high enough, and TDT cooperates.
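To make that threshold concrete, here is a minimal sketch with made-up payoffs and a deliberately simplified model (an illustration, not TDT proper): the agent picks the policy that maximizes its expected payoff, assuming other TDT-like agents arrive at the same policy and everyone else simply defects.

```python
# Minimal sketch, hypothetical payoffs: a TDT-like agent picks the policy that
# maximizes expected payoff, knowing other TDT-like agents will arrive at the
# same policy, while non-TDT opponents simply defect.

R, P, S = 3, 1, 0  # mutual cooperation, mutual defection, sucker's payoff

def best_policy(p_tdt):
    """Policy chosen when a fraction p_tdt of opponents run the same decision procedure."""
    ev_cooperate = p_tdt * R + (1 - p_tdt) * S  # TDT opponents mirror me; the rest defect
    ev_defect = P                               # if I defect, everyone ends up defecting
    return "cooperate" if ev_cooperate > ev_defect else "defect"

for p in (0.2, 0.4, 0.8):
    print(p, best_policy(p))
# With these payoffs the switch happens once p_tdt exceeds (P - S) / (R - S) = 1/3:
# raise the "sanity waterline" (the fraction of TDT-ish agents) high enough and it cooperates.
```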
TDT / superrationality will defect probabilistically given a high enough payoff for defection, even against a known-TDT agent.
In short: TDT and superrationality theories aren’t as simple as some here make them out to be, and the one-shot prisoner’s dilemma has hidden depths for smart players.
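To see the probabilistic-defection claim in miniature, here is a toy calculation with made-up numbers (my own framing, not anyone’s formal theory): two identical “superrational” players each defect with the same probability q and choose the q that maximizes their common expected payoff. With ordinary payoffs the optimum is pure cooperation; inflate the temptation payoff enough and the optimum becomes a mixed strategy.

```python
# Toy illustration with made-up payoffs: identical players defect with the same
# probability q; find the q that maximizes their shared expected payoff.
import numpy as np

def best_defection_prob(T, R=3, P=1, S=0):
    q = np.linspace(0, 1, 1001)
    ev = q**2 * P + q * (1 - q) * T + (1 - q) * q * S + (1 - q)**2 * R
    return q[np.argmax(ev)]

print(best_defection_prob(T=5))    # ordinary temptation payoff -> 0.0 (pure cooperation)
print(best_defection_prob(T=100))  # huge temptation payoff -> ~0.49 (defect about half the time)
```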
Of course, the rational thing to do is to convince everyone ELSE to be “superrational”, and convince them that you are ALSO “superrational”, and then defect if you actually play a prisoner’s dilemma for sufficiently high stakes.
Eliezer has done a good job of this. Hofstadter too. Inventing the term “superrationality” for “magical thinking” was a good move.
He also has to believe that the other agent believes with sufficient confidence that he is a suitable kind of agent. Same population makeup considerations apply.
Increasing the rationality of a population can lead to that population being worse off, even by the “rationality as systematized winning” definition, if the members are at odds with each other. You don’t necessarily want your adversaries to be more rational. But the reason that part made me cringe is that it carries so many caveats that it’s incredibly misleading. There are many contexts where defecting in a prisoner’s dilemma is a predictably bad idea, and rationality already has image problems as being rigid and unadaptable as well as antisocial without reinforcing them.
The video probably couldn’t really have gone into a proper discussion of what variations of the prisoner’s dilemma make defection appropriate without breaking the flow, but it really wasn’t helped by a message of “paradoxically, rationality can leave you worse off.”
I agree with your point that using “rationality” this way exacerbates the problem of straw-Vulcanism. I’m not sure the criticism is aimed at the lowest-hanging fruit in this case, but your point is well taken.
Your point about the dangerousness of rationality misses the mark. I am almost strictly better off in a society in which the members are not at odds than in one where they are at odds. But whether one of the exceptions to that statement applies does not appear to depend on how dangerous individual members are, unless individuals are so dangerous that mutually assured destruction is an important part of the analysis. And that is currently so unrealistic (at the individual level) that I don’t think it is worth talking about.
My point is not that rationality is dangerous. It’s possible to formulate situations in which increasing the rationality of a population leaves the population worse off, rather like it’s possible to formulate situations that reward agents for having priors that would normally be stupid. The fact that you can contrive such situations doesn’t constitute a compelling criticism of rationality, which is what makes the video problematic. If rationality really were that dangerous, the implicit criticism would be entirely valid.
I had to laugh at the little caption.
“If they each behave rationally they end up doing worse.”
That’s what happens in Bayesian hell.
What’s your prior on that?
I assure you that after you’ve witnessed the horrors of Bayesian Hell, only your posterior beliefs about it will matter to you.
Hmm, looks like at least 3 people downvoted me, probably for a warning about political “bias” present in some videos. The karma loss is trivial, but I’m really perplexed by their reasoning. Because it sure looks like I wouldn’t have been punished if I just pretended—in bad faith—that there’s no ideology at play here, and simply shared a link!
I doubt it’s politics. Probably it’s because they didn’t like the video. There’s about a third of it where he’s not talking about cognitive science, he’s just complaining about culture while using cognitive science words for tenuously related things.
A key part of restraint with insights is to avoid trying to apply them beyond their appropriate scope.
Zizek a populist? Really?
Uh, yes?
I don’t see how unless you’re classifying “populist” as a proper subset of “left-wing,” and -
Wait. I’m getting into an argument about semantics. Oops.
Eh? What does that have to do with anything? He’s about the most anti-authoritarian philosopher alive, now that Derrida is dead and they de-fanged Chomsky.
There are a number of comments on this post where people wrongly think they know why someone is in disagreement with them:
http://lesswrong.com/r/discussion/lw/d9b/link_rsa_animate_extremely_entertaining/6wbn
http://lesswrong.com/r/discussion/lw/d9b/link_rsa_animate_extremely_entertaining/6wbc
http://lesswrong.com/r/discussion/lw/d9b/link_rsa_animate_extremely_entertaining/6w97
Arguably others. The other material is either empty (minus humor) or simply correction of these sorts of trivial errors. I think this is very common on LessWrong.
One of my remaining interests in this place is discovering why I find you all pretty unlikeable. This is a change in my viewpoint since I started actually becoming familiar with the joint and is also pretty surprising since I overlap philosophically in ways that usually make me fond of people. I’d say my reaction to this place is roughly what I feel about anti-science types. Of course, a lot of the dialogue here is superficially anti-science, but I don’t think that’s what’s setting me off. I think I really feel like this place is not just superficially anti-science. Something like your ideas about testing hypotheses and modifying beliefs are fine, but your hypotheses trend moronic (circling back to the opening point). Also, mainly concerned with superficialities (e.g., you will have an unduly strong reaction to my using the word “moronic”). Anyway, just some impressions. I think I’ll test something (not in a Gwernian way).
I think you are looking at the wrong places here. I mean—you are calling people idiots, you are breaking community norms by creating multiple accounts—this is not related to “anti-science” or anything like that. There are people here who disagree about artificial intelligence, many worlds, existential risks, reductionism, etc., but they usually don’t behave like that.
For many people here, being polite and friendly is not a “superficiality”. So perhaps this is what irritates you. Maybe you’d prefer a place where people discuss scientific ideas by calling each other idiots. Starting your own blog could also be a solution.
I’m not surprised at being downvoted, and I don’t mean that in the usual defensive way (i.e., “I have such a good model of you that it predicts your behaviour, and your negative reaction is stupid; I’m superior, yadda yadda”).
My behavior is worthy of being downvoted and some degree of annoyance with me is perfectly reasonable, appropriate, and likeable. Trying to extrapolate my annoyance from my behavior is misleading since I am not responding to what is irritating me and my (hidden specific) annoyance manifests as general irritability. I would say an appropriate criticism of me (which I have attempted to highlight when being critical) is the degree to which I am a collectivist in the way I think about LessWrong.
Let us try one more time; I have asked questions like this before. Let’s make this my last post for a while (6 months); that way, if I return to make some sort of status ploy out of this, the promise to have left for that time will diminish the value of the ploy. So, let us play a game where you answer this request as if it were genuine rather than a bid for status. Name a post in the Sequences I should read that I will find instructive. Is it really so difficult a request that you require a good model of me to answer? Just assume an undergrad’s knowledge in every field. Is there any physics that surpasses what a good (but not exceptional) undergrad knows? Or biology, computer science, philosophy, etc.? I confess to finding the boxing experiment mildly interesting. In return, I will do something on my own time that would be useful to the Singularity Institute if they did it. If it works, I’ll return, tell someone, and insult you all with greater justice.
I don’t have “an undergrad’s knowledge in every field”, so how could I know which parts of the Sequences are outside that range?
I do suspect (but this may just be my ignorance speaking) that there is the most to gain in philosophy, specifically philosophy of science. I never studied philosophy seriously, but my model (prejudice) is that it is a huge confused field with a lot of history, nonsense, and mysterious answers. As in: you learn who Plato was, when and where he lived, and what he said… but the question “is that really true?” is kind of forbidden. You don’t learn true or false; you just memorize and classify the ideas. This guy said this, the other guy said that; both of them are famous philosophers, both of them deserve respect; end of story. The only critical thing you can say about a philosopher X is that a philosopher Y disagreed with him, and because Y lived later and read X’s books, this criticism is also famous and deserves respect. But if later a philosopher Z says “meh” and follows the teachings of X anyway, well, that deserves respect too. So in the end we have many contradictory answers, all of them worthy of respect, but you can’t use any of them to build a better mousetrap. (But you could get a PhD in mousetrap philosophy by explaining how the idea of the mousetrap relates to the idea of the mouse, why catching or not catching a mouse is just a language game, and how according to a different culture the so-called mouse is an ancient spirit.) -- All this is useless for science. And even if the useful parts are there somewhere, it is worth pointing at them and saying “this part is right, the other parts are wrong”.
If you want a specific example, for me it would be e.g. “How to Convince Me That 2 + 2 = 3”. As far as I know, this is not a part of undergrad math.
As someone who has studied philosophy seriously, I assure you that your model is pretty inaccurate, at least for top-tier American universities. It is not true that you are forbidden from actually evaluating arguments. It is true that instructors in college philosophy courses, especially at the upper level, won’t usually tell you which argument is better, or which conclusion is the right one. But that is because the skill that they are attempting to train is the ability to figure these things out for yourself. It is completely false that the student is forbidden from making judgments. The student is actively encouraged to make judgments. In fact, in upper level courses the student is evaluated almost entirely on her or his ability to evaluate and construct arguments, not on the ability to memorize and regurgitate what philosopher X said about philosopher Y.
On the 2 + 2 = 3 article, pretty much all of the ideas in there are covered in a standard philosophy undergrad curriculum, although the students are not told that they are the right ideas, and contradictory ideas are covered as well.
Thanks, updating. I didn’t study philosophy; I only took an “introduction to philosophy” class, and it was like I described.
Having an opportunity to figure it out on my own seems cool. Not being corrected, ever, if I am wrong seems cruel.
So I am not sure what exactly is the lesson learned. Is it “trust your brain to find the right answer” or “there is no right answer, but you can still win by arguing smartly”?
Philosophy does not have consensus-forming mechanisms that are remotely as effective as the ones in science. I do think this is a problem with the discipline. Philosophers get published (and famous) for producing clever arguments in support of some conclusion, or for cleverly showing that some other philosopher’s clever argument doesn’t work. They don’t get penalized in peer review for failing to show that their preferred conclusion is all things considered the right one. There are broadly agreed-upon conventions for the evaluation of arguments, but there are no agreed-upon conventions for meta-analysis which allow someone to survey all the arguments for and against a particular claim and unambiguously decide on which side the balance of evidence lies. So the sort of thing philosophers end up agreeing about (and teaching students) is “This is a clever argument for conclusion X, without any clear flaws” or “This argument for conclusion Y does not work for these reasons.” Consensus is formed about the status of arguments, rather than about conclusions.
In general, a professor of philosophy will be comfortable telling a student whether her arguments for some conclusion are good or bad, but will not feel comfortable telling her whether the conclusion itself is true or false. I think this is mostly attributable to a worry about giving the false impression that there is a disciplinary consensus about the issue. None of this means, of course, that philosophers never arrive at a consensus about anything. You won’t see evidence of the consensus in a philosophy class though, because the consensus is usually about which conclusions are almost certainly false. And those conclusions (and their corresponding arguments) are simply not taught in the class (unless it’s a class on the history of philosophy, where the pedagogical purpose is different). So the illusion of complete lack of consensus is really due to the fact that consensus hasn’t yet winnowed down the viable options to a single one. It doesn’t follow that there has been no winnowing at all.
ETA: As evidence of the depressing lack of consensus in philosophy, check out the PhilPapers survey. There are only two positions surveyed on which more than 70% of professional philosophers and philosophy Ph.D.’s agree: non-skeptical realism about the external world (76.6%) and scientific realism (70.1%). Atheism comes close with 69.7%. By contrast, there are 16 questions where all the answers have less than 50% support.
So what? I guess the survey just wouldn’t ask questions on which there’s consensus. And if you surveyed physicists about whether they think neutrinos are Majorana particles, whether cosmic rays above 10^19 eV are mostly protons or mostly heavier nuclei, and stuff like that you’d likely get similar results.
If you look at the questions in the survey, pretty much all the big topics covered in an undergraduate philosophy education are represented. It isn’t just a selection of particularly controversial topics. But you’re right that I should have specified this in order for my comment to make sense.
I’ll backtrack from “last post for 6 months” to last conversation for 6 months. Viliam, you’re a reasonably upvoted dude. You seem pretty normal for these parts. Exactly how annoyed do I get to be that your response to me is dumb? Isn’t the commitment to some aspects of rationality, exemplified by my complete inability to restrain my annoyance with your being an idiot, of some value? Yes, yes, I could be better still.
Again, I think your response is very typically LessWrongian: wordy, vacant, stupid, irrational, over-confident with weaseliness to pretend otherwise, etc., etc. Do I get downvoted for telling you you’re being an idiot for thinking you need an undergrad’s knowledge in every field in order to know whether any part of the Sequences is outside that range? I didn’t ask for an exhaustive list; I asked for one post exceeding an undergrad’s knowledge in that field. Do I need to explain that in more detail? Do I lose points for being annoyed that you took the time to write all those words and not a second to think about them? Do I get to be insulted that your model of me is basically retarded, from my point of view? Maybe that’s all you folks are capable of. Fine. What a shocking coincidence that your example comes from an area you never studied seriously, when I basically asked for just a single example of the opposite. The post you link to is fine but totally and completely uninformative to me.
Listen, of course you can defend your stupidity if you assume I’m a moron. You can say, well, if I don’t know what they study in philosophy, I can’t say blah isn’t covered. Can we not have that idiotic conversation? Can we just acknowledge that if you have a good physics knowledge in some area, you know when the conversation is on physics in that area and when it is exceeded without knowing all other fields? Do I have to be as wordy as you? I clearly am being so; I didn’t even bother reading the middle of your post. Just stupidity.
That’s not very important, and certainly one of dozens of things wrong. However, it’s something you’ll see.
So, I pointed out an error you made. LessWrongians like when people point out errors they make. The only reason I pointed it out is that I was annoyed. Ideally you’d find some reason other than annoyance for me to talk to you (repeating the request: A post in the sequences that is informative). You can also conclude it is not worth talking to me when that is all that motivates me. Maybe you think you can modify my behavior, but not without a carrot. Perhaps upvotes and downvotes are supposed to serve in that way, but they don’t for me.
Annoyed with yourself for having a wrong model of me? With your inability to communicate better? You know, the meaning of the communication is the response you get.
Nice try.
You probably meant to say:
You’re welcome!
Well excuse me, but I’m far below the LW average in rational thinking and my comments shouldn’t be taken as representative of anything pertaining to our group—I also don’t like math, don’t have an iNtuitive-Thinking personality, am not an atheist, etc. I’m biased as fuck and I enjoy it in a very perverse way.
Wait, you aren’t an atheist? I must admit I assumed you were, and I didn’t find any contradictory indications in our previous conversations.
Oh, I’ve said it over 9000 times: I’m Gnostic. :) (The confusion may stem from the fact that I prefer to compartmentalize religious and “rational” thinking unless the former really yells at me not to. I think it’s just one of many areas in which a schizophrenic approach makes sense.)
Gnostic has a non-religious meaning. In any case I hope you know I’m going to report you to the proper authorities for heresy. :P
Throwing rocks from a glass house, bro?
Well, this is an entire community that just name-drops mathematicians (Bayes, Solomonoff, Kolmogorov, etc.) while having very, very little clue about any of that, just some general, very foggy idea. I’ve seen that behaviour elsewhere before. Some dude would, e.g., read about Gödel’s incompleteness theorem and then talk about how humans must be capable of hypercomputation if they understand that theorem. He passed for a local genius theoretician.