The example of the three locks brings to mind another possible failure of this principle: it can be exploited by deliberately giving us additional choices. For example, perhaps in this case the cheap lock is perfectly adequate for our needs, but the mere existence of an expensive lock makes us believe that the regular one is the one with an equal chance of erring in both directions. I believe I read (on LW? or on Marginal Revolution?) that restaurant menus and sales catalogs often include some outrageously priced items to induce customers to buy the second-tier priced items, which look reasonable in comparison but are the ones where most of the profit is made. Attempts to shift the Overton Window in politics rely on the same principle.
On the other hand, it is kind of awesome that people with no knowledge of Esperanto but knowledge of two or three European languages can immediately understand everything you say—as I just did.
I doubt it is possible to find non-controversial examples of anything, and especially of things plausible enough to be believed by intelligent non-experts, outside of the hard sciences.
If this is true, the only plausible examples would be statements such as “an infinity cannot be larger than another infinity”, “time flows uniformly regardless of the observer”, “biological species have unchanging essences”, and other intuitively plausible claims unquestionably contradicted by modern hard sciences.
Turnabout Confusion is a Daria fanfic that portrays Lawndale High as being as full of Machiavellian plotters as HPMOR!Hogwarts is. Each student is keenly aware of their role in the popularity food chain, and most are constantly scheming about how to advance in it. When Daria and Quinn exchange roles for a few days on a spontaneous bet, they unwittingly set off a chain reaction of plots and counterplots, leading to a massive Gambit Pileup that could completely overturn the school's whole social order…
One thing that doesn’t quite fit is this: If you are the weaker side, how is it possible that you come and bully me, and expect me to immediately give up? This doesn’t seem like typical behavior of weaker people surrounded by stronger people. (Possible explanation: This side is locally strong here, for some definition of “here”, but the enemy side is stronger globally.)
Another explanation could be that the side is dominant in one form of battle (moralizing) but weak in another kind (economic power, prestige, literal battle), and wishes to play to its strengths.
See also Yvain on social vs. structural power.
More succinctly: I am rational, you are biased, they are mind-killed.
“However, yields an even better joke (due to an extra meta level) when preceded by its quotation and a comma”, however, yields an even better joke (due to an extra meta level) when preceded by its quotation and a comma.
Another type of rare event, not as important as the ones you discuss, but with a large community devoted to forecasting its odds: major upsets in sports, like the recent Germany-Brazil 7-1 blowout in the World Cup. Here is a 538 article discussing it and comparing it to other comparable upsets in other sports.
The mention of the Sokal paper reminds me that Derrida (who is frequently associated with the po-mo authors parodied by Sokal, although IIRC he was not targeted directly) was basically a troll, making fun of conventional academic philosophy in much the same way that Will makes fun of conventional LW thought. I wonder if Will has read Derrida…?
Scott remarks on this himself:
Another type of scenario involves minorities. Imagine, for instance, that 98% of the players are unfailingly nice to each other, but unfailingly cruel to the remaining 2% (who they can recognize, let’s say, by their long noses or darker skin—some trivial feature like that). Meanwhile, the put-upon 2% return the favor by being nice to each other and mean to the 98%. Who, in this scenario, is moral, and who’s immoral? The mathematical verdict of both eigenmoses and eigenjesus is unequivocal: the 98% are almost perfectly good, while the 2% are almost perfectly evil. After all, the 98% are nice to almost everyone, while the 2% are mean to those who are nice to almost everyone, and nice only to a tiny minority who are mean to almost everyone. Of course, for much of human history, this is precisely how morality worked, in many people’s minds. But I dare say it’s a result that would make moderns uncomfortable.
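To make the arithmetic behind that verdict concrete, here is a minimal sketch (in Python, and not Aaronson's actual code) of an eigenjesus-style score for the quoted scenario: each player's score is proportional to the summed scores of the players they are nice to, i.e. the principal eigenvector of the niceness matrix. The 98/2 split and the 0/1 niceness values come from the quote; everything else is an illustrative assumption.

```python
import numpy as np

# 98 "majority" players and 2 "minority" players, per the quoted scenario.
n_major, n_minor = 98, 2
n = n_major + n_minor

# niceness[i, j] = 1 if player i is nice to player j, else 0.
niceness = np.zeros((n, n))
niceness[:n_major, :n_major] = 1.0   # majority nice to the majority
niceness[n_major:, n_major:] = 1.0   # minority nice to the minority
# (Everyone is mean across the group boundary, so those entries stay 0.)

# Power iteration for the principal eigenvector: score_i is proportional to
# the sum of niceness[i, j] * score_j, i.e. your score sums the scores of
# the players you are nice to.
scores = np.ones(n)
for _ in range(100):
    scores = niceness @ scores
    scores /= scores.sum()

print("majority score:", scores[0])    # ≈ 1/98 ≈ 0.0102 each
print("minority score:", scores[-1])   # ≈ 0: their relative weight vanishes
```

The minority's score collapses not because of anything different in kind that they do, but because almost nobody with a high score is nice to them; an eigenmoses-style score, which also rewards being mean to low scorers, pushes in the same direction (per the quoted description).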
If the model that the dice are perfectly fair and unbreakable is correct, then the Swedish king is justified in assigning a very low probability to losing after rolling two sixes; but this model turns out to be incorrect in this case, and his confidence in winning should have been lower.
Of course it would be silly to apply this reasoning to dice in real life, but there are cases (like those discussed in the linked article) where the lesson applies.
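As a rough worked version of that point, with made-up numbers purely for illustration: under the fair-and-unbreakable-dice model, once the Swedish king has rolled double sixes his opponent cannot roll higher, so the within-model probability of an outright loss is zero. But the overall probability of losing is bounded below by the chance that the model itself is wrong:

$$P(\text{lose}) = P(\text{model correct})\,P(\text{lose}\mid\text{model correct}) + P(\text{model wrong})\,P(\text{lose}\mid\text{model wrong}) \geq P(\text{model wrong})\,P(\text{lose}\mid\text{model wrong}).$$

If, say, there is a 1% chance that the dice can split or be loaded, and a 50% chance of losing in that case, the king's probability of losing is at least $0.01 \times 0.5 = 0.005$, rather than the zero that the inside-the-model calculation gives.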
On Confidence levels inside and outside an argument:
Thorstein Frode relates of this meeting, that there was an inhabited district in Hising which had sometimes belonged to Norway, and sometimes to Gautland. The kings came to the agreement between themselves that they would cast lots by the dice to determine who should have this property, and that he who threw the highest should have the district. The Swedish king threw two sixes, and said King Olaf need scarcely throw. He replied, while shaking the dice in his hand, “Although there be two sixes on the dice, it would be easy, sire, for God Almighty to let them turn up in my favour.” Then he threw, and had sixes also. Now the Swedish king threw again, and had again two sixes. Olaf king of Norway then threw, and had six upon one dice, and the other split in two, so as to make seven eyes in all upon it; and the district was adjudged to the king of Norway.
First observation: Surely any entity intelligent enough to hack the essay according to the rules you have set is also intelligent enough to get the maximum grade (much more easily) by the usual means of writing the assigned essay…
Second observation: Since the concept of “being on topic” is vague (essentially, anything that humans interpret as being on a certain topic is on that topic), maybe the easiest way to hack it while following your rules would be to write an essay that is not on topic by the criteria the designers of the exam had in mind, but that is close enough to confuse the graders into believing it is on topic. An analogy could be how some postmodernists confused people into believing they were doing philosophy...
My suggestion would be to start with Dennett’s Consciousness Explained. It tackles exactly the questions you are interested in, and it is much more entertaining than the average philosophy/neurology book on the topic.
It is certainly debatable, but there are philosophers who make this argument, and I only used it as an example.
In question 1, is “consequences” supposed to mean “actual consequences”, “expected consequences”, “foreseeable consequences”…?
One sense of “burden of proof” seems to be a game-rule for a (non-Bayesian) adversarial debate game. It is intended to exclude arguments from ignorance, which if permitted would stall the game.
I like this framing, but “burden of proof” is also used in other contexts than arguments from ignorance. For example, two philosophers with opposing views on consciousness might plausibly get stuck in the following dialog:
A: If consciousness is reducible, then the Chinese room thinks, Mary can know red, zombies are impossible, etc.; all these things are so wildly counterintuitive that the burden of proof falls on those who claim that consciousness is reducible.
B: Consciousness being irreducible would go so completely against all the scientific knowledge we have gained about the universe that the burden of proof falls on those who assert that.
Here “who has the burden of proof?” seems to be functioning as a non-Bayesian approximation to “whose position has the lowest prior probability?” The one with the lowest prior probability is the one that should give more evidence (have a higher P(E|H)/P(E)) if they want their hypothesis to prevail; in the absence of new evidence, the one with the highest prior wins by default. The problem is that if the arguers have genuinely different priors, this leads to stalemate, as in the example.
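To spell out the correspondence (a sketch, not something the disputing philosophers would themselves put this way): in the odds form of Bayes' theorem,

$$\frac{P(H_A \mid E)}{P(H_B \mid E)} = \frac{P(H_A)}{P(H_B)} \times \frac{P(E \mid H_A)}{P(E \mid H_B)},$$

the side with the lower prior odds needs a proportionally larger likelihood ratio to come out ahead; with no new evidence the ratio is 1 and the prior simply carries over, which is the “wins by default” above. When A and B start from genuinely different priors, each can consistently assign the burden to the other, hence the stalemate.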
ETA: tl;dr, what Stabilizer said.
So the problem is that the politicians can’t lie well enough??
No, that’s not what he means. Quoting from the post (which I apologize for not linking to before):
Many of the commenters said that my position can’t be right because people will misapply it in dangerous ways. They are right that politicians will misapply it in dangerous ways. In fact, I bet some politicians who wrongfully lie do so because they think that they mistakenly fall under a murderer at the door-type case. But that doesn’t mean that the principle is wrong. It just means that people tend to mess up the application.
So, to recap. Brennan says “lying to voters is the right thing when good results from it”. His critics say, very reasonably, that since politicians and humans in general are biased in their own favor in manifold ways, every politician would surely think that good would result from their lies, so if everyone followed his advice everyone would lie all the time, with disastrous consequences. Brennan replies that this doesn’t mean that “lying is right when good results from it” is false; it just means that due to human fallibilities a better general outcome would be achieved if people didn’t try to do the right thing in this situation but followed the simpler rule of never lying.
My interpretation, therefore, is that in the post Multiheaded linked to, Brennan was not, despite appearances, making a case that actually existing politicians should actually go ahead and lie, but rather making an ivory-tower philosophical point that their lying would sometimes be “the right thing to do” in the abstract sense.
A lot of the superficial evilness and stupidity is softened by the follow-up post, where in reply to the objection that politicians uniformly following this principle would result in a much worse situation, he says:
The fact that most people would botch applying a theory does not show that the theory is wrong. So, for instance, suppose—as is often argued—that most people would misapply utilitarian moral standards. Perhaps applying utilitarianism is too hard for the common person. Even if so, this does not invalidate utilitarianism. As David Brink notes, utilitarian moral theory means to provide a criterion of right, not a method for making decisions.
So maybe he just meant that in some situations the “objectively right” action is to lie to voters, without actually recommending that politicians go out and do it (just as most utilitarians would not recommend that people try to always act like strict naive utilitarians).
--Megan McArdle