Prompted by Maniakes’, but sufficiently different to post separately:
It cannot have escaped philosophers’ attention that our fellow academics in other fields—especially in the sciences—often have difficulty suppressing their incredulous amusement when such topics as Twin Earth, Swampman, and Blockheads are posed for apparently serious consideration. Are the scientists just being philistines, betraying their tin ears for the subtleties of philosophical investigation, or have the philosophers who indulge in these exercises lost their grip on reality?
These bizarre examples all attempt to prove one “conceptual” point or another by deliberately reducing something underappreciated to zero, so that What Really Counts can shine through. Blockheads hold peripheral behavior constant and reduce internal structural details (and—what comes to the same thing—intervening internal processes) close to zero, and provoke the intuition that then there would be no mind there; internal structure Really Counts. Manthra is more or less the mirror-image; it keeps internal processes constant and reduces control of peripheral behavior to zero, showing, presumably, that external behavior Really Doesn’t Count. Swampman keeps both future peripheral dispositions and internal states constant and reduces “history” to zero. Twin Earth sets internal similarity to maximum, so that external context can be demonstrated to be responsible for whatever our intuitions tell us. Thus these thought experiments mimic empirical experiments in their design, attempting to isolate a crucial interaction between variables by holding other variables constant. In the past I have often noted that a problem with such experiments is that the dependent variable is “intuition”—they are intuition pumps—and the contribution of imagination in the generation of intuitions is harder to control than philosophers have usually acknowledged.
But there is also a deeper problem with them. It is child’s play to dream up further such examples to “prove” further conceptual points. Suppose a cow gave birth to something that was atom-for-atom indiscernible from a shark. Would it be a shark? What is the truth-maker for sharkhood? If you posed that question to a biologist, the charitable reaction would be that you were making a labored attempt at a joke. Suppose an evil demon could make water turn solid at room temperature by smiling at it; would demon-water be ice? Too silly a hypothesis to deserve a response. All such intuition pumps depend on the distinction spelled out by McLaughlin and O’Leary-Hawthorne between “conceptual” and “reductive” answers to the big questions. What I hadn’t sufficiently appreciated in my earlier forthright response to Jackson is that when one says that the truth-maker question requires a conceptual answer, one means an answer that holds not just in our world, or all nomologically possible worlds, but in all logically possible worlds. Smiling demons, cow-sharks, Blockheads, and Swampmen are all, some philosophers think, logically possible, even if they are not nomologically possible, and these philosophers think this is important. I do not. Why should the truth-maker question cast its net this wide? Because, I gather, otherwise its answer doesn’t tell us about the essence of the topic in question. But who believes in real essences of this sort nowadays? Not I.
Daniel Dennett, “Get Real” (emphasis added).
Eliezer Yudkowsky
(Some discussions here, such as those involving such numbers as 3^^^3, give me the same feeling.)
I don’t understand that quote. A good Bayesian should still pick the a posteriori most probable explanation for an improbable event, even if that explanation has very low prior probability before the event.
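(To make that concrete, here is a minimal numerical sketch with made-up priors and likelihoods, showing how a hypothesis with a tiny prior can still be the most probable explanation once the surprising event is actually observed.)

```python
# Illustrative sketch with invented numbers: a low-prior hypothesis can still
# dominate the posterior *after* conditioning on a surprising event E.

priors = {"mundane": 0.999, "exotic": 0.001}       # P(H)
likelihoods = {"mundane": 1e-6, "exotic": 0.5}     # P(E | H)

evidence = sum(priors[h] * likelihoods[h] for h in priors)                # P(E)
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}  # P(H | E)

print(posteriors)  # roughly {'mundane': 0.002, 'exotic': 0.998}
```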
I suspect the point is that it’s not worthwhile to look for potential explanations for improbable events until they actually happen.
I think it’s more than that—he’s saying that if you have a plausible explanation for an event, the event itself is plausible, explanations being models of the world. It’s a warning against setting up excuses for why your model fails to predict the future in advance—you shouldn’t expect your model to fail, so when it does you don’t say, “Oh, here’s how this extremely surprising event fits my model anyway.” Instead, you say “damn, looks like I was wrong.”
I don’t, however, think it’s meant to be a warning against contrived thought experiments.
Absolutely: I strongly recommend you not try to explain how 3^^^3 people might all get a dustspeck in their eye without anything else happening as a consequence, for example.
It’s Yudkowsky. Sorry, pet peeve.
Fixed.
Is Eliezer claiming that we aren’t living in a simulation, claiming that if we are living in a simulation, it’s extremely unlikely to generate wild anomalies, or claiming that anything other than those two is vanishingly unlikely?
Sorry to be so ignorant but what is 3^^^3? Google yielded no satisfactory results…
http://en.wikipedia.org/wiki/Knuth_arrow
TheOtherDave’s other comment summed up what it means practically. Also, see http://lesswrong.com/lw/kn/torture_vs_dust_specks/.
Ah thank you, that clarifies things greatly! Up-voted for the technical explanation.
A number so ridiculously big that 3^^^3 * X can be assumed to be bigger than Y for pretty much any values of X and Y.
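(For anyone who wants the mechanics rather than just the Wikipedia link: the following Python sketch is one way to write Knuth's up-arrow notation recursively. Only tiny arguments actually terminate; 3^^^3 itself is hopelessly out of reach, which is rather the point of using it in these arguments.)

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow: a followed by n arrows, then b. Only tiny inputs finish."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
# up_arrow(3, 3, 3) is 3^^^3: a power tower of 3s about 7.6 trillion levels
# high, far too large to compute or even to write down.
```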
Bloody p-zombies. Argh. Yes.