Thanks, I’ll check that out soon.
jwdink
Oy, I’m not following you either; apologies. You said:
The common criticism of Pearl is that this assumption fails if one assumes quantum mechanics is true.
...implying that people generally criticize his theory for “breaking” at quantum mechanics. That is, to find a system outside his “subset of causal systems,” critics have to reach all the way to quantum mechanics. He could respond “well, QM causes a lot of trouble for a lot of theories.” Not bullet-proof, but still. However, you started (your very first comment) by saying that his theory “breaks” even in the gears example. So why have critics reached for the complexities of quantum mechanics to show the theory breaking, when all along there were much simpler and more common causal situations they could have used?
More generally, I just can’t agree with your interpretation of Pearl that he was only trying to describe a subset of causal systems, if such a subset excludes such commonplace examples as the gears example. I think he was trying to describe a theory of how causation and counterfactuals can be formalized and mathematized to describe most of nature. Perhaps this theory doesn’t apply to nature when described at the quantum mechanical level, but I find it extremely implausible that it doesn’t apply to the vast majority of nature. It was designed to. Can you really watch this video and deny that he thinks his theory applies to classical physics, such as the gears example? Or do you think he’d be stupid enough to not think of the gears example? I’m baffled by your position.
If his theory breaks in situations as mundane and simple as the gears example above, then why have common criticisms employed the vagaries of quantum mechanics in attempting to criticize the Markov assumption? They might as well have just used simple examples involving gears.
The theory is supposed to describe ANY causal system—otherwise it would be a crappy theory of how (normatively) people ought to reason causally, and how (descriptively) people do reason causally.
Reading Math: Pearl, Causal Bayes Nets, and Functional Causal Models
That philosophy itself can’t be supported by empirical evidence; it rests on something else.
Right, and I’m asking you what you think that “something else” is.
I’d also re-assert my challenge to you: if philosophy’s arguments don’t rest on some evidence of some kind, what distinguishes it from nonsense/fiction?
Unless you think the “Heideggerian critique of AI” is a good example. In which case I can engage that.
I think you are making a category error. If something makes claims about phenomena that can be proved/disproved with evidence in the world, it’s science, not philosophy.
Hmm... I suspect the phrasing “evidence/phenomena in the world” might give my assertion an overly mechanistic sound. I don’t mean that verifiable/disprovable physical/atomistic facts must be cited—that would be begging the question. I just mean that any meaningful argument must make reference to evidence that can be pointed to in support of, or in criticism of, the given argument. Note that “evidence” doesn’t exclude “mental phenomena.” If we don’t ask that philosophy cite evidence, what distinguishes it from meaningless nonsense, or fiction?
I’m trying to write a more thorough response to your statement, but I’m finding it really difficult without the use of an example. Can you cite some claim of Heidegger’s or Hegel’s that you would assert is meaningful, but does not spring out of an argument based on empirical evidence? Maybe then I can respond more cogently.
Continental philosophy, on the other hand, if you can manage to make sense of it, actually can provide new perspectives on the world, and in that sense is worthwhile. Don’t assume that just because you can’t understand it, it doesn’t have anything to say.
It’s not that people coming from the outside don’t understand the language. I’m not just frustrated that Hegel uses esoteric terms and writes poorly. (Much the same could be said of Kant, and I love Kant.) It’s that, when I ask “hey, okay, if the language is just tough, but there is content to what Hegel/Heidegger/etc is saying, then why don’t you give a single example of some hypothetical piece of evidence in the world that would affirm/disprove the putative claim?”, no such example is forthcoming. In other words, my accusation isn’t that continental philosophy is hard, it’s that it makes no claims about the objective hetero-phenomenological world.
Typically, I say this to a Hegelian (or whoever), and they respond that they’re not trying to talk about the objective world, perhaps because the objective world is a bankrupt concept. That’s fine, I guess—but are you really willing to go there? Or would you claim that continental philosophy can make meaningful claims about actual phenomena, which can actually be sorted through?
I guess I’m wholeheartedly agreeing with the author’s statement:
You will occasionally stumble upon an argument, but it falls prey to magical categories and language confusions and non-natural hypotheses.
That’s fantastic. What school was this?
If they can’t stop students from using Wikipedia, pretty soon schools will be reduced from teaching how to gather facts, to teaching how to think!
This is what kind of rubs me the wrong way about the above “idea selection” point. Is the implication here that the only utility of working through Hume or Kant’s original text is to separate the “correct” facts from the chaff? Seems like working through the text could be good for other reasons.
Haha, we must have very different criteria for “confusing.” I found that post very clear, and I’ve struggled quite a bit with most of your posts. No offense meant, of course: I’m just not very versed in the LW vernacular.
I agree generally that this is what an irrational value would mean. However, the presiding implicit assumption was that the utilitarian ends were the correct ones, and therefore the presiding explicit assumption (or at least, I thought it was presiding… now I can’t seem to get anyone to defend it, so maybe not) was that the most efficient means to these particular ends were the most rational.
Maybe I was misunderstanding the presiding assumption, though. It was just stuff like this:
Lesswrongers will be encouraged to learn that the Torchwood characters were rationalists to a man and woman—there was little hesitation in agreeing to the 456’s demands.
Or this, in response to a call to “dignity”:
How many lives is your dignity worth? Would you be willing to actually kill people for your dignity, or are you only willing to make that transaction if someone else is holding the knife?
I don’t get you
Could you say why?
Okay, that’s fine. So you’ll agree that the various people—who were saying that the decision made in the show was the rational route—these people were speaking (at least somewhat) improperly?
It seems like you are seeing my replies as soldier-arguments for the object-level question about the sacrifice of children, stumped on a particular conclusion that sacrificing children is right, while I’m merely giving opinion-neutral meta-comments about the semantics of such opinions. (I’m not sure I’m reading this right.)
...so you’re NOT attempting to respond to my original question? My original question was “what’s irrational about not sacrificing the children?”
Wonderful post.
Because the brain is a hodgepodge of dirty hacks and disconnected units, smoothing over and reinterpreting their behaviors to be part of a consistent whole is necessary to have a unified ‘self’. Drescher makes a somewhat related conjecture in “Good and Real”, introducing the idea of consciousness as a ‘Cartesian Camcorder’: a mental module which records and plays back perceptions and outputs from other parts of the brain in a continuous stream. It’s the idea that “I am not the one who thinks my thoughts, I am the one who hears my thoughts”, the source of which escapes me. Empirical support for this comes from the experiments of Benjamin Libet, which show that subconscious electrical processes precede conscious actions, implying that consciousness doesn’t engage until after an action has already been decided. If this is in fact how we handle internal information (smoothing out the rough edges to provide some appearance of coherence), it shouldn’t be surprising that we tend to handle external information in the same manner.
Even this language, I suspect, is couched in a manner that expresses Cartesian Materialist remnants. One of the most interesting things about Dennett is that he believes in free will, despite his masterful grasp of the disunity of conscious experience and action. This, I think, is because he recognizes an important fact: we have to redefine the conscious self as something spaced out over time and location (in the brain), not as the thing that happens AFTER the preceding neuronal indicators.
But perhaps I’m misinterpreting your diction.
Okay, so I’ll ask again: why couldn’t the humans’ real preference be to not sacrifice the children? Remember, you said:
You can’t decide your preference, preference is not what you actually do, it is what you should do
You haven’t really elucidated this. You’re either pulling an ought out of nowhere, or you’re saying “preference is what you should do if you want to win”. In the latter case, you still haven’t explained why giving up the children is winning, and not doing so is not winning.
And the link you gave doesn’t help at all, since, if we’re going to be looking at moral impulses common to all cultures and humans, I’m pretty sure not sacrificing children is one of them. See: Jonathan Haidt
But there seemed to be some suggestion that an avoidance of sacrificing the children, even at the risk of everyone’s lives, was a “less rational” value. If it’s a value, it’s a value… how do you call certain values invalid, or not “real” preferences?
Thanks, that is helpful.
My claim was that, if we simply represent the gears example by representing the underlying (classical) physics of the system via Pearl’s functional causal models, there’s nothing cyclic about the system. Thus, Pearl’s causal theory doesn’t need to resort to the messy, expensive machinery for such systems. It only needs to get messy in systems which are (a) cyclic and (b) implausible to model via their underlying physics: for example, negative and positive feedback loops (smoking causes cancer, cancer causes despair, despair causes smoking).
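To make that concrete, here is a minimal sketch of what I mean by “representing the underlying physics.” This is just my own toy illustration, not anything from Pearl: the two-gear kinematics, the variable names (theta_a, theta_b), and the constants RATIO and OMEGA are all made up for the example. The point is only that once each gear’s angle is indexed by time, every structural equation reads from the same or an earlier time step, so the induced graph over time-indexed variables is acyclic.

```python
# A toy, assumption-laden sketch: two meshed gears, gear A driven by a motor,
# modeled as structural equations over time-indexed variables.
# theta_a[t], theta_b[t] are the gears' angles at tick t; RATIO and OMEGA are invented.

RATIO = -1.0   # assumed meshing ratio: B turns opposite to A, at the same speed
OMEGA = 0.1    # assumed angular increment the motor gives A each tick

def step(theta_a):
    """One tick of the structural equations.

    theta_a[t+1] := theta_a[t] + OMEGA        (exogenous drive on A)
    theta_b[t+1] := RATIO * theta_a[t+1]      (meshing constraint)

    Each variable's parents sit at the same or an earlier time step,
    so the graph over {theta_a[0..T], theta_b[0..T]} is a DAG, even though
    "A moves B and B moves A" sounds cyclic when stated timelessly.
    """
    next_a = theta_a + OMEGA
    next_b = RATIO * next_a
    return next_a, next_b

# Unroll a few ticks: no structural equation ever points "backwards" in time.
a = 0.0
for t in range(5):
    a, b = step(a)
    print(t + 1, round(a, 2), round(b, 2))
```

The same unrolling trick would obviously get unwieldy for something like the smoking/cancer/despair loop, which is why the heavier cyclic machinery earns its keep there.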