HA, your analogy fails to hold because the fire isn’t performing a computation, and hence cannot be said to be evaluating the outcomes of any actions.
HA, Dennett’s a compatibilist, so his analogy is meant to demonstrate that making a choice is not an illusory experience. That’s the part I was talking about when I said we can meaningfully discuss the choices it makes.
For example, we can analyze the game and say, “The algorithm blundered in this move—it ignored a line of play which leads to a significant disadvantage,” or perhaps, “This move was excellent—the algorithm decided to sacrifice material for much greater activity for its developed pieces, allowing it to dominate the board; it will probably be able to force a win.” The fact that we can get into the guts of the code and point to the heuristics and evaluating functions that led to these plays does not invalidate the fact that the algorithm really did make choices. For conscious beings, the content of the experience of making a choice is in the evaluating and the acting, not in the exercise of some kind of “free will” that requires the essence of choice to exist outside a deterministic physics.
Given this framework, I’m not really seeing any danger in calling choices non-illusory.
Dennett’s got an analogy to address how choice can be both deterministic and non-illusory. He asks his audience to consider a deterministic chess-playing algorithm. You can play the same game with this algorithm over and over—it doesn’t learn. If you look at its internal state, you can see it generating ply-reply trees and evaluating the positions thus generated. In this view, “making a choice” reduces to “running a decision-making algorithm”. The computer chess player doesn’t have the cognitive apparatus to have an illusory experience of doing anything, and yet it remains meaningful to speak of the reasons it has for making the choices it does.
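The deterministic chooser in Dennett's example can be sketched in a few lines. This is a toy minimax search over a made-up "game" (the game, moves, and evaluation function are all hypothetical stand-ins, not any real chess engine), just to show that "making a choice" can literally be "running a decision-making algorithm":

```python
# A toy deterministic game-player in the spirit of Dennett's example:
# it generates a tree of moves and replies and evaluates the positions
# thus generated. All names here are illustrative placeholders.

def minimax(state, depth, maximizing, moves, evaluate):
    """Return (best_score, best_move) for a deterministic game tree."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state), None
    best_move = None
    if maximizing:
        best_score = float("-inf")
        for move, child in options:
            score, _ = minimax(child, depth - 1, False, moves, evaluate)
            if score > best_score:
                best_score, best_move = score, move
    else:
        best_score = float("inf")
        for move, child in options:
            score, _ = minimax(child, depth - 1, True, moves, evaluate)
            if score < best_score:
                best_score, best_move = score, move
    return best_score, best_move

# A trivial "game": the state is a number, each move adds or subtracts 1,
# and the evaluation function simply prefers larger numbers.
moves = lambda s: [("+1", s + 1), ("-1", s - 1)] if abs(s) < 3 else []
evaluate = lambda s: s
print(minimax(0, 2, True, moves, evaluate))  # the algorithm "chooses" +1
```

Run twice, it makes the same "choice" twice, and yet it remains perfectly meaningful to point at the tree it searched and say why it chose as it did.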
HA, what do you think of this analogy? (tone: genuine curiosity)
If so, the method is sloppy. The descriptions I have read of the pre-conditions for Gatekeeper participation have a giant hole in them; Eliezer assumed a false equivalence when he wrote them.
If you think people should actually care about the giant hole you perceived in the pre-conditions, you should probably explicitly state what it was.
Bob Unwin, thanks for the link. That argument is definitely worth some careful consideration.
poke, I take “logic” and “reason” to mean making inferences by syllogism. I really have no idea what your usage of the terms denote, so I can’t speak to it. I guess we were talking past each other. But I’m not so sure it’s wise to draw a sharp distinction between “the foundations of the scientific method” and what at least some scientists spend a good deal of time actually doing, i.e., specific applications of mathematical techniques.
Frank McGahon, you’re missing my point. Hint: reread my first sentence and my last sentence.
On cryonics: it’s easy to come up with poorly supported future scenarios—either pro or con. We’ve heard from the cons, so here’s a pro: at the point where it looks plausible to the general public that frozen dead people might be revived, pulling the plug on the freezers may appear to become morally equivalent to pulling the plug on patients with intact brains who are comatose but not medically dead. It may no longer be a purely financial question in the eye of the public, especially if some enterprising journalist decides to focus on the issue.
This sort of prognostication is a mug’s game.
...forms of reasoning that are far more subtle and powerful than Bayesian reasoning...
I am always interested in expanding my repertoire. Please give examples with links if possible.
...none of them involve or have any use for “logic” or “reason” or Bayesian probability theory; none of these things are taught, used or applied by scientists...
Logic and reason are not taught, used, or applied by scientists—what!? I’m not sure what the scare-quotes around “logic” and “reason” are supposed to convey, but on its face, this statement is jaw-dropping.
As a working scientist, I can tell you I have fruitfully applied Bayesian probability theory, and that it has informed my entire approach to research. Don’t duplicate Eliezer’s approach and reduce science to a monolithic structure with sharply drawn boundaries.
I have a colleague who is not especially mathematically inclined. He likes to mess around in the data and try to get the most information possible out of it. Although it would surprise him to hear it, all of his scientific inferences can be understood as Bayesian reasoning. Bayesian probability theory is nothing more than an explicit formulation of one of the tasks that good working scientists are trained to do—specifically, learning from data.
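"Learning from data" can be made concrete with the simplest possible case: a beta-binomial update on a success probability. The numbers below are purely illustrative, not from any real experiment:

```python
# Posterior for a binomial success probability under a uniform Beta(1,1)
# prior. Observing k successes in n trials updates Beta(a, b) to
# Beta(a + k, b + n - k) -- the standard conjugate update.

def beta_binomial_update(a, b, k, n):
    return a + k, b + (n - k)

def beta_mean(a, b):
    return a / (a + b)

a, b = 1, 1                                   # uniform prior
a, b = beta_binomial_update(a, b, k=7, n=10)  # see 7 successes in 10 trials
print(beta_mean(a, b))                        # posterior mean = 8/12 ~ 0.667
```

The posterior mean pulls toward the observed frequency as data accumulate, which is exactly the informal inference a data-minded scientist makes without ever writing down Bayes' theorem.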
When [Bayesian reasoning] determines how we seek more data, we become stuck in a feedback loop and trapped in local minimization ruts.
I believe this is incorrect. Bayesian reasoning says (roughly) collect the data that will help nail down your current most uncertain predictions. It’s tricky to encode into Bayesian algorithms the model,
“An underspecified generalization of our current model which is constrained to give the same answers as our current model in presently available experiments but could give different answers in new experimental regimes.”
But Bayesian reasoning says that this possibility is not ruled out by our current evidence or prior information, so we must continue to test our current models in new experimental regimes to optimize our posterior predictive precision.
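One toy way to see this (hypothetical models and numbers, chosen only for illustration): two models that agree on every experiment we've already run can still disagree in a new regime, and the most informative next measurement is the one where their predictions come apart.

```python
# Two hypothetical models that agree on existing data (x = 0 and x = 1)
# but diverge outside that regime.

def model_linear(x):
    return 2.0 * x

def model_quadratic(x):
    return 2.0 * x + 0.5 * x * (x - 1.0)  # identical at x = 0 and x = 1

# Rank candidate experiments by how much the models disagree there.
candidate_experiments = [0.0, 0.5, 1.0, 2.0, 3.0]
disagreement = {x: abs(model_linear(x) - model_quadratic(x))
                for x in candidate_experiments}
best = max(disagreement, key=disagreement.get)
print(best)  # 3.0 -- probe the regime where the models come apart
```

This is a crude stand-in for the real Bayesian prescription (maximize expected information gain over the posterior predictive), but it captures why the reasoning pushes you out of, rather than into, a rut of repeated experiments.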
Switching topics… capital ‘S’ Science may be a useful literary foil, but count me among the group of people who are not convinced that it should be identified with the human activity of science.
Well, Klaus Fuchs was spying for the Russians. I imagine that the military would have put that in the “naughty” category. ;-)
Caledonian, it’s not that I don’t understand what you’re saying, it’s just that I don’t agree. I think I’ll have to leave it there since I want to avoid fruitless arguments over the meanings of words.
Caledonian, my position is that the claim that there is no dragon in my garage and the claim that there is an undetectable dragon in my garage are logically inconsistent, not logically equivalent. From my perspective, the logical equivalence you are insisting on really does require you to ascribe Maxwell’s equations + angels the same credence you ascribe to the bare Maxwell’s equations. Kooky.
The different ‘interpretations’ give the same results for everything we can observe, and the same predictions for the things we can’t yet observe. They are logically equivalent. They are the same thing with different appearances. It is nonsensical to say that one is true, or that another is not true. They are all equally true.
I never thought I’d see Caledonian affirm the existence of angels, even if all they do is use Maxwell’s equations to figure out how to push on charged particles, but there it is in black and white. Wow.
Caledonian, two current theories may have identical consequences and yet suggest very different directions for refinement. For example, a theory which postulates a fundamentally deterministic universe suggests that we look for causes of observable events, whereas a theory which postulates a fundamentally random universe includes events for which it would be fruitless to search for a cause.
The theories are not equivalent, because they have different implications about the next sensible step in understanding the universe.
This post reminds me of an anecdote I read in a biography of Feynman. As a young physics student, he avoided using the principle of least action to solve problems, preferring to solve the differential equations. The nonlocal nature of the variational optimization required by the principle of least action seemed non-physical to him, whereas the local nature of the differential equations seemed more natural. Being a genius, he then went on to resolve the problem when he developed the sum-over-paths approach. It turns out that the path of least action has stationary phase shifts relative to infinitesimally different paths, so only paths near the path of least action combine constructively. Far away from the path of least action, phase shifts vary rapidly with infinitesimal variations in path, so those paths cancel out. Voilà, no spooky nonlocality (although there’s plenty of wacky QM-ness).
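The stationary-phase cancellation is easy to demonstrate numerically with a toy one-parameter "action" (this is a caricature, not real quantum mechanics): parametrize paths by a single number a with action S(a) = a², which is stationary at a = 0. With a small "hbar", the phases exp(iS/hbar) add constructively only near the stationary point and wash out everywhere else.

```python
import cmath

# Toy stationary-phase demo: integrate exp(i * S(a) / hbar) with
# S(a) = a**2, stationary at a = 0, for a small hbar.

hbar = 0.01

def amplitude(a_lo, a_hi, steps=100000):
    """Riemann-sum approximation of the phase integral over [a_lo, a_hi]."""
    da = (a_hi - a_lo) / steps
    return sum(cmath.exp(1j * (a_lo + i * da) ** 2 / hbar)
               for i in range(steps)) * da

near = abs(amplitude(-0.5, 0.5))  # paths near the stationary point add up
far = abs(amplitude(1.5, 2.5))    # far away, rapid phase variation cancels
print(near > 10 * far)            # True
```

The near-stationary window contributes an amplitude of order sqrt(hbar), while the equally wide far window contributes only of order hbar, which is the cancellation Feynman's sum-over-paths picture makes vivid.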
Eliezer, are you by any chance a fan of the Silent Hill videogame franchise? Those zombie nurses strongly remind me of those games.
Sometimes the electron meets itself coming the other way and together it turns into a photon. And sometimes that photon can’t decide whether to go forwards in time or backwards in time, so it does both—but it doesn’t always do so as an electron/positron. So really there’s only one particle, period. ;-)
Psy-Kosh,
Curiously, I have just the opposite orientation—I like the fact that probability theory can be derived as an extension of logic totally independently of decision theory. Cox’s Theorem also does a good job on the “punishing stupid behavior” front. If someone demonstrates a system that disagrees with Bayesian probability theory, then when you find a Bayesian solution you can go back to the desiderata and say which one is being violated. But on the math front, I got nothing—there’s no getting around the fact that functional equations are tougher than linear algebra.
HA, I think you’re right that fires can be said to be performing computations (in a deterministic universe). What the chess algorithm does that makes it different from a generic computation is goal-oriented actions driven by an explicit evaluation of possible outcomes. (Computation is necessary but not sufficient for this; I took a wrong step in bringing up generic computation.)
I’ll steal another analogy from Dennett. Your constituent molecules are not alive, but you are. Likewise, your constituent parts considered at a low level may not make choices, but you do. Both “life” and “choice-making” are properties of the arrangements of the bits you’re made up of. Being aware of making choices is another such property.
In my view, the worthwhile things to talk about when discussing a particular choice someone or something made are (i) the information available to the choice-maker, and (ii) the evaluation function it used to rank the available actions. I strongly reject the idea that such a discussion would be invalidated or made meaningless in a deterministic universe, which is where I think the “it’s dangerous to reify illusory choice” position takes us.
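The two ingredients (i) and (ii) amount to a one-line schema. Everything here is an illustrative placeholder, but it shows that the discussion-worthy content of a choice survives determinism intact:

```python
# Schema for discussing a choice: (i) the information available to the
# choice-maker, and (ii) the evaluation function it uses to rank the
# available actions. All names are hypothetical placeholders.

def choose(information, actions, evaluate):
    """Pick the action the evaluation function ranks highest."""
    return max(actions, key=lambda action: evaluate(information, action))

# Example: given the information "raining", taking the umbrella wins.
def evaluate(info, action):
    return 1.0 if (info == "raining") == (action == "take umbrella") else 0.0

print(choose("raining", ["take umbrella", "leave umbrella"], evaluate))
```

Criticizing a choice then means criticizing either the information it was based on or the evaluation function that ranked the actions, and neither criticism is invalidated by the process being deterministic.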