Mainstream status:
AFAIK, the proposition that “Logical and physical reference together comprise the meaning of any meaningful statement” is original-as-a-whole (with many component pieces precedented hither and yon). Likewise I haven’t elsewhere seen the suggestion that the great reductionist project is to be seen in terms of analyzing everything into physics+logic.
An important related idea I haven’t gone into here is the idea that the physical and logical references should be effective or formal, which has been in the job description since, if I recall correctly, the late nineteenth century or so, when mathematics was being axiomatized formally for the first time. This part is popular, possibly majoritarian; I think I’d call it mainstream. See e.g. http://plato.stanford.edu/entries/church-turing/ although logical specifiability is more general than computability (this is also already-known).
Obviously and unfortunately, the idea that you are not supposed to end up with more and more ontologically fundamental stuff is not well-enforced in mainstream philosophy.
AFAIK, the proposition that “Logical and physical reference together comprise the meaning of any meaningful statement” is original-as-a-whole (with many component pieces precedented hither and yon). Likewise I haven’t elsewhere seen the suggestion that the great reductionist project is to be seen in terms of analyzing everything into physics+logic.
This seems awfully similar to Hume’s fork:
If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion.
David Hume, An Enquiry Concerning Human Understanding (1748)
As Mardonius says, 20th century logical empiricism (also called logical positivism or neopositivism) is basically the same idea with “abstract reasoning” fleshed out as “tautologies in formal systems” and “experimental reasoning” fleshed out initially as “statements about sensory experiences”. So the neopositivists’ original plan was to analyze everything, including physics, in terms of logic + sense data (similar to qualia, in modern terminology). But some of them, like Neurath, considered logic + physics a more suitable foundation from the beginning, and others, like Carnap, became eventually convinced of this as well, so the mature neopositivist position is quite similar to yours.
One key difference is that for you (I think, correct me if I am wrong) reductionism is an ontological enterprise, showing that the only “stuff” there is (in some vague sense) is logic and physics. For the neopositivists, such a statement would be as meaningless as the metaphysics they were trying to “commit to the flames”. Reductionism was a linguistic enterprise: to develop a language in which every meaningful statement is translatable into sentences about physics (or qualia) and logic, in order to make the sciences more unified and coherent and to do away with muddled metaphysical thought.
Even just take the old logical positivist doctrine about analyticity/syntheticity: all statements are either “analytic” (i.e. true by logic (near enough)), or synthetic (true due to experience). That’s at least on the same track. And I’m pretty sure they wouldn’t have had a problem with statements that were partially both.
There is no article on Carnap on the SEP, and I couldn’t find a clear statement on the Vienna Circle article, but there is a fairly good one in the Neurath article:
In his classic work Der Logische Aufbau der Welt (1928) (known as the Aufbau and translated as The Logical Structure of the World), Carnap investigated the logical ‘construction’ of objects of inter-subjective knowledge out of the simplest starting point or basic types of fundamental entities (Russell had urged in his late solution to the problem of the external world to substitute logical constructions for inferred entities). He introduced several possible domains of objects, one of which being the psychological objects of private sense experience—analysed as ‘elementary experiences’.
(…)
Neurath first confronted Carnap on yet another alleged feature of his system, namely, subjectivism. He promptly rejected Carnap’s proposals on the grounds that if the language and the system of statements that constitute scientific knowledge are intersubjective, then phenomenalist talk of immediate subjective, private experiences should have no place.
(…)
Following Neurath, Carnap explicitly opposed to the language of experience a narrower conception of intersubjective physicalist language which was to be found in the exact quantitative determination of physics-language realized in the readings of measurement instruments. Remember that for Carnap only the structural or formal features, in this case, of exact mathematical relations (manifested in the topological and metric characteristics of scales), can guarantee objectivity. After the Aufbau, now the unity of science rested on the universal possibility of the translation of any scientific statement into physical language—which in the long run might lead to the reduction of all scientific knowledge to the laws and concepts of physics.
The mature Carnap position seems to be, then, not to reduce everything to logic + fundamental physics (electrons/wavefunctions/etc), as perhaps you thought I had implied, but to reduce everything to logic + observational physics (statements like “Voltmeter reading = 10 volts”). Theoretical sentences about electrons and such are to be reduced (in some sense that varied with different formulations) to sentences of observational physics. This does not mean that for Carnap electrons are not “real”; as I said before, reductionism was conceived as a linguistic proposal, not an ontological thesis.
Cucumbers are both experiences and models, actually. You experience their sight, texture and taste; you model this as a green vegetable with certain properties which predict and constrain your similar future experiences.
Numbers, by comparison, are pure models. That’s why people are often confused about whether they “exist” or not.
You experience their sight, texture and taste; you model this as a green vegetable with certain properties which predict and constrain your similar future experiences.
Are experiences themselves models? If not, are you endorsing the view that qualia are fundamental?
Are experiences themselves models? If not, are you endorsing the view that qualia are fundamental?
Experiences are, of course, themselves a multi-layer combination of models and inputs, and at some point you have to stop, but qualia seem to be at too high a level, given that they appear to be reducible to physiology in most brain models.
How do you know models exist, and aren’t just experiences of a certain sort?
How do you know that unexperienced, unmodeled cucumbers don’t exist? How do you know there was no physical universe prior to the existence of experiencers and modelers?
I’ve played with the idea that there is nothing but experience (Zen and the Art of Motorcycle Maintenance was rather convincing). However, it then becomes surprising that my experience generally behaves as though I’m living in a stable universe with such things as previously unexperienced cucumbers showing up at plausible times.
I think there are three broadly principled and internally consistent epistemological stances: Radical skepticism, solipsism, and realism. Radical skepticism is principled because it simply demands extremely high standards before it will assent to any proposition; solipsism is principled because it combines skepticism with the Cartesian insight that I can be certain of my own experiences; and realism is principled because it tries to argue to the best explanation for phenomena in general, appealing to unexperienced posits that could plausibly generate the data at hand.
I do not tend to think so highly of idealistic and phenomenalistic views that fall somewhere in between solipsism and realism; these I think are not as pristine and principled as the above three views, and their uneven application of skepticism (e.g., doubting that mind-independent cucumbers exist but refusing to doubt that Platonic numbers or Other Minds exist) weakens their case considerably.
Radical stances are often more “consistent and principled” in the sense that they’re easier to argue for, i.e., the arguments supporting them are shorter. That doesn’t mean they’re correct.
How do you know that unexperienced, unmodeled cucumbers don’t exist?
This question is meaningless in the framework I have described (Experience + models = reality). If you provide an argument why this framework is not suitable, i.e., it fails to be useful in a certain situation, feel free to give an example.
This question is meaningless in the framework I have described (Experience + models = reality).
If commitment to your view renders meaningless any discussion of whether your view is correct, then that counts against your view. We need to evaluate the truth of “Experience + models = reality” itself, if you think the statement in question is true. (And if it isn’t true, then what is it?)
If you provide an argument why this framework is not suitable, i.e., it fails to be useful in a certain situation, feel free to give an example.
Your language just sounds like an impoverished version of my language. I can talk about models of cucumbers, and experiences of cucumbers; but I can also speak of cucumbers themselves, which are the spatiotemporally extended referent of ‘cucumber,’ the object modeled by cucumber models, and the object represented by my experiential cucumbers. Experiences occur in brains; models are in brains, or in an abstract Platonic realm; but cucumbers are not, as a rule, in brains. They’re in gardens, refrigerators, grocery stores, etc.; and gardens and refrigerators and grocery stores are certainly not in brains either, since they are too big to fit in a brain.
Another way to motivate my concern: It is possible that we’re all mistaken about the existence of cucumbers; perhaps we’ve all been brainwashed to think they exist, for instance. But to say that we’re mistaken about the existence of cucumbers is not, in itself, to say that we’re mistaken about the existence of any particular experience or model; rather, it’s to say that we’re mistaken about the existence of a certain physical object, a thing in the world outside our skulls. Your view either does not allow us to be mistaken about cucumbers, or gives a completely implausible analysis of what ‘being mistaken about cucumbers’ means in ordinary language.
There may be a certain element of cross purposes here. I’m pretty sure Carnap was only seeking to reduce sentences to epistemic components, not reduce reality to ontological components. I’m not sure what Shminux is saying.
True. Accurate. Describing how the world is. Corresponding to an obtaining fact. My argument is:
1. Cucumbers are real.
2. Cucumbers are not models.
3. Cucumbers are not experiences.
Therefore some real things are neither models nor experiences. (Reality is not just models and experiences.)
You could have objected to any of my 3 premises, on the grounds that they are simply false and that you have good evidence to the contrary. But instead you’ve chosen to question what ‘correctness’ means and whether my seemingly quite straightforward argument is even meaningful. Not a very promising start.
Taboo “correct” and all its synonyms, like “true” and “accurate”.
A personal favorite is: Achieves optimal-minimum “Surprising Experience” / “Models” (i.e. possible predictions consistent with the model) ratio.
That the same models achieve correlated / convergent such ratios across agents seems to be evidence that there is a unified something elsewhere that models can more accurately match, or less accurately match.
Note: I don’t understand all of this discussion, so I’m not quite sure just how relevant or adequate this particular definition/reduction is.
What is “obtaining fact” but analyzing (=modeling) an experience?
That a fact obtains requires no analysis, modeling, or experiencing. For instance, if no thinking beings existed to analyze anything, then it would be a fact that there is no thinking, no analyzing, no modeling, no experiencing. Since there would still be facts of this sort, absent any analyzing or modeling by any being, facts cannot be reduced to experiences or analyses of experience.
Yes, given that experiences+models=reality, cucumbers are a subset of reality.
You still aren’t responding to my argument. You’ve conceded premise 1, but you haven’t explained why you think premise 2 or 3 is even open to reasonable doubt, much less outright false.
∃x(cucumber(x))
∀x(cucumber(x) → ¬model(x))
∀x(cucumber(x) → ¬experience(x))
∴ ∃x(¬model(x) ∧ ¬experience(x))
This is a deductively valid argument (i.e., the truth of its premises renders its conclusion maximally probable). And it entails the falsehood of your assertion “Experience + models = reality” (i.e., it at a minimum entails the falsehood of ∀x(model(x) ∨ experience(x))). And all three of my premises are very plausible. So you need to give us some evidence for doubting at least one of my premises, or your view can be rejected right off the bat. (It doesn’t hurt that defending your view will also help us understand what you mean by it, and why you think it better than the alternatives.)
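For what it’s worth, the validity claim is mechanically checkable. Below is a minimal sketch in Lean, with predicate names chosen to mirror the argument; it verifies only that the conclusion follows from the premises, and says nothing about whether the premises are true.

```lean
-- Validity check only (a minimal sketch; predicate names mirror the argument).
-- It shows the conclusion follows from the premises; it says nothing about
-- whether the premises themselves are true.
example {α : Type} (cucumber model experience : α → Prop)
    (h1 : ∃ x, cucumber x)                  -- ∃x(cucumber(x))
    (h2 : ∀ x, cucumber x → ¬ model x)      -- ∀x(cucumber(x) → ¬model(x))
    (h3 : ∀ x, cucumber x → ¬ experience x) -- ∀x(cucumber(x) → ¬experience(x))
    : ∃ x, ¬ model x ∧ ¬ experience x :=    -- ∴ ∃x(¬model(x) ∧ ¬experience(x))
  h1.elim (fun c hc => ⟨c, h2 c hc, h3 c hc⟩)
```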
Sure, all counterfactuals are models. But there is a distinction between counterfactuals that model experiences, counterfactuals that model models, and counterfactuals that model physical objects. Certainly not all models are models of models, just as not all words denote words, and not all thoughts are about thoughts.
When we build a model in which no experiences or models exist, we find that there are still facts. In other words, a world can have facts without having experiences or models; neither experiencelessness nor modellessness forces or entails the total absence of states of affairs. If x and y are not equivalent — i.e., if they are not true in all the same models — then x and y cannot mean the same thing. So your suggestion that “obtaining fact” is identical to “analyzing (=modeling) an experience” is provably false. Facts, circumstances, states of affairs, events — none of these can be reduced to claims about models and experiences, even though we must use models and experiences in order to probe the meanings of words like ‘fact,’ ‘circumstance,’ ‘state of affairs.’ (For the same reason, ‘fact’ is not about words, even though ‘fact’ is a word and we must use words to argue about what facts are.)
When we build a model in which no experiences or models exist, we find that there are still facts. In other words, a world can have facts without having experiences or models
Not sure who that “we” is, but I’m certainly not a part of that group.
Anyway, judging by the downvotes, people seem to be getting tired of this debate, so I am disengaging.
Not sure who that “we” is, but I’m certainly not a part of that group.
Are you saying that when you model what the Earth was like prior to the existence of the first sentient and reasoning beings, you find that your model is of oblivion, of a completely factless void in which there are no obtaining circumstances? You may need to get your reality-simulator repaired.
Anyway, judging by the downvotes, people seem to be getting tired of this debate, so I am disengaging.
I haven’t gotten any downvotes for this discussion. If you’ve been getting some, it’s much more likely because you’ve refused to give any positive arguments for your assertion “experience + models = reality” than because people are ‘tired of this debate.’ If you started giving us reasons to accept that statement, you might see that change.
Is there any such definition of meaning that does not pile up incredibly high power-towers of linguistic complexity and use even more mental black boxes?
All the evidence I’ve seen so far not only implies that we’ve never found one, but also that there might be a reason we would never find one.
OK. There might not be a clean definition of meaning. However, what this sub-thread is about is Shminux’s right to set up a personal definition, and use it to reject criticism.
Valid point. Any “gerrymandered” definitions should be made with the intent to clarify or simplify the solution to a problem, and I’d only evaluate them on their predictive usefulness, not on how you can use them to reject or enforce arguments in debates.
Obviously and unfortunately, the idea that you are not supposed to end up with more and more ontologically fundamental stuff inside your philosophy is not mainstream.
I think I must be misunderstanding what you’re saying here, because something very similar to this is probably the principal accusation relied upon in metaphysical debates (if not the very top, certainly top 3). So let me outline what is standard in metaphysical discussions so that I can get clear on whether you’re meaning something different.
In metaphysics, people distinguish between quantitative and qualitative parsimony. Quantitative parsimony is about the amount of stuff your theory is committed to (so a theory according to which more planets exist is less quantitatively parsimonious than an alternative). Most metaphysicians don’t care about quantitative parsimony. On the other hand, qualitative parsimony is about the types of stuff that your theory is committed to. So if a theory is committed to causation and time, this would be less qualitatively parsimonious than one that was only committed to causation (just an example, not meant to be an actual case). Qualitative parsimony is seen to be one of the key features of a desirable metaphysical theory. Accusations that your theory postulates extra ontological stuff but doesn’t gain further explanatory power for doing so are basically the go-to standard accusation against a metaphysical theory.
Fundamentality is also a major philosophical issue—the idea that some stuff you postulate is ontologically fundamental and some isn’t. Fundamentality views are normally coupled with the view that what really matters is qualitative parsimony of fundamental stuff (rather than stuff generally).
So how does this differ from the claim that you’re saying is not mainstream?
The claim might just need correction to say, “Many philosophers say that simplicity is a good thing but the requirement is not enforced very well by philosophy journals” or something like that. I think I believe you, but do you have an example citation anyway? (SEP entries or other ungated papers are in general good; I’m looking for an example of an idea being criticized due to lack of metaphysical parsimony.) In particular, can we find e.g. anyone criticizing modal logic because possibility shouldn’t be basic because metaphysical parsimony?
In terms of Lewis, I don’t know of someone criticising him for this off-hand but it’s worth noting that Lewis himself (in his book On the Plurality of Worlds) recognises the parsimony objection and feels the need to defend himself against it. In other words, even those who introduce unparsimonious theories in philosophy are expected to at least defend the fact that they do so (of course, many people may fail to meet these standards but the expectation is there and theories regularly get dismissed and ignored if they don’t give a good accounting of why we should accept their unparsimonious nature).
Sensations and brain processes: one of Jack Smart’s main grounds for accepting the identity theory of mind is based around considerations of parsimony
Quine’s paper On What There Is is basically an attack on views that hold that we need to accept the existence of things like Pegasus (because otherwise what are we talking about when we say “Pegasus doesn’t exist”?). Perhaps a ridiculous debate, but it’s worth noting that one of Quine’s main motivations is that this view is extremely unparsimonious.
From memory, some proponents of EDT support this theory because they think that we can achieve the same results as CDT (which they think is right) in a more parsimonious way by doing so (no link for that however as that’s just vague recollection).
I’m not actually a metaphysician so I can’t give an entire roll call of examples but I’d say that the parsimony objection is the most common one I hear when I talk to metaphysicians.
“Make things as simple as possible, but no simpler.”—Albert Einstein
How do you know whether something is as simple as possible?
In terms of publishing, should the standard be as simple as is absolutely possible, or should it be as simple as possible given time and mental constraints?
It still may be hard to resolve when something is as simple as possible.
So modal realism (the idea that possible worlds exist concretely) has been highlighted a few times in this thread as an unparsimonious theory but Lewis has two responses to this:
1.) This is (at least mostly) quantitative unparsimony not qualitative (lots of stuff, not lots of types of stuff). It’s unclear how bad quantitative unparsimony is. Specifically, Lewis argues that there is no difference between possible worlds and actual worlds (actuality is indexical) so he argues that he doesn’t postulate two types of stuff (actuality and possibility) he just postulates a lot more of the stuff that we’re already committed to. Of course, he may be committed to unicorns as well as goats (which the non-realist isn’t) but then you can ask whether he’s really committed to more fundamental stuff than we are.
2.) Lewis argues that his theory can explain things that no-one else can so even if his theory is less parsimonious, it gives rewards in return for that cost.
Now many people will argue that Lewis is wrong, perhaps on both counts but the point is that even with the case that’s been used almost as a benchmark for unparsimonious philosophy in this thread, it’s not as simple as “Lewis postulates two types of stuff when he doesn’t need to, therefore, clearly his theory is not as simple as possible.”
Isn’t this, essentially, a mild departure from late Logical Empiricism to allow for a wider definition of Physical and a more specific definition of Logical references?
The Great Reductionist Project can be seen as figuring out how to express meaningful sentences in terms of a combination of physical references (statements whose truth-value is determined by a truth-condition directly corresponding to the real universe we’re embedded in) and logical references (valid implications of premises, or elements of models pinned down by axioms); where both physical references and logical references are to be described ‘effectively’ or ‘formally’, in computable or logical form. (I haven’t had time to go into this last part but it’s an already-popular idea in philosophy of computation.)
And the Great Reductionist Thesis can be seen as the proposition that everything meaningful can be expressed this way eventually.
Which, to my admittedly rusty knowledge of mid 20th century philosophy, sounds extremely similar to the anti-metaphysics position of Carnap circa 1950. His work on Ramsey sentences, if I recall, was an attempt to reduce mixed statements including theoretical concepts (“appleness”) to a statement consisting purely of Logical and Observational Terms. I’m fairly sure I saw something very similar to your writings in his late work regarding Modal Logic, but I’m clearly going to have to dig up the specific passage.
Amusingly, this endeavor also sounds like your arch-nemesis David Chalmers’ new project, Constructing the World. Some of his moderate responses to various philosophical puzzles may actually be quite useful to you in dismissing sundry skeptical objections to the reductive project; from what I’ve seen, his dualism isn’t indispensable to the interesting parts of the work.
Just to say that in general, apart from the stuff about consciousness, which I disagree with but think is interesting, I think that Chalmers is one of the best philosophers alive today. Seriously, he does a lot of good work.
It’s too bad EY is deeply ideologically committed to a different position on AI, because otherwise his philosophy seems to very closely parallel John Searle’s. Searle is clearer on some points and EY is clearer on others, but other than the AI stuff they take a very similar approach.
EDIT: To be clear, John Searle has written a lot, lot more than the one paper on the Chinese Room, most of it having nothing to do with AI.
So… admittedly my main acquaintance with Searle is the Chinese Room argument that brains have ‘special causal powers’, which made me not particularly interested in investigating him any further. But the Chinese Room argument makes Searle seem like an obvious non-reductionist with respect to not only consciousness but even meaning; he denies that an account of meaning can be given in terms of the formal/effective properties of a reasoner. I’ve been rendering constructive accounts of how to build meaningful thoughts out of “merely” effective constituents! What part of Searle is supposed to be parallel to that?
I guess I must have misunderstood something somewhere along the way, since I don’t see where in this sequence you provide “constructive accounts of how to build meaningful thoughts out of ‘merely’ effective constituents” . Indeed, you explicitly say “For a statement to be … true or alternatively false, it must talk about stuff you can find in relation to yourself by tracing out causal links.” This strikes me as parallel to Searle’s view that consciousness imposes meaning.
But, more generally, Searle says his life’s work is to explain how things like “money” and “human rights” can exist in “a world consisting entirely of physical particles in fields of force”; this strikes me as akin to your Great Reductionist Project.
Searle says his life’s work is to explain how things like “money” and “human rights” can exist in “a world consisting entirely of physical particles in fields of force”;
Someone should tell him this has already been done: dissolving that kind of confusion is literally part of LessWrong 101, i.e. the Mind Projection Fallacy. Money and human rights and so forth are properties of minds modeling particles, not properties of the particles themselves.
That this is still his (or any other philosopher’s) life’s work is kind of sad, actually.
I guess my phrasing was unclear. What Searle is trying to do is generate reductions for things like “money” and “human rights”; I think EY is trying to do something similar and it takes him more than just one article on the Mind Projection Fallacy. (Even once you establish that it’s properties of minds, not particles, there’s still a lot of work left to do.)
Or maybe Searle is tackling a much harder version of the problem, for instance explaining how things like human rights and ethics can be binding or obligatory on people when they are “all in the mind”, explaining why one person should be beholden to another’s mind projection.
Note that “should be beholden” is a concept from within an ethical system; so invoking it in reference to an entire ethical system is a category error.
Also, I feel that the sequences do pretty well at explaining the instrumental reasons that agents with goals have ethics; even ethics which may, in some circumstances, prohibit reaching their goals.
This strikes me as parallel to Searle’s view that consciousness imposes meaning.
Why? Did I mention consciousness somewhere? Is there some reason a non-conscious software program hooked up to a sensor, couldn’t do the same thing?
I don’t think Searle and I agree on what constitutes a physical particle. For example, he thinks ‘physical’ particles are allowed to have special causal powers apart from their merely formal properties which cause their sentences to be meaningful. So far as I’m concerned, when you tell me about the structure of something’s effects on the particle fields, there shouldn’t be anything left after that—anything left is extraphysical.
Searle’s views have nothing to do with attributing novel properties to fundamental particles. They are more to do with identifying mental properties with higher-level physical properties, which are themselves irreducible in a sense (but also reducible in another sense).
It’s too bad EY is deeply ideologically committed to a different position on AI, because otherwise his philosophy seems to very closely parallel John Searle’s
Perhaps I’m confused, but isn’t Searle the guy who came up with that stupid Chinese Room thing? I don’t see at all how that’s remotely parallel to LW philosophy, or why it would be a bad thing to be ideologically opposed to his approach to AI. (He seems to think it’s impossible to have AI, after all, and argues from the bottom line for that position.)
I was talking about Searle’s non-AI work, but since you brought it up, Searle’s view is:
qualia exists (because: we experience it)
the brain causes qualia (because: if you cut off any other part of someone they still seem to have qualia)
if you simulate a brain with a Turing machine, it won’t have qualia (because: qualia is clearly a basic fact of physics and there’s no way just using physics to tell whether something is a Turing-machine-simulating-a-brain or not)
I think the first point is unfounded (or misguided). We do things (like moving, and thinking). We notice and can report that we’ve done things, and occasionally we notice and can report that we’ve noticed that we’ve done something. That we can report how things appear to a part of us that can reflect upon stimuli is not important enough to be called ‘qualia’. That we notice that we find experience ‘ineffable’ is not a surprise either—you would not expect the brain to be able to report everything that occurs, down to the neurons firing (or atoms moving).
So, all we really have is the ability to notice and report that which has been advantageous for us to report in the evolutionary history of the human (these stimuli that we can notice are called ‘experiences’). There is nothing mysterious here, and the word ‘qualia’ always seems to be used mysteriously—so I don’t think the first point carries the weight it might appear to.
if you simulate a brain with a Turing machine, it won’t have qualia (because: qualia is clearly a basic fact of physics and there’s no way just using physics to tell whether something is a Turing-machine-simulating-a-brain or not)
Qualia is not clearly a basic fact of physics. I made the point that we would not expect a species designed by natural selection to be able to report or comprehend its most detailed, inner workings, solely on the evidence of what it can report and notice. But this is all skirting around the core idea of LessWrong: The map is not the territory. Just because something seems fundamental does not mean it is. Just because it seems like a Turing machine couldn’t be doing consciousness, doesn’t mean that is how it is. We need to understand how it came to be that we feel what we feel, before go making big claims about the fundamental nature of reality. This is what is worked on in LessWrong, not in Searle’s philosophy.
That we notice that we find experience ‘ineffable’ is not a surprise either—you would not expect the brain to be able to report everything that occurs, down to the neurons firing (or atoms moving)
If the ineffability of qualia is down to the complexity of fine-grained neural behaviour, then the question is why anything is effable—people can communicate about all sorts of things that aren’t sensations (and in many cases are abstract and “in the head”).
I’m not sure that I follow. Can anything we talk about be reduced to less than the basic stimuli we notice ourselves having?
All words (that mean anything) refer to something. When I talk about ‘guitars’, I remember experiences I’ve had which I associate with the word (i.e. guitars). Most humans have similar makeups, in that we learn in similar ways, and experience in similar ways (I’m just talking about the psychological unity of humans, and how far our brain design is from, say, mice). So, we can talk about things, because we’ve learnt to refer certain experiences (words) to others (guitars).
Neither of the two can refer to anything other than the experiences we have. Anything we talk about is in relation to our experiences (or possibly even meaningless).
Most of the classic reductions are reductions to things beneath perceivable stimuli, e.g. heat to molecular motion. Reductionism and physicalism would be in very bad trouble if language and conceptualisation grounded out where perception does. The theory also mispredicts that we would be able to communicate our sensations, but struggle to communicate abstract (e.g. mathematical) ideas with a distant relationship, or no relationship, to sensation. In fact, the classic reductions are to the basic entities of physics, which are ultimately defined mathematically, and are often hard to visualise or otherwise relate to sensation.
You could point out the different constituents of experience that feel fundamental, but they themselves (e.g. Red) don’t feel as though they are made up of anything more than themselves.
When we talk about atoms, however, that isn’t a basic piece of mind that mind can talk about. My mind feels as though it is constituted of qualia, and it can refer to atoms. I don’t experience an atom, I experience large groups of them, in complex arrangements. I can refer to the atom using larger, complex arrangements of neurons (atoms). Even though, when my mind asks what the basic parts of reality are, it has a chain of reference pointing to atoms, each part of that chain is a set of neural connections, that don’t feel reducible.
Even on reflection, our experiences reduce to qualia. We deduce that qualia are made of atoms, but that doesn’t mean that our experience feels like its been reduced to atoms.
I’m saying that we should expect experience to feel as if made of fundamental, ineffable parts, even though we know that it is not. So, qualia aren’t the problem for a turing machine they appear to be.
Also, we all share these experience ‘parts’ with most other humans, due to the psychological unity of humankind. So, if we’re all sat down at an early age, and drilled with certain patterns of mind parts (times-tables), then we should expect to be able to draw upon them at ease.
My original point, however, was just that the map isn’t the territory. Qualia don’t get special attention just because they feel different. They have a perfectly natural explanation, and you don’t get to make game-changing claims about the territory until you’ve made sure your map is pretty spot-on.
I’m saying that we should expect experience to feel as if made of fundamental, ineffable parts, even though we know that it is not.
I don’t see why. Saying that experience is really complex neural activity isn’t enough to explain that, because thought is really complex neural activity as well, and we can communicate and unpack concepts.
So, qualia aren’t the problem for a turing machine they appear to be.
Can you write the code for SeeRed()? Or are you saying that TMs would have ineffable concepts?
Qualia don’t get special attention just because they feel different. They have a perfectly natural explanation,
You’ve inverted the problem: you have created the expectation that nothing mental is effable.
No, I’m saying that no basic, mental part will feel effable. Using our cognition, we can make complex notions of atoms and guitars, built up in our minds, and these will explain why our mental aspects feel fundamental, but they will still feel fundamental.
I’m saying that there are (something like) certain constructs in the brain, that are used whenever the most simple conscious thought or feeling is expressed. They’re even used when we don’t choose to express something, like when we look at something. We immediately see its components (surfaces, legs, handles), and the ones we can’t break down (lines, colours) feel like the most basic parts of those representations in our minds.
Perhaps the construct that we identify as red, is set of neurons XYZ firing. If so, whenever we notice (that is, other sets of neurons observe) that XYZ go off, we just take it to be ‘red’. It really appears to be red, and none of the other workings of the neurons can break it any further. It feels ineffable, because we are not privy to everything that’s going on. We can simply use a very restricted portion of the brain, to examine other chunks, and give them different labels.
However, we can use other neuronal patterns, to refer to and talk about atoms. Large groups of complex neural firings can observe and reflect upon experimental results that show that the brain is made of atoms.
Now, even though we can build up a model of atoms, and prove that the basic features of conscious experience (redness, lines, the hearing of a middle C) are made of atoms, the fact is, we’re still using complex neuronal patterns to think about these. The atom may be fundamental, but it takes a lot of complexity for me to think about the atom. Consciousness really is reducible to atoms, but when I inspect consciousness, it still feels like a big complex set of neurons that my conscious brain can’t understand. It still feels fundamental.
Experientially, redness doesn’t feel like atoms because our conscious minds cannot reduce it in experience, but they can prove that it is reducible. People make the jump that, because complex patterns in one part of the brain (one conscious part) cannot reduce another (conscious) part to mere atoms, it must be a fundamental part of reality. However, this does not follow logically—you can’t assume your conscious experience can comprehend everything you think and feel at the most fundamental level, purely by reflection.
I feel I’ve gone on too long, in trying to give an example of how something could feel basic but not be. I’m just saying we’re not privy to everything that’s going on, so we can’t make massive knowledge claims about it i.e. that a turing-machine couldn’t experience what we’re experiencing, purely by appeal to reflection. We just aren’t reflectively transparent.
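If it helps, here is a purely illustrative toy sketch (all names and numbers below are made up, and it is not a model of any actual neuroscience): a ‘reporter’ that can only read coarse labels of low-level states, never their internal structure, will naturally describe those states as primitive.

```python
# Purely illustrative toy: a reporter that sees only coarse labels of low-level
# states, never their internal structure. From its point of view, "red" is
# primitive ("ineffable"), even though the underlying state has structure.
# All names and numbers below are made up for the example.

LOW_LEVEL_STATES = {
    "red":   (0.91, 0.13, 0.02, 0.77),  # stand-in for "neurons XYZ firing"
    "green": (0.08, 0.88, 0.11, 0.34),
}

class Reporter:
    """Can only ask which label a state falls under, not what it is made of."""

    def describe(self, label):
        if label in LOW_LEVEL_STATES:
            return f"I am seeing {label}; it does not seem made of anything simpler."
        return "No such experience."

    def decompose(self, label):
        # There is no access route from the reporter to the underlying vector.
        raise NotImplementedError("introspection cannot reach this level")

reporter = Reporter()
print(reporter.describe("red"))
# The structure is there all along, but only an outside analysis sees it:
print(LOW_LEVEL_STATES["red"])
```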
I can’t really speak for LW as a whole, but I’d guess that among the people here who don’t believe¹ “qualia doesn’t exist”, 1 and 2 are fine, but we have issues with 3, as expanded below. Relatedly, there seems be some confusion between the “boring AI” proposition, that you can make computers do reasoning, and Searle’s “strong AI” thing he’s trying to refute, which says that AIs running on computers would have both consciousness and some magical “intentionality”. “Strong AI” shouldn’t actually concern us, except in talking about EMs or trying to make our FAI non-conscious.
3. if you simulate a brain with a Turing machine, it won’t have qualia
Pretty much disagree.
qualia is clearly a basic fact of physics
Really disagree.
and there’s no way just using physics to tell whether something is a Turing-machine-simulating-a-brain or not
And this seems really unlikely.
¹ I qualify my statement like this because there is a long-standing confusion over the use of the word “qualia” as described in my parenthetical here.
Well, let’s be clear: the argument I laid out is trying to refute the claim that “I can create a human-level consciousness with a Turing machine”. It doesn’t mean you couldn’t create an AI using something other than a pure Turing machine and it doesn’t mean Turing machines can’t do other smart computations. But it does mean that uploading a brain into a Von Neumann machine isn’t going to keep you alive.
So if you disagree that qualia is a basic fact of physics, what do you think it reduces to? Is there anything else that has a first-person ontology the way qualia does?
And if you think physics can tell whether something is a Turing-machine-simulating-a-brain, what’s the physical algorithm for looking at a series of physical particles and deciding whether it’s executing a particular computation or not?
So if you disagree that qualia is a basic fact of physics, what do you think it reduces to?
Something brains do, obviously. One way or another.
And if you think physics can tell whether something is a Turing-machine-simulating-a-brain, what’s the physical algorithm for looking at a series of physical particles and deciding whether it’s executing a particular computation or not?
I should perhaps be asking what evidence Searle has for thinking he knows things like what qualia is, or what a computation is. My statements were both negative: it is not clear that qualia is a basic fact of physics; it is not obvious that you can’t describe computation in physical terms. Searle just makes these assumptions.
If you must have an answer, how about this: a physical system P is a computation of a value V if adding as premises the initial and final states of P and a transition function describing the physics of P shortens a formal proof that V = whatever.
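To make that concrete in a much cruder form, here is a toy sketch (hypothetical names throughout) that replaces the proof-shortening criterion with the simpler test that a fixed decoding of the system’s trajectory actually reaches the claimed value. It is only meant to show that ‘does this physical system compute V?’ can be asked in mechanical terms, not to settle the dispute.

```python
# Toy sketch only: a crude stand-in for the proof-shortening criterion above.
# A "physical system" is an initial state evolved by a transition function;
# we ask whether a fixed decoding of its trajectory ever yields the value V.
# All names here (transition, decode, etc.) are hypothetical.

def runs_to_value(initial_state, transition, decode, value, max_steps=10_000):
    """True if the system reaches a state that decodes to `value` within max_steps."""
    state = initial_state
    for _ in range(max_steps):
        if decode(state) == value:
            return True
        state = transition(state)
    return False

# Example dynamics: repeatedly move one unit from register a to register b.
def transition(state):
    a, b = state
    return (a - 1, b + 1) if a > 0 else (a, b)

# Read the result off only once register a has emptied.
decode = lambda state: state[1] if state[0] == 0 else None

print(runs_to_value((3, 4), transition, decode, 7))  # True: the system "computes" 3 + 4
print(runs_to_value((3, 4), transition, decode, 9))  # False
```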
if you simulate a brain with a Turing machine, it won’t have qualia (because: qualia is clearly a basic fact of physics and there’s no way just using physics to tell whether something is a Turing-machine-simulating-a-brain or not)
There’s your problem. Why the hell should we assume that “qualia is clearly a basic fact of physics”?
Well, I probably can’t explain it as eloquently as others here—you should try the search bar, there are probably posts on the topic much better than this one—but my position would be as follows:
Qualia are experienced directly by your mind.
Everything about your mind seems to reduce to your brain.
Therefore, qualia are probably part of your brain.
Furthermore, I would point out two things: one, that qualia seem to be essential parts of having a mind; I certainly can’t imagine a mind without qualia; and two, that we can view (very roughly) images of what people see in the thalamus, which would suggest that what we call “qualia” might simply be part of, y’know, data processing.
Re #1: I certainly agree that we experience things, and that therefore the causes of our experience exist. I don’t really care what name we attach to those causes… what matters is the thing and how it relates to other things, not the label. That said, in general I think the label “qualia” causes more trouble due to conceptual baggage than it resolves, much like the label “soul”.
Re #2: This argument is oversimplistic, but I find the conclusion likely. More precisely: there are things outside my brain (like, say, my adrenal glands or my testicles) that alter certain aspects of my experience when removed, so it’s possible that the causes of those aspects reside outside my brain. That said, I don’t find it likely; I’m inclined to agree that the causes of my experience reside in my brain. I still don’t care much what label we attach to those causes, and I still think the label “qualia” causes more confusion due to conceptual baggage than it resolves.
Re #3: I see no reason at all to believe this. The causes of experience are no more “clearly a basic fact of physics” than the causes of gravity; all that makes them seem “clearly basic” to some people is the fact that we don’t understand them in adequate detail yet.
the brain causes qualia (because: if you cut off any other part of someone they still seem to have qualia)
if you simulate a brain with a Turing machine, it won’t have qualia (because: qualia is clearly a basic fact of physics and there’s no way just using physics to tell whether something is a Turing-machine-simulating-a-brain or not)
Which part does LW disagree with and why?
The whole thing: it’s the Chinese Room all over again, an intuition pump that begs the very question it’s purportedly answering. (Beginning an argument for the existence of qualia with a bare assertion that they exist is a little more obvious than the way that the word “understanding” is fudged in the Chinese Room argument, but basically it’s the same.)
I suppose you could say that there’s a grudging partial agreement with your point number two: that “the brain causes qualia”. The rest of what you listed, however, is drivel, as is easy to see if you substitute some other term besides “qualia”, e.g.:
Free will exists (because: we experience it)
The brain causes free will (because if you cut off any part, etc.)
If you simulate a brain with a Turing machine, it won’t have free will because clearly it’s a basic fact of physics and there’s no way to tell just using physics whether something is a machine simulating a brain or not.
It doesn’t matter what term you plug into this in place of “qualia” or “free will”, it could be “love” or “charity” or “interest in death metal”, and it’s still not saying anything more profound than, “I don’t think machines are as good as real people, so there!”
Or more precisely: “When I think of people with X it makes me feel something special that I don’t feel when I think of machines with X, therefore there must be some special quality that separates people from machines, making machine X ‘just a simulation’.” This is the root of all these Searle-ian arguments, and they are trivially dissolved by understanding that the special feeling people get when they think of X is also a property of how brains work.
Specifically, the thing that drives these arguments is our inbuilt machinery that classifies things as mind-having or not-mind-having, for purposes of prediction-making. But the feeling that we get that a thing is mind-having or not-mind-having is based on what was useful evolutionarily, not on what the actual truth is. Searlian (Surly?) arguments are thus in exactly the same camp as any other faith-based argument: elevating one’s feelings to Truth, irrespective of the evidence against them.
(Beginning an argument for the existence of qualia with a bare assertion that they exist is a little more obvious than the way that the word “understanding” is fudged in the Chinese Room argument, but basically it’s the same.)
Just a nit pick: the argument Aaron presented wasn’t an argument for the existence of qualia, and so taking the existence of qualia as a premise doesn’t beg the question. Aaron’s argument was an argument against artificial consciousness.
Also, I think Aaron’s presentation of (3) was a bit unclear, but it’s not so bad a premise as you think. (3) says that since qualia are not reducible to purely physical descriptions, and since a brain-simulating turing-machine is entirely reducible to purely physical descriptions, brain-simulating turing-machines won’t experience qualia. So if we have qualia, and count as conscious in virtue of having qualia (1), then brain-simulating turing machines won’t count as conscious. If we don’t have qualia, i.e. if all our mental states are reducible to purely physical descriptions, then the argument is unsound because premise (1) is false.
You’re right that you can plug many a term in to replace ‘qualia’, so long as those things are not reducible to purely physical descriptions. So you couldn’t plug in, say, heart-attacks.
This is the root of all these Searle-ian arguments, and they are trivially dissolved by understanding that the special feeling people get when they think of X is also a property of how brains work.
Could you explain this a bit more? I don’t see how it’s relevant to the argument. Searle is not arguing on the basis of any special feelings. This seems like a straw man to me, at the moment, but I may not be appreciating the flaws in Searle’s argument.
the argument Aaron presented wasn’t an argument for the existence of qualia, and so taking the existence of qualia as a premise doesn’t beg the question
In order for the argument to make any sense, you have to buy into several assumptions which basically are the argument. It’s “qualia are special because they’re special, QED”. I thought about calling it circular reasoning, except that it seems closer to begging the question. If you have a better way to put it, by all means share.
Could you explain this a bit more? I don’t see how it’s relevant to the argument. Searle is not arguing on the basis of any special feelings. This seems like a straw man to me, at the moment, but I may not be appreciating the flaws in Searle’s argument.
When I said that our mind detection circuitry was the root of the argument, I didn’t mean that Searle was overtly arguing on the basis of his feelings. What I’m saying is, the only evidence for Searle-type premises are the feelings created by our mind-detection circuitry. If you assume these feelings mean something, then Searle-ish arguments will seem correct, and Searle-ish premises will seem obvious beyond question.
However, if you truly grok the mind-projection fallacy, then Searle-type premises are just as obviously nonsensical, and there’s no reason to pay any attention to the arguments built on top of them. Even as basic a tool as Rationalist Taboo suffices to debunk the premises before the argument can get off the ground.
you have to buy into several assumptions which basically are the argument.
Any valid argument has a conclusion that is entailed by its premises taken jointly. Circularity is when the whole conclusion is entailed by one premise, with the others being window-dressing.
you have to buy into several assumptions which basically are the argument.
I think there is a way that ripe tomatoes seem visually: how is that mind-projection?
But … if you’re assuming that qualia are “not reducible to purely physical descriptions”, and you need qualia to be conscious, then obviously brain-simulations wont be conscious. But those assumptions seem to be the bulk of the position he’s defending, aren’t they?
But those assumptions seem to be the bulk of the position he’s defending, aren’t they?
Right, the argument comes down, for most of us, to the first premise: do we or do we not have mental states irreducible to purely physical conditions. Aaron didn’t present an argument for that, he just presented Searle’s argument against AI from that. But you’re right to ask for a defense of that premise, since it’s the crucial one and it’s (at the moment) undefended here.
Presenting an obvious result of a nonobvious premise as if it was a nonobvious conclusion seems suspicious, as if he’s trying to trick listeners into accepting his conclusion even when their priors differ.
Presenting a trivial conclusion from nontrivial premises as a nontrivial conclusion seems suspicious
Not only suspicious, but impossible: if the premises are non-trivial, the conclusion is non-trivial.
In every argument, the conclusion follows straight away from the premises. If you accept the premises, and the argument is valid, then you must accept the conclusion. The conclusion does not need any further support.
(3) says that since qualia are not reducible to purely physical descriptions, and since a brain-simulating turing-machine is entirely reducible to purely physical descriptions, brain-simulating turing-machines won’t experience qualia.
To pick a further nit, the argument is more that qualia can’t be engineered into an AI. If an AI implementation has qualia at all, it would be serendipitous.
To pick a further nit, the argument is more that qualia can’t be engineered into an AI. If an AI implementation has qualia at all, it would be serendipitous.
That’s a possibility, but not as I laid out the argument: if being conscious entails having qualia, and if qualia are all irreducible to purely physical descriptions, and every state of a turing machine is reducible to a purely physical description, then turing machines can’t simulate consciousness. That’s not very neat, but I do believe it’s valid. Your alternative is plausible, but it requires my ‘turing machines are reducible to purely physical descriptions’ premise to be false.
Beginning an argument for the existence of qualia with a bare assertion that they exist
Huh? This isn’t an argument for the existence of qualia—it’s an attempt to figure out whether you believe in qualia or not. So I take it you disagree with step one, that qualia exists? Do you think you are a philosophical zombie?
I do think essentially the same argument goes through for free will, so I don’t find your reductio at all convincing. There’s no reason, however, to believe that “love” or “charity” is a basic fact of physics, since it’s fairly obvious how to reduce these. Do you think you can reduce qualia?
I don’t understand why you think this is a claim about my feelings.
Suppose that neuroscientists some day show that the quale of seeing red matches a certain brain structure or a neuron firing pattern or a neuro-chemical process in all humans. Would you then say that the quale of red has been reduced?
Imagine a flashlight with a red piece of cellophane over it pointed at a wall. Scientists some day discover that the red dot on the wall is caused by the flashlight—it appears each and every time the flashlight fires and only when the flashlight is firing. However, the red dot on the wall is certainly not the same as the flashlight: one is a flashlight and one is a red dot.
The red dot, on the other hand, could be reduced to some sort of interaction between certain frequencies of light-waves and wall-atoms and so on. But it will certainly not get reduced to flashlights.
By the same token, you are not going to reduce the-subjective-experience-of-seeing-red to neurons; subjective experiences aren’t made out of neurons any more than red dots are made of flashlights.
By the same token, you are not going to reduce the-subjective-experience-of-seeing-red to neurons; subjective experiences aren’t made out of neurons any more than red dots are made of flashlights.
Ok, that’s where we disagree. To me the subjective experience is the process in my brain and nothing else.
By the same token, you are not going to reduce the-subjective-experience-of-seeing-red to neurons; subjective experiences aren’t made out of neurons any more than red dots are made of flashlights.
I take it you disagree with step one, that qualia exists?
I think that anyone talking seriously about “qualia” is confused, in the same way that anyone talking seriously about “free will” is.
That is, they’re words people use to describe experiences as if they were objects or capabilities. Free will isn’t something you have, it’s something you feel. Same for “qualia”.
I do think essentially the same argument goes through for free will
Dissolving free will is considered an entry-level philosophical exercise for Lesswrong. If you haven’t covered that much of the sequences homework, it’s unlikely that you’ll find this discussion especially enlightening.
(More to the point, you’re doing the rough equivalent of bugging people on a newsgroup about a question that is answered in the FAQ or an RTFM.)
Free will isn’t something you have, it’s something you feel.
So you say. It is not standardly defined that way.
Same for “qualia”.
Qualia are defined as feelings, sensations etc. Since we have feelings, sensations etc we have qualia. I do not see the confusion in using the word “qualia”.
My intuition certainly says that Martha has a feeling of ineffable learning. Do you at least agree that this proves the unreliability of our intuitions here?
the neuron firing pattern is presumably the cause of the quale, it’s certainly not the quale itself.
And you seem to consider this self-evident. Well, it seemed self-evident to me that Martha’s physical reaction would ‘be’ a quale. So where do we go from there?
(Suppose your neurons reacted all the time the way they do now when you see orange light, except that they couldn’t connect it to anything else—no similarities, no differences, no links of any kind. Would you see anything?)
You’ve heard of functionalism, right? You’ve browsed the SEP entry?
Have you also read the mini-sequence I linked? In the grandparent I said “physical reaction” instead of “functional”, which seems like a mistake on my part, but I assumed you had some vague idea of where I’m coming from.
I do think essentially the same argument goes through for free will
Could you expand on this point, please? It is generally agreed that “free will vs determinism” is a dilemma that we dissolved long ago. I can’t see what else you could mean by this, so …
I guess it really depends on what you mean by free will. If by free will, pjeby meant some kind of qualitative experience, then it strikes me that what he means by it is just a form of qualia and so of course the qualia argument goes through. If he means by it something more complicated, then I don’t see how point one holds (we experience it), and the argument obviously doesn’t go through.
Beginning an argument for the existence of qualia with a bare assertion that they exist
But that’s not contentious. Qualia are things like the appearence of tomatoes or taste of lemon. I’ve seen tomatoes and tasted lemons.
This is the root of all these Searle-ian arguments, and they are trivially dissolved by understanding that the special feeling people get when they think of X is also a property of how brains work.
But Searle says that feelngs, understanding, etc are properties of how the brain works. What he argues against is the claim that they are computational properties. But it is also uncontentious that physiclaism can be true and computationalism false.
if you simulate a brain with a Turing machine, it won’t have qualia (because: qualia is clearly a basic fact of physics and there’s no way just using physics to tell whether something is a Turing-machine-simulating-a-brain or not)
It isn’t even clear to Searle that qualia are physically basic. He thinks consciousness is a a high-level outcome
of the brain’s concrete causal powers. His objection to computaional apporaches is rooted in the abstract nature of computation, not in the physcial basiscness of qualia. (In fact, he doesn’t use the word “qualia”, although he often seems to be talking about the same thing).
Mainstream status:
AFAIK, the proposition that “Logical and physical reference together comprise the meaning of any meaningful statement” is original-as-a-whole (with many component pieces precedented hither and yon). Likewise I haven’t elsewhere seen the suggestion that the great reductionist project is to be seen in terms of analyzing everything into physics+logic.
An important related idea I haven’t gone into here is the idea that the physical and logical references should be effective or formal, which has been in the job description since, if I recall correctly, the late nineteenth century or so, when mathematics was being axiomatized formally for the first time. This part is popular, possibly majoritarian; I think I’d call it mainstream. See e.g. http://plato.stanford.edu/entries/church-turing/ although logical specifiability is more general than computability (this is also already-known).
Obviously and unfortunately, the idea that you are not supposed to end up with more and more ontologically fundamental stuff is not well-enforced in mainstream philosophy.
This seems awfully similar to Hume’s fork:
David Hume, An Enquiry Concerning Human Understanding (1748)
As Mardonius says, 20th century logical empiricism (also called logical positivism or neopositivism) is basically the same idea with “abstract reasoning” fleshed out as “tautologies in formal systems” and “experimental reasoning” fleshed out initially as ” statements about sensory experiences”. So the neopositivists’ original plan was to analyze everything, including physics, in terms of logic + sense data (similar to qualia, in modern terminology). But some of them, like Neurath, considered logic + physics a more suitable foundation from the beginning, and others, like Carnap, became eventually convinced of this as well, so the mature neopositivist position is quite similar to yours.
One key difference is that for you (I think, correct me if I am wrong) reductionism is an ontological enterprise, showing that the only “stuff” there is (in some vague sense) is logic and physics. For the neopositivists, such a statement would be as meaningless as the metaphysics they were trying to “commit to the flames”. Reductionism was a linguistic enterprise: to develop a language in which every meaningful statement is translatable into sentences about physics (or qualia) and logic, in order to make the sciences more unified and coherent and to do away with muddled metaphysical thought.
Is there a good statement of the “mature neopositivist” / Carnap’s position?
Even just take the old logical positivist doctrine about analyticity/syntheticity: all statements are either “analytic” (i.e. true by logic (near enough)), or synthetic (true due to experience). That’s at least on the same track. And I’m pretty sure they wouldn’t have had a problem with statements that were partially both.
There is no article on Carnap on the SEP, and I couldn’t find a clear statement on the Vienna Circle article, but there is a fairly good one in the Neurath article:
The mature Carnap position seems to be, then, not to reduce everything to logic + fundamental physics (electrons/wavefunctions/etc), as perhaps you thought I had implied, but to reduce everything to logic + observational physics (statements like “Voltmeter reading = 10 volts”). Theoretical sentences about electrons and such are to be reduced (in some sense that varied with different formulations) to sentences of observational physics. This does not mean that for Carnap electrons are not “real”; as I said before, reductionism was conceived as a linguistic proposal, not an ontological thesis.
Experience + logic != physics + logic > causality + logic
Experience + models = reality
Cucumbers are neither experiences nor models. Yet I’m pretty sure reality includes at least one cucumber.
Cucumbers are both experiences and models, actually. You experience its sight, texture and taste, you model this as a green vegetable with certain properties which predict and constrain your similar future experiences.
Numbers, by comparison, are pure models. That’s why people are often confused about whether they “exist” or not.
Are experiences themselves models? If not, are you endorsing the view that qualia are fundamental?
Experiences are, of course, themselves a multi-layer combination of models and inputs, and at some point you have to stop, but qualia seem to be at too high a level, given that they appear to be reducible to physiology in most brain models.
How do you know models exist, and aren’t just experiences of a certain sort?
How do you know that unexperienced, unmodeled cucumbers don’t exist? How do you know there was no physical universe prior to the existence of experiencers and modelers?
I’ve played with the idea that there is nothing but experience (Zen and the Art of Motorcycle Maintenance was rather convincing). However, it then becomes surprising that my experience generally behaves as though I’m living in a stable universe with such things as previously unexperienced cucumbers showing up at plausible times.
I think there are three broadly principled and internally consistent epistemological stances: Radical skepticism, solipsism, and realism. Radical skepticism is principled because it simply demands extremely high standards before it will assent to any proposition; solipsism is principled because it combines skepticism with the Cartesian insight that I can be certain of my own experiences; and realism is principled because it tries to argue to the best explanation for phenomena in general, appealing to unexperienced posits that could plausibly generate the data at hand.
I do not tend to think so highly of idealistic and phenomenalistic views that fall somewhere in between solipsism and realism; these I think are not as pristine and principled as the above three views, and their uneven application of skepticism (e.g., doubting that mind-independent cucumbers exist but refusing to doubt that Platonic numbers or Other Minds exist) weakens their case considerably.
Radical stances are often more “consistent and principled” in the sense that they’re easier to argue for, i.e., the arguments supporting them are shorter. That doesn’t mean they’re correct.
This question is meaningless in the framework I have described (Experience + models = reality). If you provide an argument why this framework is not suitable, i.e., it fails to be useful in a certain situation, feel free to give an example.
If commitment to your view renders meaningless any discussion of whether your view is correct, then that counts against your view. We need to evaluate the truth of “Experience + models = reality” itself, if you think the statement in question is true. (And if it isn’t true, then what is it?)
Your language just sounds like an impoverished version of my language. I can talk about models of cucumbers, and experiences of cucumbers; but I can also speak of cucumbers themselves, which are the spatiotemporally extended referent of ‘cucumber,’ the object modeled by cucumber models, and the object represented by my experiential cucumbers. Experiences occur in brains; models are in brains, or in an abstract Platonic realm; but cucumbers are not, as a rule, in brains. They’re in gardens, refrigerators, grocery stores, etc.; and gardens and refrigerators and grocery stores are certainly not in brains either, since they are too big to fit in a brain.
Another way to motivate my concern: It is possible that we’re all mistaken about the existence of cucumbers; perhaps we’ve all been brainwashed to think they exist, for instance. But to say that we’re mistaken about the existence of cucumbers is not, in itself, to say that we’re mistaken about the existence of any particular experience or model; rather, it’s to say that we’re mistaken about the existence of a certain physical object, a thing in the world outside our skulls. Your view either does not allow us to be mistaken about cucumbers, or gives a completely implausible analysis of what ‘being mistaken about cucumbers’ means in ordinary language.
There may be a certain element of cross purposes here. I’m pretty sure Carnap was only seeking to reduce sentences to epistemic components, not reduce reality to ontological components. I’m not sure what Shminux is saying.
Define “correct”.
True. Accurate. Describing how the world is. Corresponding to an obtaining fact. My argument is:
Cucumbers are real.
Cucumbers are not models.
Cucumbers are not experiences.
Therefore some real things are neither models nor experiences. (Reality is not just models and experiences.)
You could have objected to any of my 3 premises, on the grounds that they are simply false and that you have good evidence to the contrary. But instead you’ve chosen to question what ‘correctness’ means and whether my seemingly quite straightforward argument is even meaningful. Not a very promising start.
Sorry, EY is right, “define” is not a strong enough word. Taboo “correct” and all its synonyms, like “true” and “accurate”.
This is somewhat better. What is “obtaining fact” but analyzing (=modeling) an experience?
Yes, given that experiences+models=reality, cucumbers are a subset of reality.
A personal favorite: achieves the optimal (minimal) ratio of “Surprising Experiences” to “Models” (i.e. possible predictions consistent with the model).
That the same models achieve correlated, convergent ratios across different agents seems to be evidence that there is a unified something elsewhere that models can match more or less accurately.
Note: I don’t understand all of this discussion, so I’m not quite sure just how relevant or adequate this particular definition/reduction is.
That a fact obtains requires no analysis, modeling, or experiencing. For instance, if no thinking beings existed to analyze anything, then it would be a fact that there is no thinking, no analyzing, no modeling, no experiencing. Since there would still be facts of this sort, absent any analyzing or modeling by any being, facts cannot be reduced to experiences or analyses of experience.
You still aren’t responding to my argument. You’ve conceded premise 1, but you haven’t explained why you think premise 2 or 3 is even open to reasonable doubt, much less outright false.
∃x(cucumber(x))
∀x(cucumber(x) → ¬model(x))
∀x(cucumber(x) → ¬experience(x))
∴ ∃x(¬model(x) ∧ ¬experience(x))
This is a deductively valid argument (i.e., the truth of its premises renders its conclusion maximally probable). And it entails the falsehood of your assertion “Experience + models = reality” (i.e., it at a minimum entails the falsehood of ∀x(model(x) ∨ experience(x))). And all three of my premises are very plausible. So you need to give us some evidence for doubting at least one of my premises, or your view can be rejected right off the bat. (It doesn’t hurt that defending your view will also help us understand what you mean by it, and why you think it better than the alternatives.)
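Incidentally, the validity claim is the kind of thing that can be checked mechanically. A minimal sketch in Lean 4, with the predicate names serving only as placeholders for the English predicates above:

-- Checks only the form of the argument: given the three premises, the conclusion follows.
example {α : Type} (cucumber model experience : α → Prop)
    (h1 : ∃ x, cucumber x)
    (h2 : ∀ x, cucumber x → ¬ model x)
    (h3 : ∀ x, cucumber x → ¬ experience x) :
    ∃ x, ¬ model x ∧ ¬ experience x :=
  -- take the witness from premise 1 and apply premises 2 and 3 to it
  h1.elim fun x hx => ⟨x, h2 x hx, h3 x hx⟩

Of course this says nothing about whether the premises are true (that is the substantive question); it only confirms that granting them forces the conclusion.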
This is a counterfactual. I’m happy to consider a model where this is true, as long as you concede that this is a model.
Sure, all counterfactuals are models. But there is a distinction between counterfactuals that model experiences, counterfactuals that model models, and counterfactuals that model physical objects. Certainly not all models are models of models, just as not all words denote words, and not all thoughts are about thoughts.
When we build a model in which no experiences or models exist, we find that there are still facts. In other words, a world can have facts without having experiences or models; neither experiencelessness nor modellessness forces or entails the total absence of states of affairs. If x and y are not equivalent — i.e., if they are not true in all the same models — then x and y cannot mean the same thing. So your suggestion that “obtaining fact” is identical to “analyzing (=modeling) an experience” is provably false. Facts, circumstances, states of affairs, events — none of these can be reduced to claims about models and experiences, even though we must use models and experiences in order to probe the meanings of words like ‘fact,’ ‘circumstance,’ ‘state of affairs.’ (For the same reason, ‘fact’ is not about words, even though ‘fact’ is a word and we must use words to argue about what facts are.)
Not sure who that “we” is, but I’m certainly not a part of that group.
Anyway, judging by the downvotes, people seem to be getting tired of this debate, so I am disengaging.
Are you saying that when you model what the Earth was like prior to the existence of the first sentient and reasoning beings, you find that your model is of oblivion, of a completely factless void in which there are no obtaining circumstances? You may need to get your reality-simulator repaired.
I haven’t gotten any downvotes for this discussion. If you’ve been getting some, it’s much more likely because you’ve refused to give any positive arguments for your assertion “experience + models = reality” than because people are ‘tired of this debate.’ If you started giving us reasons to accept that statement, you might see that change.
But it’s just an extreme case of the LW Bad Habit of employing gerrymandered definitions of “meaning”.
As opposed to...?
(Just because there’s a black box doesn’t mean we shouldn’t ever work on anything that requires using the black box.)
Using definitions rooted in linguistics, semiotics, etc.
Is there any such definition of meaning that does not pile up incredibly higher power-towers of linguistic complexity and uses even more mental black boxes?
All the evidence I’ve seen so far implies not only that we’ve never found one, but also that there might be a reason we would never find one.
OK. There might not be a clean definition of meaning. However, what this subthread is about is Shminux’s right to set up a personal definition and use it to reject criticism.
Valid point. Any “gerrymandered” definitions should be done with the intent to clarify or simplify the solution towards a problem, and I’d only evaluate them on their predictive usefulness, not how you can use them to reject or enforce arguments in debates.
“Gerrymandering” has the connotation of self-serving, as in the political meaning of the term. Hence I do not see it as ever being useful.
I think I must be misunderstanding what you’re saying here because something very similar to this is probably the principal accusation relied upon in metaphysical debates (if not the very top, certainly top 3). So let me outline what is standard in metaphysical discussions so that I can get clear on whether you’re meaning something different.
In metaphysics, people distinguish between quantitative and qualitative parsimony. Quantitative parsimony is about the amount of stuff your theory is committed to (so a theory according to which more planets exist is less quantitatively parsimonious than an alternative). Most metaphysicians don’t care about quantitative parsimony. On the other hand, qualitative parsimony is about the types of stuff that your theory is committed to. So if a theory is committed to causation and time, this would be less qualitatively parsimonious than one that was only committed to causation (just an example, not meant to be an actual case). Qualitative parsimony is seen to be one of the key features of a desirable metaphysical theory. The accusation that your theory postulates extra ontological stuff but doesn’t gain further explanatory power for doing so is basically the go-to standard accusation against a metaphysical theory.
Fundamentality is also a major philosophical issue—the idea that some stuff you postulate is ontologically fundamental and some isn’t. Fundamentality views are normally coupled with the view that what really matters is qualitative parsimony of fundamental stuff (rather than stuff generally).
So how does this differ from the claim that you’re saying is not mainstream?
The claim might just need correction to say, “Many philosophers say that simplicity is a good thing but the requirement is not enforced very well by philosophy journals” or something like that. I think I believe you, but do you have an example citation anyway? (SEP entries or other ungated papers are in general good; I’m looking for an example of an idea being criticized due to lack of metaphysical parsimony.) In particular, can we find e.g. anyone criticizing modal logic because possibility shouldn’t be basic because metaphysical parsimony?
In terms of Lewis, I don’t know of someone criticising him for this off-hand but it’s worth noting that Lewis himself (in his book On the Plurality of Worlds) recognises the parsimony objection and feels the need to defend himself against it. In other words, even those who introduce unparsimonious theories in philosophy are expected to at least defend the fact that they do so (of course, many people may fail to meet these standards but the expectation is there and theories regularly get dismissed and ignored if they don’t give a good accounting of why we should accept their unparsimonious nature).
Sensations and brain processes: one of Jack Smart’s main grounds for accepting the identity theory of mind is based around considerations of parsimony
Quine’s paper On What There Is is basically an attack on views that hold that we need to accept the existence of things like Pegasus (because otherwise what are we talking about when we say “Pegasus doesn’t exist”?). Perhaps a ridiculous debate, but it’s worth noting that one of Quine’s main motivations is that this view is extremely unparsimonious.
From memory, some proponents of EDT support this theory because they think that we can achieve the same results as CDT (which they think is right) in a more parsimonious way by doing so (no link for that however as that’s just vague recollection).
I’m not actually a metaphysician so I can’t give an entire roll call of examples but I’d say that the parsimony objection is the most common one I hear when I talk to metaphysicians.
Why shouldn’t it? I haven’t seen any reduction of it that deals with this objection.
Would that be desirable? If a contributor can argue persuasively for dropping parsimony, why should that be suppressed?
Surely that should be modal realism.
“Make things as simple as possible, but no simpler.”—Albert Einstein
How do you know whether something is as simple as possible?
In terms of publishing, should the standard be as simple as is absolutely possible, or should it be as simple as possible given time and mental constraints?
You keep trying to make it simpler, but you fail to do so without losing something in return.
It still may be hard to resolve when something is as simple as possible.
So modal realism (the idea that possible worlds exist concretely) has been highlighted a few times in this thread as an unparsimonious theory but Lewis has two responses to this:
1.) This is (at least mostly) quantitative unparsimony not qualitative (lots of stuff, not lots of types of stuff). It’s unclear how bad quantitative unparsimony is. Specifically, Lewis argues that there is no difference between possible worlds and actual worlds (actuality is indexical) so he argues that he doesn’t postulate two types of stuff (actuality and possibility) he just postulates a lot more of the stuff that we’re already committed to. Of course, he may be committed to unicorns as well as goats (which the non-realist isn’t) but then you can ask whether he’s really committed to more fundamental stuff than we are.
2.) Lewis argues that his theory can explain things that no-one else can so even if his theory is less parsimonious, it gives rewards in return for that cost.
Now many people will argue that Lewis is wrong, perhaps on both counts but the point is that even with the case that’s been used almost as a benchmark for unparsimonious philosophy in this thread, it’s not as simple as “Lewis postulates two types of stuff when he doesn’t need to, therefore, clearly his theory is not as simple as possible.”
Isn’t this, essentially, a mild departure from late Logical Empiricism to allow for a wider definition of Physical and a more specific definition of Logical references?
I don’t see anything similar to this post on a quick skim of http://plato.stanford.edu/entries/logical-empiricism/ . Please specify.
Well, I was specifically thinking of this passage
Which, to my admittedly rusty knowledge of mid 20th century philosophy, sounds extremely similar to the anti-metaphysics position of Carnap circa 1950. His work on Ramsey sentences, if I recall, was an attempt to reduce mixed statements including theoretical concepts (“appleness”) to a statement consisting purely of Logical and Observational Terms. I’m fairly sure I saw something very similar to your writings in his late work regarding Modal Logic, but I’m clearly going to have to dig up the specific passage.
Amusingly, this endeavor also sounds like your arch-nemesis David Chalmers’ new project, Constructing the World. Some of his moderate responses to various philosophical puzzles may actually be quite useful to you in dismissing sundry skeptical objections to the reductive project; from what I’ve seen, his dualism isn’t indispensable to the interesting parts of the work.
Just to say that in general, apart from the stuff about consciousness, which I disagree with but think is interesting, I think that Chalmers is one of the best philosophers alive today. Seriously, he does a lot of good work.
He also reads LessWrong, I think.
I am about 90% certain that he is djc.
I’d agree; the link to philpapers (a Chalmers project), claiming to be a pro, having access to leading decision theorists—all consistent.
It’s either Chalmers or a deliberate impersonator. ‘DJC’ stands for ‘David John Chalmers.’
It’s too bad EY is deeply ideologically committed to a different position on AI, because otherwise his philosophy seems to very closely parallel John Searle’s. Searle is clearer on some points and EY is clearer on others, but other than the AI stuff they take a very similar approach.
EDIT: To be clear, John Searle has written a lot, lot more than the one paper on the Chinese Room, most of it having nothing to do with AI.
So… admittedly my main acquaintance with Searle is the Chinese Room argument that brains have ‘special causal powers’, which made me not particularly interested in investigating him any further. But the Chinese Room argument makes Searle seem like an obvious non-reductionist with respect to not only consciousness but even meaning; he denies that an account of meaning can be given in terms of the formal/effective properties of a reasoner. I’ve been rendering constructive accounts of how to build meaningful thoughts out of “merely” effective constituents! What part of Searle is supposed to be parallel to that?
I guess I must have misunderstood something somewhere along the way, since I don’t see where in this sequence you provide “constructive accounts of how to build meaningful thoughts out of ‘merely’ effective constituents” . Indeed, you explicitly say “For a statement to be … true or alternatively false, it must talk about stuff you can find in relation to yourself by tracing out causal links.” This strikes me as parallel to Searle’s view that consciousness imposes meaning.
But, more generally, Searle says his life’s work is to explain how things like “money” and “human rights” can exist in “a world consisting entirely of physical particles in fields of force”; this strikes me as akin to your Great Reductionist Project.
Someone should tell him this has already been done: dissolving that kind of confusion is literally part of LessWrong 101, i.e. the Mind Projection Fallacy. Money and human rights and so forth are properties of minds modeling particles, not properties of the particles themselves.
That this is still his (or any other philosopher’s) life’s work is kind of sad, actually.
I guess my phrasing was unclear. What Searle is trying to do is generate reductions for things like “money” and “human rights”; I think EY is trying to do something similar and it takes him more than just one article on the Mind Projection Fallacy. (Even once you establish that it’s properties of minds, not particles, there’s still a lot of work left to do.)
Or maybe Searle is tackling a much harder version of the problem, for instance explaining how things like human rights and ethics can be binding or obligatory on people when they are “all in the mind”, explaining why one person should be beholden to another’s mind projection.
Note that “should be beholden” is a concept from within an ethical system; so invoking it in reference to an entire ethical system is a category error.
Also, I feel that the sequences do pretty well at explaining the instrumental reasons that agents with goals have ethics; even ethics which may, in some circumstances, prohibit reaching their goals.
Not necessarily. Many approaches to this problem try to lever an ethical “should” off a rational “should”.
Why? Did I mention consciousness somewhere? Is there some reason a non-conscious software program hooked up to a sensor, couldn’t do the same thing?
I don’t think Searle and I agree on what constitutes a physical particle. For example, he thinks ‘physical’ particles are allowed to have special causal powers apart from their merely formal properties which cause their sentences to be meaningful. So far as I’m concerned, when you tell me about the structure of something’s effects on the particle fields, there shouldn’t be anything left after that—anything left is extraphysical.
Searle’s views have nothing to do with attributing novel properties to fundamental particles. They are more to do with identifying mental properties with higher-level physical properties, which are themselves irreducible in a sense (but also reducible in another sense).
That’s confusing. What senses?
See the link I gave to start with.
Perhaps I’m confused, but isn’t Searle the guy who came up with that stupid Chinese Room thing? I don’t see at all how that’s remotely parallel to LW philosophy, or why it would be a bad thing to be ideologically opposed to his approach to AI. (He seems to think it’s impossible to have AI, after all, and argues from the bottom line for that position.)
I was talking about Searle’s non-AI work, but since you brought it up, Searle’s view is:
qualia exists (because: we experience it)
the brain causes qualia (because: if you cut off any other part of someone they still seem to have qualia)
if you simulate a brain with a Turing machine, it won’t have qualia (because: qualia is clearly a basic fact of physics and there’s no way just using physics to tell whether something is a Turing-machine-simulating-a-brain or not)
Which part does LW disagree with and why?
To offer my own reasons for disagreement,
I think the first point is unfounded (or misguided). We do things (like moving, and thinking). We notice and can report that we’ve done things, and occasionally we notice and can report that we’ve noticed that we’ve done something. That we can report how things appear to a part of us that can reflect upon stimuli is not important enough to be called ‘qualia’. That we notice that we find experience ‘ineffable’ is not a surprise either: you would not expect the brain to be able to report everything that occurs, down to the neurons firing (or atoms moving). So, all we really have is the ability to notice and report that which has been advantageous for us to report in the evolutionary history of the human (these stimuli that we can notice are called ‘experiences’). There is nothing mysterious here, and the word ‘qualia’ always seems to be used mysteriously, so I don’t think the first point carries the weight it might appear to.
Qualia is not clearly a basic fact of physics. I made the point that we would not expect a species designed by natural selection to be able to report or comprehend its most detailed, inner workings, solely on the evidence of what it can report and notice. But this is all skirting around the core idea of LessWrong: the map is not the territory. Just because something seems fundamental does not mean it is. Just because it seems like a Turing machine couldn’t be doing consciousness, doesn’t mean that is how it is. We need to understand how it came to be that we feel what we feel, before we go making big claims about the fundamental nature of reality. This is what is worked on in LessWrong, not in Searle’s philosophy.
If the ineffability of qualia is down to the complexity of fine-grained neural behaviour, then the question is why anything is effable: people can communicate about all sorts of things that aren’t sensations (and in many cases are abstract and “in the head”).
I’m not sure that I follow. Can anything we talk about be reduced to less than the basic stimuli we notice ourselves having?
All words (that mean anything) refer to something. When I talk about ‘guitars’, I remember experiences I’ve had which I associate with the word (i.e. guitars). Most humans have similar makeups, in that we learn in similar ways, and experience in similar ways (I’m just talking about the psychological unity of humans, and how far our brain design is from, say, mice). So, we can talk about things, because we’ve learnt to connect certain experiences (words) to others (guitars).
Neither of the two can refer to anything other than the experiences we have. Anything we talk about is in relation to our experiences (or is possibly even meaningless).
Most of the classic reductions are reductions to things beneath perceivable stimuli, e.g. heat to molecular motion. Reductionism and physicalism would be in very bad trouble if language and conceptualisation grounded out where perception does. The theory also mispredicts that we would be able to communicate our sensations, but struggle to communicate abstract (e.g. mathematical) ideas with a distant relationship, or no relationship, to sensation. In fact, the classic reductions are to the basic entities of physics, which are ultimately defined mathematically, and are often hard to visualise or otherwise relate to sensation.
You could point out the different constituents of experience that feel fundamental, but they themselves (e.g. Red) don’t feel as though they are made up of anything more than themselves.
When we talk about atoms, however, an atom isn’t a basic piece of mind that the mind can talk about. My mind feels as though it is constituted of qualia, and it can refer to atoms. I don’t experience an atom; I experience large groups of them, in complex arrangements. I can refer to the atom using larger, complex arrangements of neurons (atoms). Even though, when my mind asks what the basic parts of reality are, it has a chain of reference pointing to atoms, each part of that chain is a set of neural connections that don’t feel reducible.
Even on reflection, our experiences reduce to qualia. We deduce that qualia are made of atoms, but that doesn’t mean that our experience feels like it’s been reduced to atoms.
Where is that heading? Is it supposed to tell me why qualia are ineffable... or rather, why qualia are more ineffable than cognition?
I’m saying that we should expect experience to feel as if made of fundamental, ineffable parts, even though we know that it is not. So, qualia aren’t the problem for a turing machine they appear to be.
Also, we all share these experience ‘parts’ with most other humans, due to the psychological unity of humankind. So, if we’re all sat down at an early age, and drilled with certain patterns of mind parts (times-tables), then we should expect to be able to draw upon them at ease.
My original point, however, was just that the map isn’t the territory. Qualia don’t get special attention just because they feel different. They have a perfectly natural explanation, and you don’t get to make game-changing claims about the territory until you’ve made sure your map is pretty spot-on.
I don’t see why. Saying that experience is really complex neural activity isn’t enough to explain that, because thought is really complex neural activity as well, and we can communicate and unpack concepts.
Can you write the code for SeeRed()? Or are you saying that TMs would have ineffable concepts?
You’ve inverted the problem: you have created the expectation that nothing mental is effable.
No, I’m saying that no basic, mental part will feel effable. Using our cognition, we can make complex notions of atoms and guitars, built up in our minds, and these will explain why our mental aspects feel fundamental, but they will still feel fundamental.
I’m not continuing this discussion, it’s going nowhere new. I will offer Orthonormal’s sequence on qualia as explanatory however: http://lesswrong.com/lw/5n9/seeing_red_dissolving_marys_room_and_qualia/
You seem to be hinting, but are not quite saying, that qualia are basic and therefore ineffable, whilst thoughts are non-basic and therefore effable.
Confirming the above would be somewhere new.
I’m saying that there are (something like) certain constructs in the brain that are used whenever the most simple conscious thought or feeling is expressed. They’re even used when we don’t choose to express something, like when we look at something. We immediately see its components (surfaces, legs, handles), and the ones we can’t break down (lines, colours) feel like the most basic parts of those representations in our minds.
Perhaps the construct that we identify as red is a set of neurons XYZ firing. If so, whenever we notice (that is, other sets of neurons observe) that XYZ go off, we just take it to be ‘red’. It really appears to be red, and none of the other workings of the neurons can break it down any further. It feels ineffable, because we are not privy to everything that’s going on. We can simply use a very restricted portion of the brain to examine other chunks, and give them different labels.
However, we can use other neuronal patterns, to refer to and talk about atoms. Large groups of complex neural firings can observe and reflect upon experimental results that show that the brain is made of atoms.
Now, even though we can build up a model of atoms, and prove that the basic features of conscious experience (redness, lines, the hearing of a middle C) are made of atoms, the fact is, we’re still using complex neuronal patterns to think about these. The atom may be fundamental, but it takes a lot of complexity for me to think about the atom. Consciousness really is reducible to atoms, but when I inspect consciousness, it still feels like a big complex set of neurons that my conscious brain can’t understand. It still feels fundamental.
Experientially, redness doesn’t feel like atoms because our conscious minds cannot reduce it in experience, but they can prove that it is reducible. People make the jump that, because complex patterns in one part of the brain (one conscious part) cannot reduce another (conscious) part to mere atoms, it must be a fundamental part of reality. However, this does not follow logically—you can’t assume your conscious experience can comprehend everything you think and feel at the most fundamental level, purely by reflection.
I feel I’ve gone on too long, in trying to give an example of how something could feel basic but not be. I’m just saying we’re not privy to everything that’s going on, so we can’t make massive knowledge claims about it i.e. that a turing-machine couldn’t experience what we’re experiencing, purely by appeal to reflection. We just aren’t reflectively transparent.
I can’t really speak for LW as a whole, but I’d guess that among the people here who don’t believe¹ “qualia doesn’t exist”, 1 and 2 are fine, but we have issues with 3, as expanded below. Relatedly, there seems to be some confusion between the “boring AI” proposition, that you can make computers do reasoning, and Searle’s “strong AI” thing he’s trying to refute, which says that AIs running on computers would have both consciousness and some magical “intentionality”. “Strong AI” shouldn’t actually concern us, except in talking about EMs or trying to make our FAI non-conscious.
Pretty much disagree.
Really disagree.
And this seems really unlikely.
¹ I qualify my statement like this because there is a long-standing confusion over the use of the word “qualia” as described in my parenthetical here.
Well, let’s be clear: the argument I laid out is trying to refute the claim that “I can create a human-level consciousness with a Turing machine”. It doesn’t mean you couldn’t create an AI using something other than a pure Turing machine and it doesn’t mean Turing machines can’t do other smart computations. But it does mean that uploading a brain into a Von Neumann machine isn’t going to keep you alive.
So if you disagree that qualia is a basic fact of physics, what do you think it reduces to? Is there anything else that has a first-person ontology the way qualia does?
And if you think physics can tell whether something is a Turing-machine-simulating-a-brain, what’s the physical algorithm for looking at a series of physical particles and deciding whether it’s executing a particular computation or not?
Something brains do, obviously. One way or another.
I should perhaps be asking what evidence Searle has for thinking he knows things like what qualia is, or what a computation is. My statements were both negative: it is not clear that qualia is a basic fact of physics; it is not obvious that you can’t describe computation in physical terms. Searle just makes these assumptions.
If you must have an answer, how about this: a physical system P is a computation of a value V if adding as premises the initial and final states of P and a transition function describing the physics of P shortens a formal proof that V = whatever.
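For concreteness, here is a toy sketch of that proposal (my own construction, not Searle’s or anyone else’s actual formalism): “proof length” is crudely modeled as the number of rounds a trivial forward-chaining engine needs to reach the goal fact, and the rule sets and fact names are invented purely for illustration.

# Toy illustration only: "proof length" = number of forward-chaining rounds
# needed to derive the goal fact from the given premises.
def proof_length(premises, rules, goal):
    known = set(premises)
    steps = 0
    while goal not in known:
        new = {concl for pre, concl in rules if pre <= known and concl not in known}
        if not new:
            return None  # goal not derivable from these premises
        known |= new
        steps += 1
    return steps

# A long chain of lemmas that derives "V = 42" from mathematics alone...
math_rules = [({"axioms"}, "lemma 1"), ({"lemma 1"}, "lemma 2"),
              ({"lemma 2"}, "lemma 3"), ({"lemma 3"}, "V = 42")]
# ...versus the same rules plus facts about a physical system P whose final
# state encodes the answer.
physical_rules = math_rules + [
    ({"initial state of P", "transition function of P"}, "final state of P reads 42"),
    ({"final state of P reads 42"}, "V = 42")]

print(proof_length({"axioms"}, math_rules, "V = 42"))  # 4 rounds
print(proof_length({"axioms", "initial state of P", "transition function of P"},
                   physical_rules, "V = 42"))          # 2 rounds

Under this toy measure, adding premises about P’s initial state and transition function shortens the derivation of “V = 42”, which is the shape of the proposed criterion; whether anything like it survives the “which physical systems count as computing what” worry is a separate question.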
They’re not assumptions, they’re the answers to questions that have the highest probability going for them given the evidence.
There’s your problem. Why the hell should we assume that “qualia is clearly a basic fact of physics”?
Because it’s the only thing in the universe we’ve found with a first-person ontology. How else do you explain it?
Well, I probably can’t explain it as eloquently as others here—you should try the search bar, there are probably posts on the topic much better than this one—but my position would be as follows:
Qualia are experienced directly by your mind.
Everything about your mind seems to reduce to your brain.
Therefore, qualia are probably part of your brain.
Furthermore, I would point out two things: one, that qualia seem to be essential parts of having a mind; I certainly can’t imagine a mind without qualia; and two, that we can view (very roughly) images of what people see in the thalamus, which would suggest that what we call “qualia” might simply be part of, y’know, data processing.
Another not-speaking-for-LW answer:
Re #1: I certainly agree that we experience things, and that therefore the causes of our experience exist. I don’t really care what name we attach to those causes… what matters is the thing and how it relates to other things, not the label. That said, in general I think the label “qualia” causes more trouble due to conceptual baggage than it resolves, much like the label “soul”.
Re #2: This argument is oversimplistic, but I find the conclusion likely.
More precisely: there are things outside my brain (like, say, my adrenal glands or my testicles) that alter certain aspects of my experience when removed, so it’s possible that the causes of those aspects reside outside my brain. That said, I don’t find it likely; I’m inclined to agree that the causes of my experience reside in my brain. I still don’t care much what label we attach to those causes, and I still think the label “qualia” causes more confusion due to conceptual baggage than it resolves.
Re #3: I see no reason at all to believe this. The causes of experience are no more “clearly a basic fact of physics” than the causes of gravity; all that makes them seem “clearly basic” to some people is the fact that we don’t understand them in adequate detail yet.
The whole thing: it’s the Chinese Room all over again, an intuition pump that begs the very question it’s purportedly answering. (Beginning an argument for the existence of qualia with a bare assertion that they exist is a little more obvious than the way that the word “understanding” is fudged in the Chinese Room argument, but basically it’s the same.)
I suppose you could say that there’s a grudging partial agreement with your point number two: that “the brain causes qualia”. The rest of what you listed, however, is drivel, as is easy to see if you substitute some other term besides “qualia”, e.g.:
Free will exists (because: we experience it)
The brain causes free will (because if you cut off any part, etc.)
If you simulate a brain with a Turing machine, it won’t have free will because clearly it’s a basic fact of physics and there’s no way to tell just using physics whether something is a machine simulating a brain or not.
It doesn’t matter what term you plug into this in place of “qualia” or “free will”, it could be “love” or “charity” or “interest in death metal”, and it’s still not saying anything more profound than, “I don’t think machines are as good as real people, so there!”
Or more precisely: “When I think of people with X it makes me feel something special that I don’t feel when I think of machines with X, therefore there must be some special quality that separates people from machines, making machine X ‘just a simulation’.” This is the root of all these Searle-ian arguments, and they are trivially dissolved by understanding that the special feeling people get when they think of X is also a property of how brains work.
Specifically, the thing that drives these arguments is our inbuilt machinery that classifies things as mind-having or not-mind-having, for purposes of prediction-making. But the feeling that we get that a thing is mind-having or not-mind-having is based on what was useful evolutionarily, not on what the actual truth is. Searlian (Surly?) arguments are thus in exactly the same camp as any other faith-based argument: elevating one’s feelings to Truth, irrespective of the evidence against them.
Just a nit pick: the argument Aaron presented wasn’t an argument for the existence of qualia, and so taking the existence of qualia as a premise doesn’t beg the question. Aaron’s argument was an argument against artificial consciousness.
Also, I think Aaron’s presentation of (3) was a bit unclear, but it’s not so bad a premise as you think. (3) says that since qualia are not reducible to purely physical descriptions, and since a brain-simulating turing-machine is entirely reducible to purely physical descriptions, brain-simulating turing-machines won’t experience qualia. So if we have qualia, and count as conscious in virtue of having qualia (1), then brain-simulating turing machines won’t count as conscious. If we don’t have qualia, i.e. if all our mental states are reducible to purely physical descriptions, then the argument is unsound because premise (1) is false.
You’re right that you can plug many a term in to replace ‘qualia’, so long as those things are not reducible to purely physical descriptions. So you couldn’t plug in, say, heart-attacks.
Could you explain this a bit more? I don’t see how it’s relevant to the argument. Searle is not arguing on the basis of any special feelings. This seems like a straw man to me, at the moment, but I may not be appreciating the flaws in Searle’s argument.
In order for the argument to make any sense, you have to buy into several assumptions which basically are the argument. It’s “qualia are special because they’re special, QED”. I thought about calling it circular reasoning, except that it seems closer to begging the question. If you have a better way to put it, by all means share.
When I said that our mind detection circuitry was the root of the argument, I didn’t mean that Searle was overtly arguing on the basis of his feelings. What I’m saying is, the only evidence for Searle-type premises are the feelings created by our mind-detection circuitry. If you assume these feelings mean something, then Searle-ish arguments will seem correct, and Searle-ish premises will seem obvious beyond question.
However, if you truly grok the mind-projection fallacy, then Searle-type premises are just as obviously nonsensical, and there’s no reason to pay any attention to the arguments built on top of them. Even as basic a tool as Rationalist Taboo suffices to debunk the premises before the argument can get off the ground.
Any valid argument has a conclusion that is entailed by its premises taken jointly. Circularity is when the whole conclusion is entailed by one premise, with the others being window-dressing.
I think there is a way that ripe tomatoes seem visually: how is that mind-projection?
But … if you’re assuming that qualia are “not reducible to purely physical descriptions”, and you need qualia to be conscious, then obviously brain-simulations won’t be conscious. But those assumptions seem to be the bulk of the position he’s defending, aren’t they?
Right, the argument comes down, for most of us, to the first premise: do we or do we not have mental states irreducible to purely physical conditions. Aaron didn’t present an argument for that, he just presented Searle’s argument against AI from that. But you’re right to ask for a defense of that premise, since it’s the crucial one and it’s (at the moment) undefended here.
Presenting an obvious result of a nonobvious premise as if it was a nonobvious conclusion seems suspicious, as if he’s trying to trick listeners into accepting his conclusion even when their priors differ.
[Edited for terminology.]
Not only suspicious, but impossible: if the premises are non-trivial, the conclusion is non-trivial.
In every argument, the conclusion follows straight away from the premises. If you accept the premises, and the argument is valid, then you must accept the conclusion. The conclusion does not need any further support.
Y’know, you’re right. Trivial is not the right word at all.
To pick a further nit, the argument is more that qualia can’t be engineered into an AI. If an AI implementation has qualia at all, it would be serendipitous.
That’s a possibility, but not as I laid out the argument: if being conscious entails having qualia, and if qualia are all irreducible to purely physical descriptions, and every state of a Turing machine is reducible to a purely physical description, then Turing machines can’t simulate consciousness. That’s not very neat, but I do believe it’s valid. Your alternative is plausible, but it requires my ‘Turing machines are reducible to purely physical descriptions’ premise to be false.
Huh? This isn’t an argument for the existence of qualia—it’s an attempt to figure out whether you believe in qualia or not. So I take it you disagree with step one, that qualia exists? Do you think you are a philosophical zombie?
I do think essentially the same argument goes through for free will, so I don’t find your reductio at all convincing. There’s no reason, however, to believe that “love” or “charity” is a basic fact of physics, since it’s fairly obvious how to reduce these. Do you think you can reduce qualia?
I don’t understand why you think this is a claim about my feelings.
Suppose that neuroscientists some day show that the quale of seeing red matches a certain brain structure or a neuron firing pattern or a neuro-chemical process in all humans. Would you then say that the quale of red has been reduced?
Of course not!
and why not?
Because the neuron firing pattern is presumably the cause of the quale, it’s certainly not the quale itself.
I don’t understand what else is there.
Imagine a flashlight with a red piece of cellophane over it pointed at a wall. Scientists some day discover that the red dot on the wall is caused by the flashlight—it appears each and every time the flashlight fires and only when the flashlight is firing. However, the red dot on the wall is certainly not the same as the flashlight: one is a flashlight and one is a red dot.
The red dot, on the other hand, could be reduced to some sort of interaction between certain frequencies of light-waves and wall-atoms and so on. But it will certainly not get reduced to flashlights.
By the same token, you are not going to reduce the-subjective-experience-of-seeing-red to neurons; subjective experiences aren’t made out of neurons any more than red dots are made of flashlights.
Ok, that’s where we disagree. To me the subjective experience is the process in my brain and nothing else.
There’s no argument there. Your point about qualia is illustrated by your point about flashlights, but not entailed by it.
How do you know this?
There’s no certainty either way.
Reduction is an explanatory process: a mere observed correlation does not qualify.
I think that anyone talking seriously about “qualia” is confused, in the same way that anyone talking seriously about “free will” is.
That is, they’re words people use to describe experiences as if they were objects or capabilities. Free will isn’t something you have, it’s something you feel. Same for “qualia”.
Dissolving free will is considered an entry-level philosophical exercise for Lesswrong. If you haven’t covered that much of the sequences homework, it’s unlikely that you’ll find this discussion especially enlightening.
(More to the point, you’re doing the rough equivalent of bugging people on a newsgroup about a question that is answered in the FAQ or an RTFM.)
This is probably a good answer to that question.
Because (as with free will) the only evidence anyone has (or can have) for the concept of qualia is their own intuitive feeling that they have some.
So you say. It is not standardly defined that way.
Qualia are defined as feelings, sensations, etc. Since we have feelings, sensations, etc., we have qualia. I do not see the confusion in using the word “qualia”.
Well, would that mean writing a series like this?
My intuition certainly says that Martha has a feeling of ineffable learning. Do you at least agree that this proves the unreliability of our intuitions here?
Who said anything about our intuitions (except you, of course)?
You keep making statements like, “the neuron firing pattern is presumably the cause of the quale, it’s certainly not the quale itself.”
And you seem to consider this self-evident. Well, it seemed self-evident to me that Martha’s physical reaction would ‘be’ a quale. So where do we go from there?
(Suppose your neurons reacted all the time the way they do now when you see orange light, except that they couldn’t connect it to anything else—no similarities, no differences, no links of any kind. Would you see anything?)
I guess you need to do some more thinking to straighten out your views on qualia.
Goodnight, Aaron Swartz.
downvoted posthumously.
Let’s back up for a second:
You’ve heard of functionalism, right? You’ve browsed the SEP entry?
Have you also read the mini-sequence I linked? In the grandparent I said “physical reaction” instead of “functional”, which seems like a mistake on my part, but I assumed you had some vague idea of where I’m coming from.
Or you do. You claim the truth of your claims is self-evident, yet it is not evident to, say, hairyfigment, or Eliezer, or me for that matter.
If I may ask, have you always held this belief, or do you recall being persuaded of it at some point? If so, what convinced you?
Could you expand on this point, please? It is generally agreed* that “free will vs determinism” is a dilemma that we dissolved long ago. I can’t see what else you could mean by this, so …
[*EDIT: here, that is]
I guess it really depends on what you mean by free will. If by free will, pjeby meant some kind of qualitative experience, then it strikes me that what he means by it is just a form of qualia and so of course the qualia argument goes through. If he means by it something more complicated, then I don’t see how point one holds (we experience it), and the argument obviously doesn’t go through.
But that’s not contentious. Qualia are things like the appearance of tomatoes or the taste of lemon. I’ve seen tomatoes and tasted lemons.
But Searle says that feelings, understanding, etc. are properties of how the brain works. What he argues against is the claim that they are computational properties. But it is also uncontentious that physicalism can be true and computationalism false.
It isn’t even clear to Searle that qualia are physically basic. He thinks consciousness is a high-level outcome of the brain’s concrete causal powers. His objection to computational approaches is rooted in the abstract nature of computation, not in the physical basicness of qualia. (In fact, he doesn’t use the word “qualia”, although he often seems to be talking about the same thing).