I have not been able to imagine a pair of (painting+context with a subject)s which have two completely different subjects but are almost identical in their coordinate-positioning of quarks.
Well, wouldn’t a painting of the Mona Lisa, and a computer screen depicting said painting, have very different quarks and quark patterns, while two computer screens depicting completely different subjects would be much more similar to each other? This is what I was trying to get at.
The two computer screens depicting completely different subjects have almost everything in common, in that they are of the same material. However, where they differ—namely, the color of each pixel—is where all the information about the painting is contained. So the screens have enough different information (at the quark level) to distinguish what the paintings are about.
So I don’t think you are getting at why “about-ness” isn’t related to the quarks of the painting. I think a better example is a stick figure. A child’s stick figure can be anybody. What the painting is about is in her head, or your head, or in the head of anyone thinking about what the painting is about.
So it’s not in the quarks of the painting at all. “About-ness” is in the quarks of the thoughts of the person looking at the painting, right? (And according to reductionism, completely determined by the quarks in the painting, the quarks of the observer, and the quarks of their mutual environment.)
Above, you wrote:
there’s also nothing legitimate on the level of quarks [of the painting] that could be used to differentiate between a painting that has a subject and a painting that is just random blobs
Thus I agree with this statement as it is written, because I think the difference in the subjects of the paintings is found instead in the thoughts of the beholder. Would you agree that there is a legitimate difference at the level of quarks between the thought that a painting has a subject and the thought that a painting is just random blobs?
The two computer screens depicting completely different subjects have almost everything in common, in that they are of the same material. However, where they differ—namely, the color of each pixel—is where all the information about the painting is contained. So the screens have enough different information (at the quark level) to distinguish what the paintings are about.
But the two screens with two different subjects are probably more similar than a screen and a painting with the same subject, in terms of coordinates of quarks. Additionally, it’s not clear to me that there’s a one-to-one correspondence between color and quarks. Even establishing a correspondence between color and chemical makeup is extremely difficult, because of how natural selection shaped the way we see color (I remember Dennett having a cool chapter on this in CE.)
I don’t want to make our disagreement sound more stark than it actually is. I agree that the about-ness is in the mind of the beholder, and the stick figure is a good example as well… but I think this just emphasizes my point. Let me put it this way: given the data for the point-coordinates of the three entities, could a mind choose which one had which subject? No, even though the criteria are buried abstrusely somewhere in there. The point is that the models are inextricably separate in the imagination, and it’s therefore not clear to me why it’s a priori logically necessary that they all collapse into the same territory (though I agree that they do, ultimately).
Maybe I’ve misunderstood you and you’re not talking about what “about” means. Are you talking about how it seems impossible that we can decode the quarks into our perception of reality? And thus that while you agree everything is quarks, there’s some intermediate scale helping us interpret that would be better identified as ‘fundamental’? (If I’m wrong, just downvote once and I’ll delete this; I don’t want to make this thread more confusing.)
Haha if I just downvoted it, then I wouldn’t be able to explain what I do mean.
I’m simply attempting to disagree with the logical necessity of reductionism. I said this earlier; I thought it was pretty clear:
My contention is that it’s possible to reduce the levels, but not logically necessary—and I support this contention with the fact that we don’t necessarily collapse the levels in our reasoning, and we can’t collapse the levels in our imagination.
So, the fact that a painting has a subject is a good example of this: I can’t imagine the specific differences between a) the quark-configuration that would lead to me believing it’s “about a subject”, versus b) the quark-configuration that would lead to me believing it’s just a blob. I can believe that quarks are ultimately responsible, but I’m not obligated to do so by a priori logical necessity.
So I’m not contending anything about what the most fundamental level is. I’m just saying that non-reductionism isn’t inconceivable.
I can believe that quarks are ultimately responsible, but I’m not obligated to do so by a priori logical necessity.
This is a slippery concept. With some tiny probability anything is possible, even that 2+2=3. When philosophers argue about what is logically possible and what isn’t, they implicitly apply an anthropomorphic threshold. Think of that picture with almost-the-same atoms but a completely different message.
The extent to which something is a priori impossible is also probabilistic. You say “impossible”, but mean “overwhelmingly improbable”. Of course it’s technically possible that the territory will play a supernatural game and support a fundamental object behaving according to a high-level concept in your mind. But this is improbable to the point of being impossible, a priori, with no need for further experiments to drive the certainty to absolute.
Of course it’s technically possible that the territory will play a supernatural game and support a fundamental object behaving according to a high-level concept in your mind. But this is improbable to the point of being impossible, a priori, with no need for further experiments to drive the certainty to absolute.
Not quite sure what you’re saying here. If you’re saying:
1) “Entities in the map will not magically jump into the territory,” then I never disagreed with this. What I disagreed with is your labeling certain things as obviously in the map and others obviously in the territory. We can use whatever labels you like: I still don’t know why irreducible entities in the territory are “incredibly improbable prior to any empirical evidence.”
2) “The territory can’t support irreducible entities,” then you still haven’t explained why this is “incredibly improbable prior to any empirical evidence.”
I can believe that quarks are ultimately responsible, but I’m not obligated to do so by a priori logical necessity.
I feel that someone should point out how difficult this discussion might be in light of the overwhelming empirical evidence for reductionism. Non-reductionist theories tend to get… reduced. In other words, reductionism’s logical status is a fairly fine distinction in practice.
That said, I wonder if the claim can’t be near-equivalently rephrased “it’s impossible to imagine a non-reductionist scenario without populating it with your own arbitrary fictions”. Your use of the term “conceivable” seems to mean (or include) something like “choose an arbitrary state space of possible worlds and an observation relation over that space”. Clearly anything goes.
You’re simply expanding your definition of “everything” to include arbitrary chunks of state space you bolted on, some of which are underdetermined by their interactions with every previous part of “everything”. I don’t have a fully fleshed-out logical theory of everything on hand, so I’ll give you the benefit of the doubt that what you’re saying isn’t logically invalid. Either way, it’s pointless. If there’s no link between levels, there’s no way to distinguish between states in the extended space except by some additional a priori process. Good luck acquiring or communicating evidence for such processes.
That said, I wonder if the claim can’t be near-equivalently rephrased “it’s impossible to imagine a non-reductionist scenario without populating it with your own arbitrary fictions”.
Ah, that’s very interesting. Now we’re getting somewhere.
I don’t think it has to be arbitrary. Couldn’t the following scenario be the case?:
The universe is full of entities that experiments show to be reducible to fundamental elements with laws (say, quarks), or entities that induction + parsimony tells us ought to be reducible to fundamental elements (since these entities are made of quarks, we just haven’t quite figured out the reduction of their emergent properties yet)… BUT there is one exception in this universe, a certain type of stuff whose behavior is quantifiable, yet not reducible to quarks. In fact, we have no reason to believe this certain type of stuff is even made of the fundamental stuff everything else seems to be. Every experiment would defy reducing this entity to quarks, to the point that it would actually be against Occam’s Razor to try to reduce it to quarks! It would be a type of dualism, I suppose. It’s not a priori logically excluded, and it’s not arbitrary.
I think we might separate the ideas that there’s only one type of particle and that the world is reductionist. It is an open question whether everything can be reduced to a single fundamental thing (like strings), and it wouldn’t be a logical impossibility to discover that there were two or three kinds of things interacting. (Or would it?)
Reductionism, as I understand it, is the idea that the higher levels are completely explained by (are completely determined by) the lower levels. Any fundamentally new type of particle found would just be added to what we consider “lower level”.
So what does it say about the world that it is reductionist? I propose the following two things are being asserted:
(1) There’s no rule that operates at an intermediate level that doesn’t also operate on the lower levels. This means that you can’t start adding new rules when a certain level of organization is reached. For example, if you have a law that objects with mass behave a certain way, you can’t apply it to everything that has mass but refuse to apply it to quarks, which also have mass. This is a consistency rule.
(2) Any rule that applies to an intermediate level is reducible to rules that can be expressed with and applied at the lower level. For example, we have the rule that two competing organisms cannot coexist in the same niche. Even though it would be very difficult to demonstrate, a reductionist worldview argues that in principle this rule can be derived from the rules we already apply to quarks.
When people argue about reductionism, they are usually arguing about (2). They have some idea that at a certain level of organization, new rules can come into play that simply aren’t expressible in the lower levels—they’re totally new rules.
Here’s a thought experiment about an apple that helped me sort through these ideas:
Suppose that I have two objects, one in my right hand and one in my left hand. The one in my left hand is an apple. The one in my right hand has exactly the same quarks in exactly the same states. But somehow, for some reason, they’re different. This implies that there is some degree of freedom between the lower level and the higher level. Now it follows that this free state is settled in some way, so as to give an apple in my left hand and a non-apple in my right: either by some kind of rule, or randomly, or both. In any case, we would observe this rule. Call it X. So the higher level, the object being an apple or non-apple, depends upon the lower levels and X.
(a) Was X there all along? If so, X is part of the lower level and we just discovered it; we need to add it to our lower-level theory.
(b) What if X wasn’t “there” all along? What if for some reason, X only applies at intermediate levels? …either because
(i) X is inconsistently applied or because
(ii) X is not describable as a function of lower level terms
Case (a) doesn’t assert anything about the universe; it just illustrates a confusion that can result from not understanding what “lower level” means. I don’t think (b) in either part is logically impossible, because you can run a simulation with these rules (a toy sketch of such a simulation follows below).
Until you require (and obviously you want to) that the universe is a closed system. Then I don’t think you can have b(i) or b(ii). A rule (Rule 1) that is inconsistently applied, as in b(i), requires another rule (Rule 2) determining when to apply it; Rule 1 being inconsistent within a system means that Rule 2 is outside the system. If a phenomenon cannot be described by the states of the system (the lower level), as in b(ii), then it depends on something else outside the system. So I think I’ve deduced that the logical impossibility of non-reductionism depends upon the universe being a closed system.
If the physical universe isn’t closed—if we allow the metaphysical—then non-reductionism is not logically impossible.
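To make the simulation remark above concrete, here is a minimal toy sketch (in Python, with entirely made-up dynamics and names; not a claim about physics). The base rule operates on individual cells, but an extra rule X fires only once an intermediate-level structure exists, as in case (b):

```python
# Toy world: a row of cells, each 0 or 1 (the "lower level").
# Base rule: every step, each cell copies its left neighbour (a dull local law).
# Extra rule X: only when a run of five or more 1s exists (an intermediate-level
# structure), the whole run is wiped out. X is stated in terms of a high-level
# pattern, not in terms of any single cell.

def step(cells):
    # Lower-level dynamics: shift-copy from the left, wrapping around.
    new = [cells[i - 1] for i in range(len(cells))]

    # Rule X: detect the high-level structure "a run of >= 5 ones" and act on it.
    run_start, run_len = 0, 0
    for i, c in enumerate(new + [0]):       # trailing 0 acts as a sentinel
        if c == 1:
            if run_len == 0:
                run_start = i
            run_len += 1
        else:
            if run_len >= 5:                # X fires only at this level of organization
                for j in range(run_start, run_start + run_len):
                    new[j] = 0
            run_len = 0
    return new

world = [0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
for _ in range(3):
    world = step(world)
    print(world)
```

Run as written, the extra clause is just more code; but if you insist on describing the world only by the per-cell copy rule, X shows up as an intervention from outside that description. Whether this counts as b(i) or b(ii) turns on whether you are willing to count the pattern-detection step as just another (ugly) function of the lower-level states, which is exactly the closed-system point above.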
Where does randomness come in? Is the universe necessarily deterministic because b(ii) is impossible, so that the higher levels must depend deterministically on the lower levels? (I’m talking about whether a truly stochastic component is possible in Brownian motion or the creation of particles in a vacuum, etc.)
Another thing to think about is how these ideas affect our theories about gravity. We have no direct evidence that gravity satisfies consistency or that it is expressible in terms of lowest-level physics. Does anyone know whether any well-considered theories of gravity have ever been proposed that don’t satisfy these rules?
Reductionism, as I understand it, is the idea that the higher levels are completely explained by (are completely determined by) the lower levels. Any fundamentally new type of particle found would just be added to what we consider “lower level”.
Oh! Certainly. But this doesn’t seem to exclude “mind”, or some element thereof, from being irreducible—which is what Eliezer was trying to argue, right? He’s trying to support reductionism, and this seems to include an attack on “fundamentally mental” entities. Based on what you’re saying, though, there could be a fundamental type of particle, called “feelions” or “qualions”—the entities responsible for what we call “mind”—which would not reduce to quarks, and would therefore deserve to be called their own fundamental type of “stuff.” It’s a bit weird to me to call this a reductionist theory, and it’s certainly not a reductionist materialist theory.
Everything else you said seems to me right on—the idea of emergent properties that are irreducible to their constituents in principle seems somewhat incoherent to me.
it’s certainly not a reductionist materialist theory
In what way would these “feelions” or “qualions” not be material? Your answer to this question may reveal some interesting hidden assumptions.
It’s a bit weird to me to call this a reductionist theory
Are you sure it’s weird because it’s not reductionist? Or because such a theory would never be seen outside of a metaphysical theory? So you automatically link the idea that minds are special because they have “qualions” with “metaphysical nonsense”.
But what if qualions really existed in a material way, and there were physical laws describing how they were caught and accumulated by neural cells? There’s absolutely no evidence for such a theory, so it’s crazy, but it’s not logically impossible or inconsistent with reductionism, right?
what if qualions really existed in a material way, and there were physical laws describing how they were caught and accumulated by neural cells? There’s absolutely no evidence for such a theory, so it’s crazy, but it’s not logically impossible or inconsistent with reductionism, right?
Hmm… excellent point. Here I do think it begins to get fuzzy… what if these qualions fundamentally did stuff that we typically attribute to higher-level functions, such as making decisions? Could there be a “self” qualion? Could their behavior be indeterministic in the sense that we naively attribute to humans? What if there were one qualion per person, which determined everything about their consciousness and personality irreducibly? I feel that, if these sorts of things were the case, we would no longer be within the realm of a “material” theory. It seems that Eliezer would agree:
By far the best definition I’ve ever heard of the supernatural is Richard Carrier’s: A “supernatural” explanation appeals to ontologically basic mental things, mental entities that cannot be reduced to nonmental entities.

This is the difference, for example, between saying that water rolls downhill because it wants to be lower, and setting forth differential equations that claim to describe only motions, not desires. It’s the difference between saying that a tree puts forth leaves because of a tree spirit, versus examining plant biochemistry. Cognitive science takes the fight against supernaturalism into the realm of the mind.

Why is this an excellent definition of the supernatural? I refer you to Richard Carrier for the full argument. But consider: Suppose that you discover what seems to be a spirit, inhabiting a tree: a dryad who can materialize outside or inside the tree, who speaks in English about the need to protect her tree, et cetera. And then suppose that we turn a microscope on this tree spirit, and she turns out to be made of parts—not inherently spiritual and ineffable parts, like fabric of desireness and cloth of belief; but rather the same sort of parts as quarks and electrons, parts whose behavior is defined in motions rather than minds. Wouldn’t the dryad immediately be demoted to the dull catalogue of common things?
Based on his post eventually insisting on the a priori incoherence of such possibilities (we look inside the dryad and find out she’s not made of dull quarks), I inferred that he thought fundamentally mental things, too, are excluded a priori. You now seem to disagree, as I do. Is that right?
Where things seem to get fuzzy is where things seem to go wrong. Nevertheless, forging ahead...
fundamentally mental things
If they are being called “fundamentally mental” because they interact by one set of rules with things that are mental and a different set of rules with things that are not mental, then it’s not consistent with a reductionist worldview (and it’s also confused because you’re not getting at how mental is different from non-mental). However, if they are being called fundamentally mental because they happen to be mechanistically involved in mental mechanisms, but still interact with all quarks in one consistent way everywhere, it’s logically possible.
Also, you asked if these qualions could be indeterministic. It doesn’t matter that you are asking this about a hypothesized new particle; the question is whether indeterminism is possible in a closed system at all. If so, we could just as well postulate quarks as a source of indeterminism.
If they are being called “fundamentally mental” because they interact by one set of rules with things that are mental and a different set of rules with things that are not mental, then it’s not consistent with a reductionist worldview...
Is it therefore a priori logically incoherent? That’s what I’m trying to understand. Would you exclude a “Cartesian theatre” fundamental particle a priori?
(and it’s also confused because you’re not getting at how mental is different from non-mental). However, if they are being called fundamentally mental because they happen to be mechanistically involved in mental mechanisms, but still interact with all quarks in one consistent way everywhere, it’s logically possible.
What do you mean by mechanical? I think we’re both resting on some hidden assumption about dividing the mental from the physical/mechanical. I think you’re right that it’s hard to articulate, but this makes Eliezer’s original argument even more confusing. Could you clarify whether or not you’re agreeing with his argument?
If they are being called “fundamentally mental” because they interact by one set of rules with things that are mental and a different set of rules with things that are not mental, then it’s not consistent with a reductionist worldview...
I deduce that the above case would be inconsistent with reductionism. And I think that it is logically incoherent, because I think non-reductionism is logically incoherent, because I think that reductionism is equivalent to the idea of a closed universe, which I think is logically necessary. You may disagree with any step in this chain of reasoning.
What do you mean by mechanical?
I think you guessed: I meant that there is no division between the mental and physical/mechanical. Believing that a division is a priori possible is definitely non-reductionist. If that is what Eliezer is saying, then I agree with him.
To summarize, my argument is:
[logic --> closed universe --> reductionism --> no division between the mental and the physical/mechanical]
Could you explain? If I were presented with a data sheet full of numbers, and told “these are the point coordinates of the fundamental building blocks of three entities. Please tell me what these entities are, and if applicable, what they are about” I would be unable to do so. Would you?
Given a computer that can handle the representation and convert it into a form acceptable to the interface of your mind, this data can be converted into a high-level description. The data determines its high-level properties even if you are unable to extract them, just as a given number determines which prime factors it has even if you are unable to factor it.
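The prime-factor analogy can be made concrete with a small sketch (plain Python, purely illustrative): the value of the number fully determines its factors, and the code below does nothing but extract information the number already fixes, at a computational cost.

```python
def prime_factors(n):
    """Return the prime factorization of n by trial division.

    The factors are completely determined by n itself; this function
    only extracts information that the number already fixes.
    """
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)   # whatever remains is itself prime
    return factors

print(prime_factors(1_234_567_891))   # the decimal digits don't display the
                                      # factors, but they determine them completely
```

For a large enough n the extraction becomes practically infeasible, which seems to be the analogue of being handed raw quark coordinates: the high-level description is determined by the data even when nobody can read it off.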
I happen to agree. However, the claim of reductionism is that what you’ve described is the case for ALL entities. I’m trying to figure out why this claim is logically necessary, and why any disagreement with it is a confusion.
The claim is about the absence of high-level concepts in the territory. These appear only in the mind, as computational abstractions used in processing low-level data. The logical incoherence comes from the disagreement between the definition of high-level concepts as classes of states of the territory, which their role in the mind’s algorithm entails, and the assumption that the very same concepts obey laws of physics. It’s virtually impossible for the convenience of computational abstraction to correspond exactly to the reality of physical laws, and even more impossible for this correspondence to persist. High-level concepts keep changing in our minds according to chance and choice, while fundamental laws are a given, not subservient to telepathic teleological necessity.
Edit: changed “classes of low-level concepts” to “classes of states of the territory”.
That was a bit confusing, and I have to go now, so I’ll try to give a more thorough response later. I’ll just say right now that I don’t think it’s as easy as you claim to differentiate between “higher-level” and “lower-level” entities/concepts/laws, or rather, to decide whether an entity is actually a fundamental thing with laws or whether it’s just a concept. You appeal to changeability, but this seems like unsteady ground.
EDIT: Here’s a better way of formulating my objection: tell me the obvious, a priori logically necessary criteria for a person to distinguish between “entities within the territory” and “high-level concepts.” If you can’t give any, then this is a big problem: you don’t know that the higher level entities aren’t within the territory. They could be within the territory, or they could be “computational abstractions.” Either position is logically tenable, so it makes no sense to say that this is where the logical incoherence comes in.
Thus I agree with this statement as it is written, because I think the difference in the subjects of the paintings is found instead in the thoughts of the beholder. Would you say that there is a legitimate difference between the thought that a painting has a subject and the thought that a painting is just random blobs?
But surely there’s something in the painting that is causing the observer to have different thoughts for different subjects. But that something in the painting is not anything discernible on the level of quarks. This is why I brought the example up, after all. It was in response to:
if the boring old normal model is correct, your brain is made of quarks, and so your brain will only be able to envision and concretely predict things that can be predicted by quarks.
I believe (I could be wrong, since I started this thread asking for a clarification) that the implication of this statement (derived from the context) was that “brains made of quarks can’t think about things as if they’re irreducibly not made of quarks.”
First of all, saying “brains made of quarks can’t think [blank] because quarks themselves aren’t [blank]” seems to me equivalent to saying that paintings can’t be about something because quarks can’t be about something. It’s confusing the abilities and properties of one level with those of another. I know this is a stretch, but be generous, because I think the parallelism is important.
Second of all, we think about things as if they’re not quarks all the time. We can “predict” or “envision” the subject of the painting without thinking about the quark coordinates at all (and such coordinates would not help us envision or predict anything to do with the subject).
So I clearly need some help understanding what Eliezer actually meant. I find no reason to believe that brains made of quarks can’t think about things as if they’re not made of quarks. (Or rather, Eliezer only seems to allow this if it’s a “confusion.” I don’t understand what he means by this.)