I can’t accept your philosophical diagnosis without knowing more about how it was obtained. What did you do, email her the three paragraphs and ask what she thought? Your comment history shows there’s a long backstory here, that you are an enthusiastic believer in a complex of ideas including Bayesianism, uploading, and immortalism, and there’s a persistent clash about this with people who are close to you. You call your mother a philosophical “non-realist”, she says you’re a control freak who doesn’t want to die… Clearly we need a new Balzac (is it Houellebecq?) to write about this 21st-century generation gap, in which the children of post-Christian agnostics grow up to be ideologically aggressive posthuman rationalists. It sounds wonderfully dialectical and ironic: your mother’s intellectual permissiveness probably gave you the space in which to develop your rationality, and yet your rationality now turns you against her radical open-mindedness or principle of not believing anything. Extreme agnosticism is not the same as “non-realism”, and she probably rejects your “tautologies” because they seem to come packaged with a lot of other stuff that she wants to reject.
“Extreme agnosticism” sounds mostly accurate. She will doubt as a matter of principle, but she won’t put a probability on that doubt. As for why I believed what I wrote here…
We talked. A lot. It spanned multiple conversations over several months, if not more than a year. First, I tried to talk about transhumanist things, like mind uploading. She found it impossible-sounding, scary, horrible, and sad. We talked about the potential power of science. She seems to think that science isn’t omnipotent (true enough), and that some specific things, like an understanding of the human soul, are definitely out of reach. But I don’t recall her ever sticking her neck out and flatly saying that there’s no way science could ever unravel the mysteries of our minds, even in principle. (I personally have some doubt, because of the self-reference involved, but I don’t think these difficulties would prevent us from understanding enough low-level mechanisms to effectively emulate a brain.)
We moved on to more basic things, like reductionism. She often “accuses” me of wanting to control everything with math. So I tried to assert that our world is math all the way down, even if it’s far too complicated for us to actually use accurate math. But she doesn’t seem to bridge the gap between the laws of physics and a full human brain. She seems to assert that there is something there that is by nature incomprehensible. But when I call that “magic”, she rejects the term.
At some point, I wrote this (French or English depending on your browser settings). I don’t think very highly of it, but I thought it would at least serve my point: to stop being called intolerant just because I take the logical step from believing something to asserting that someone who doesn’t believe it is mistaken. (Modulo tiny uncertainties.) It didn’t work at all. She just found it juvenile and beside the point, and by the way, the colour of my socks and the existence of God are not the same thing and should not be reasoned about in the same way. My informal formulation of Aumann’s agreement theorem also fell on deaf ears.
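(For reference, the result I was waving at, stated loosely rather than in full rigor:)

```latex
% Aumann (1976), stated loosely: if two agents share a common prior, and their
% posterior probabilities for an event E are common knowledge between them,
% then those posteriors must be equal.
\text{common knowledge of } \big( P(E \mid \mathcal{I}_1) = q_1 \ \text{and}\ P(E \mid \mathcal{I}_2) = q_2 \big)
\;\Longrightarrow\; q_1 = q_2
% where \mathcal{I}_1, \mathcal{I}_2 are the two agents' private information.
```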
We had some more fruitless debates, where I believe she doesn’t understand me, and where she believes she understands me perfectly, but that I cannot perceive her arguments, the same way humans can’t perceive ultraviolet, which is why I reject them, the way some ignorant fool would say “there’s no such thing as invisible light”. This feels very close to saying that I lack some brain circuitry, though I don’t think she would actually say that if I asked. But I do feel like I’m talking to a mystic who claims to have higher perceptions, which I should not call hallucinations just because I lack them. (Sounds like Freudian psychoanalysis: if you don’t believe it, something is wrong with you.) Of course, her higher status (being my mother and older than me) doesn’t help. Heck, she even said she used to think like me, but got past that. So I’m clearly immature. I suspect she hopes I will understand her when I get older.
So I ended up writing this (French only for now). She hasn’t read it so far, but I told her about the first paragraphs (which I roughly translated in my post here). Then she told me there’s something wrong with this. (But again, she won’t outright contradict me, and say there isn’t a world out there.)
By the way, she thinks I believe all that stuff for some “deep reason”, which I take to mean “something unrelated to the actual accuracy of such beliefs”. She thinks I have some deep fear inside that makes me cling to that. (No kidding: I see small hopes of making a paradise out of our world, and I would give up on them? OK, if it’s impossible after all, let us enjoy our short lives. But if there is a possibility, then missing it is unforgivable.) Strangest of all, she sees a contradiction between my humanistic, left-wing, environmentalist ideas, and my consequentialist, positivist, transhumanist ideas.
Now, to her credit, I must note that she has probably changed her mind about longevity: setting aside problems like world population, she wouldn’t be against doubling our life expectancy, or more. Living forever still seems too much, but a couple of centuries seems like a good idea. (Death has become more present to her lately: her aunt, whom she loves dearly, is starting to have health problems that may prove serious in the coming years, if not months.)
One way to test your mother’s attitude to science, explanation, and so on, would be to see what she thinks of theories of the mind which sound like nonreductionistic quantum mysticism to you. What would she think of the theory that qualia are in the quantum-gravity transitions of microtubules, and the soul is a Bose-Einstein condensate in the brain? I predict that she would find that sort of theory much more agreeable and plausible. I think she’s not hostile to reality or to understanding; she’s hostile to reductionism that falsifies subjective reality.
People here and elsewhere believe in ordinary reductionist materialism because they think they have to—because they think it is a necessary implication of the scientifically examined world—not because that outlook actually makes sense. For someone who truly believes in an atomistic physical universe, the natural belief is dualism: matter is made of atoms, mind is some other sort of thing. It’s only the belief in the causal closure and causal self-sufficiency of atomistic physics that leads people to come up with all the variations of mental materialism: eliminativism, epiphenomenalism, various “identity theories” such as functionalism. A lot of these so-called materialisms are actually dualisms, but they are property dualism rather than substance dualism: the mind is the brain, but it has properties like “being in a certain state of consciousness”, which are distinct from, yet somehow correlated with, properties like “being made of atoms arranged in a certain way”.
I regard this situation as temporary and contingent. It’s the consequence of the limitations of our current science and currently available concepts. I fully expect that new data from biology, new perspectives in physics, and a revival of rigorous studies of subjectivity like transcendental phenomenology are eventually going to give us a physically monistic account of what the self is, in which consciousness as it is subjectively experienced is regarded as the primary ontological reality of self-states, and the traditional physical description as just an abstracted account, a mathematical black box which does not concern itself with intrinsic properties, only an abstracted causal model. But abstracted causal models are the whole of natural-scientific ontology at the present time, and materialists try to believe that that is the fundamental nature of reality, and the aspect of reality which we experience more or less directly in subjectivity, is some sort of alien overlay.
The folk opposition to reductionist materialism derives to a large degree from people who are in touch with the nature of subjective experience—even if they can’t express its nature with the rigor of a philosopher—and who perceive—again, more intuitively than rigorously—how much of reality is missing from a strictly “mathematical” or “naturalistic” ontology. In rejecting reductionism, they are getting something right, compared to the brash advocates of materialist triumphalism, who think there’s no problem in saying “I’m just a program, and reality is just atoms”.
I know it must sound scandalous or bizarre to hear such sentiments on Less Wrong, but this really is the ultimate problem. The natural-scientific thinkers are trying to make models of the mind, but the intuitive skeptics are keeping them honest, and the situation will not be resolved by anything less than a new ontology, which will look in certain respects very “old” and retro, because it will reinstate into existence everything that was swept under the carpet of consciousness in order to construct the physical/computational paradigm of reality. It is very clear that people with a highly developed capacity for thinking abstractly are capable of blinding themselves to vast tracts of reality, in order to reify their abstractions and assert that these abstractions are the whole of reality. It is one particular form of belief projection to which “rationalists” are especially susceptible. And until the enormous task of perceiving and articulating the true ontology, and the way that it fits into science or that science fits into it, has been done, all that the enemies of premature reification can do is to make suggestive statements like this one, hoping that something will strike a chord and reawaken enough prescientific awareness in the listener for them to detach themselves a little from their constructs and “see” what the intuitives see.
Similarly, some part of the rejection of life extension through uploading comes from a rejection of the metaphysic implied. It looks like the uploader is denying reality. Life extension through rejuvenation is much more acceptable for this reason—though even there, the wisdom of the human race says that striving for literal immortality is unhealthy because it’s surely impossible, and it’s unhealthy to attempt impossibilities because it only sets you up for suffering when the inevitable comes. There are a bunch of other psychological issues here, about how much striving and how much uncertainty is rational, the value of life and the rationality of creating it, and so on, where I think transhumanism is often more in the right than tradition. But I will assert emphatically that the crude reductionisms we have available to us now are radically at odds with the facts of subjective experience, and so therefore they are wrong. It is better to revert to agnosticism about fundamental reality, if that is what it takes to retain awareness of subjectivity, rather than to reify mathematics and develop distorted ideas, so here I do side with your mother.
I upvoted your comment/house because I think it can be looted for valuables, but not because I think it’s sturdy enough to live in.
A lot of these so-called materialisms are actually dualisms, but they are property dualism rather than substance dualism: the mind is the brain, but it has properties like “being in a certain state of consciousness”, which are distinct from, yet somehow correlated with, properties like “being made of atoms arranged in a certain way”.
This is not true. The reductionist claim is that the arrangement of the atoms is entirely sufficient to produce consciousness, and not that there is consciousness and then the atoms. Until you shake this style of thought, you will never be able to see single-level-of-reality reductionism as anything more than a mutated form of dualism, which is not what it is.
But abstracted causal models are the whole of natural-scientific ontology at the present time, and materialists try to believe that that is the fundamental nature of reality,
No! Of course, if a more accurate map of reality is developed, the reductionists will say that “this is the closest we have to knowing the true base level of reality.” Only strawman-level reductionists will say “this is the most accurate map we have? Okay, that’s base reality.” It could be that the laws of physics do fit in 500 bits, or it could be that they’re just like onion layers and for whatever reason there is no bottom layer, or no one ever finds it. But it is not the case that reductionism claims that whatever we have so far figured out about how our subjectivity deludes us is The True Reality. It is, however, far better than just plucking answers from naive intuitions. We know where those came from, after all, and it wasn’t from a deep experimental study of reality.
and the aspect of reality which we experience more or less directly in subjectivity, is some sort of alien overlay.
Also not true! Why, if there was a direct one-to-one correspondence between subjective experience and reality, there would never be any surprising facts, and there would be no need to distinguish the map and the territory. In fact, I confess I have no idea what such a world would look like. What would it be like to be the universe? It is a wrong question, certainly. The subjective delusions arise from experiencing reality imperfectly, or else, once again, we would have already known about atoms and gluons and whatever-mathematics-are-really-down-there.
But I will assert emphatically that the crude reductionisms we have available to us now are radically at odds with the facts of subjective experience, and so therefore they are wrong.
And I will assert twice as emphatically that the reductionism we have available to us now, while incomplete (and knowing that it is so), is not at odds with subjective experience (they add up to normality, after all) and does more to explain the facts of subjective experience than any dualism, substance or otherwise.
You have said in the past that the computational theory of mind implies dualism. When I first saw this, I was outraged and indignant and did not wish to read any further. Later I discovered that you make much more sense than this initial impression led me to think, so I read more of your work, and yet I never found an argument that supported this claim. Do show me, if you’ve got one.
I will argue, however, that even if the computational theory of mind is wrong (as implying dualism would necessarily force it to be), this does not matter for transhumanist realism. For even if you could not copy the brain onto a computer, the brain obviously exists, so there is some way of creating brains. The brain can and will be understood well enough that new brains can be made, even if its substrate isn’t “computations”. (I admit, though, that I have no idea what else it might be doing that isn’t computable.)
Also, curse you for getting me to write in your style of incredibly long comments!
Edit: This comment was upvoted three seconds after I posted it. I don’t know how or why.
There were and are materialisms which explicitly talk about multiple levels of reality. Someone who believes that the brain is made of atoms but that consciousness is “strongly emergent” is still a materialist—at least compared to someone else who believes in a separate soul-substance.
But yes, mostly I am saying that a lot of materialism involves stealth dualism—the materialists are property dualists and don’t realize it.
One place you can see this, is when people talk about consciousness as “how it feels to be an X”, where X is something material (or computational). For example, X may be a certain arrangement of atoms in space. And how it feels to be X is… some detailed specific conjunction of sensations, thoughts, intentions, and so on, that adds up to a single complex experience.
Obviously we could make a 3D plot of where all those atoms are, and zoom around it and into it, view it from different angles, and we’ll still see nothing but a constellation of atoms in space. You won’t “see the experience from the inside” no matter how many such views you try.
“Single level of reality” implies that there is nothing more to those atoms than what can be seen in such a view. Yet the experience is supposed to be there, somewhere. I conclude that a conventional materialist theory of consciousness involves positing that the brain has properties (the “feels” or “qualia” that make up a conscious experience) in addition to the properties already stipulated by physics.
But abstracted causal models are the whole of natural-scientific ontology at the present time, and materialists try to believe that that is the fundamental nature of reality,
No! Of course, if a more accurate map of reality is developed, the reductionists will say that “this is the closest we have to knowing the true base level of reality.” Only strawman-level reductionists will say “this is the most accurate map we have? Okay, that’s base reality.”
You’ve missed my real point. Yes, a materialist is happy to say that their currently favored model is probably not the whole story. I’m saying that all the available models will suffer from the same deficit.
Consider the argument I just gave, about how the “feels” are nowhere to be seen in the atom plot, yet they are supposed to exist, yet only atoms are supposed to exist. This is a contradiction that will not be affected by adding new atoms or rearranging the old ones. All models of the world as atoms in interaction are “abstracted causal models”, the result of a centuries-long effort to understand the world without talking about so-called secondary properties, which have to be reintroduced once you want to explain consciousness itself. And it’s at that point that these subjective properties form an “overlay”—they have to be added to the physical base.
and the aspect of reality which we experience more or less directly in subjectivity, is some sort of alien overlay.
Also not true! Why, if there was a direct one-to-one correspondence between subjective experience and reality, there would never be any surprising facts, and there would be no need to distinguish the map and the territory.
There’s supposed to be a 1-to-1 correspondence between subjective experience and the physical reality of the part of the brain responsible for being the experience—not a 1-to-1 correspondence between subjective experience and the physical world external to the brain.
I hope it’s now clear that I’m not accusing materialists of identifying their models with reality at that level. It’s the identification of experiences themselves with physical parts of the brain where the problem lies, given the physical ontology we have. Obviously, if physics already posited the existence of entities that could be straightforwardly identified with elementary qualia, the situation would be rather different.
the reductionism we have available to us now, while incomplete [...] is not at odds with subjective experience (they add up to normality, after all)
Adding up to normality is a slogan and a (doomed) aspiration here. I believe 2+2=5, I know it sounds strange, but it’s OK because it adds up to normality! Except that normality is 4, not 5. Or in this case, “normality”, i.e. reality, is that experiences exist. Even if we were to take a virtual trip through an atom-plot of a brain, and we arrived somewhere and you pointed at a specific cluster of atoms and said, “There’s part of an experience! That cluster of atoms is one pixel of a visual sensation of red”, I’m still not going to see the redness (or even see the “seeing of redness”) no matter what angle I choose to view that cluster of atoms. If the redness is there, it is there in addition to all the properties that feature in the physics.
You have said in the past that the computational theory of mind implies dualism. [...] I never found an argument that supported this claim. Do show me, if you’ve got one.
Maybe it implies trialism. We end up with three levels here: the level of atoms (i.e. the fundamental physical level), the computational state machine which describes cognition and consciousness, and the experiences which we are supposed to be explaining.
A computational theory of consciousness says that a given state of consciousness just is a particular state in a particular state machine. The argument for dualism here is similar to the argument for dualism I gave for the arrangement of atoms, except that now we’re not dealing with just one arrangement of atoms, we’re dealing with an enormous equivalence class of such arrangements—all those arrangements which instantiate the relevant state machine. Pick any instance, any individual member of that equivalence class, and the previous argument applies: you won’t “see the experience from the inside”, no matter how you examine the physical configuration. The existence of the experience somewhere “in” the configuration implies extra properties beyond the basic physical ones like position and momentum.
Systematically associating conscious states with computational states will allow you to have a systematic property dualism, but it will still be property dualism.
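To make the “equivalence class” point concrete, here is a toy sketch—nothing in it is meant as a theory of consciousness, and every name in it is invented; it only illustrates that many micro-configurations can realize one abstract machine state, while the label of that state is nowhere to be found inside any single configuration:

```python
# Hypothetical micro-configurations: just tuples of (position, momentum) pairs.
config_a = ((0.0, 1.2), (3.1, -0.4), (5.9, 0.7))
config_b = ((0.1, 1.1), (3.0, -0.5), (6.0, 0.8))   # different atoms, different places
config_c = ((9.9, 0.0), (1.5, 2.2), (4.4, -1.3))   # wildly different again

def coarse_grain(config):
    """A made-up mapping from micro-configurations to abstract machine states."""
    total_momentum = sum(p for _, p in config)
    return "STATE_RED" if total_momentum > 0 else "STATE_BLUE"

# All three micro-configurations realize the same computational state...
assert {coarse_grain(c) for c in (config_a, config_b, config_c)} == {"STATE_RED"}

# ...but the micro-level description contains only positions and momenta.
# The label "STATE_RED" (let alone "an experience of red") is assigned by the
# coarse-graining map we chose; it is not a further property visible inside
# any individual tuple. That gap is what the property-dualism charge is about.
```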
Okay, I understand your position much better. Here’s why it is wrong:
Your argument about arranging atoms to make consciousness
also applies to arranging atoms to make apples.
You can look at this arrangement of atoms all you want, but you still won’t “see” the appleness unless you’re some sort of lifeform that has mechanisms that recognize apples easily, like humans.
Presumably consciousness is a lot more complicated than apples, and worse yet is how it isn’t a relatively durable object that humans can experience with all of their senses (indeed, none of the classical ones). So it intuitively feels like it’s different, but that doesn’t make it so.
I will see some aspects of the apple but not others. I will see its shape, because you can make shapes by arranging atoms in space, but I won’t see its color. Then there are attributes like the fact that it grew on a tree, which I will be able to “see” if the atom-plot extends that far in space and time.
Before we go any further, I would like to know if this “counterargument by apple” is something you thought up by yourself, or if you got it from somewhere. I have an interest in knowing how these defensive memes spread.
ETA: I will try to write a little more in the way of rebuttal. But first, I will allow myself one complaint, that I have made before: arguments like this should not even be necessary. It should be obvious that, e.g., if you had a universe consisting of an arrangement of particles in space whose only properties are their relative positions, that nothing in that universe has a color. The property of being colored just does not exist there. And so, if you want to maintain that conscious mental states exist in such a universe, and that they include the experience of color, you are going to have to introduce color as an additional property somehow—a property that exists somewhere inside the assemblages of particles that are supposed to be the experiences.
So what of the attempt to rebut this with “appleness”, as a reductio ad absurdum? Well, we can start by distinguishing between the apple that exists in the external world, the experience of the apple, and the concept of an apple. Before atomism, before neuroscience, human beings are supposedly naive realists who think that what they experience is the thing itself—though once they have grown up just a little, they will already be positing that reality is a little different from their experience, just by supposing that entities continue to exist even when they are out of sight.
But let’s suppose that we have come to believe that the world of experience is somehow just “in our minds” or “in our brains”, and that it is an imperfect image or representation of an external world. This distinction has been understood for centuries. It is presupposed by the further distinction between primary and secondary properties that has been methodologically important for the development of physics: we will develop theories of space, time, shape, and motion, but we won’t worry about color, taste, or smell, because those qualities are in the perceiver only, not in the external world.
So here I sit, I see an apple, and it looks red. The physicist tells me that the apple in the external world is not red in that way. It is a colorless object made of colorless particles, but they have the property of reflecting light at a certain wavelength, and when that arrives in my eye it stimulates my brain to construct the experience of redness with which I am familiar. All right; it may be disorienting to the former naive realist to suppose that the external world doesn’t contain color, that it’s just an arrangement of atoms possessing the property of location but no property of coloredness. But the scientific realist just has to get used to the idea that everything they are seeing is in their head, including the colors.
But wait! Now it’s the era of neuroscience and molecular biology and cognitive science. The inside of your head is now also supposed to be made of colorless atoms. So it now seems like there’s no place left in the universe where you can find an object that is actually colored. Outside your head and inside your head, there is nothing but colorless particles arranged in space. And yet there are the colors, right in front of you. The apple looks as objectively red as it ever did.
Historically, property dualism and strong emergence have been common responses to this situation, among people who thought clearly enough to see the difficulty. For example, see Bertrand Russell writing about two types of space, physical space and subjective space. Physical space is where the atoms are located; subjective space is where the colors and the experienced objects are located.
So why don’t functionalists and other contemporary materialists openly avow property dualism? I think a lot of them just habitually associate experiences and mental activity with “brain states” and “computation”, and don’t actually notice that they are lining up two different things. The attitudes of instinctive programmers towards computers probably also contribute somehow. People get used to attributing semantic states and numerous other properties to what goes on in a computer, and forget, or never even learn, that those attributed properties are not intrinsic properties of the physical computer, no more than the shapes of letters on a page are intrinsically connected to the sounds and the meanings that they represent. The meanings that are associated with those shapes are a product of culture and of the mental intentionality of the person actively interpreting those shapes as symbols. This also applies to just about everything that goes on in a computer. A computer is a universal state machine capable of temporarily instantiating specific state machines which can causally model just about anything. But the computer doesn’t literally contain what it is causally modeling, just as emails don’t literally contain the meanings that people extract from them.
Another confusion that occurs is treating basic sensory properties like categories. There is no reason to believe in a fundamental property of “appleness”. If I identify an object I experience as an apple, it is because it possesses a conjunction of other properties, like shape, color, perhaps taste, perhaps physical context, which lead me to deduce that this thing in front of me is one of those edible objects, grown on a plant, that I have encountered before. But consider the properties on the basis of which that identification is made. Sometimes it is argued that, for example, “red” or “redness” is also just a category, and so if you can show that the brain is a computer which computationally classifies optical stimuli according to wavelength, you have accounted for the existence of colors. It may also be added that different cultures have different color words, whose scope is not the same, so there is no reason to believe in colors above and beyond cognitive and cultural constructs, and wavelengths of light.
But what color categories classify are specific instances of specific shades of color. We can group and regroup the spectrum of shades differently, but in the end the instances of color have an existence independent of, and prior to, the words and categories we use to designate them. And that is the level at which the existence of color refutes any claim to the ontological completeness of a physics of colorless particles. You can organize the motions of particles so that they form state machines undergoing conditional changes of state that can be termed “classification of stimuli”. But you do not thereby magically bring into being the existence of color itself.
Ironically, in a sense, such magic is precisely what a functionalist theory of consciousness (and of the existence of conscious persons) claims: that just the existence of the appropriate state machine is enough to guarantee the existence of the associated experience or the associated person. Since the ontological ingredients of these experiences can be lacking in the computational substrate, the implication is that they come into being when the state machine does, in a type of lawful property dualism where the fundamental laws of psychophysical parallelism refer to computational properties on the physical side.
Now of course, people who believe in mind uploading would viscerally reject the idea that they are saying that nonmaterial qualia or even nonmaterial souls would materialize when their emulation started running on the computers of the post-singularity future. That’s supposed to be a dumb idea reserved perhaps for Hollywood, and for writers and an audience whose minds are still half-choked with spiritual delusions about the nature of personhood, and for whom computers and technology are just props for a new type of magic. CGI can show a misty soul congealing around the microprocessors, ghosts of the departed can show up in virtual reality, Neo can have his “matrix vision” even when he’s unplugged and in the real world…
My thesis is that people who believe in standard materialist theories of mind, and who would pride themselves on knowing enough to reject that sort of hokum, are doing exactly the same thing on a higher level. These aren’t childish delusions because they are based on a lot of genuine knowledge. It is actually the case that you can put a chip in someone’s brain and it will restore certain simple neurological functions. It does appear that large tracts of the nervous system truly can be understood as a type of physical computer. But that’s because we are describing unconscious activities, activities that take place “out of sight”—more precisely, out of awareness—so problems like “where is the color” don’t even arise. “Consciousness” or “experience” is the problem, because it is the repository for all the types of Being that we experience, but which are not present in the ontology of the natural sciences.
It should be obvious that, e.g., if you had a universe consisting of an arrangement of particles in space whose only properties are their relative positions, that nothing in that universe has a color.
I assume that by “color”, you mean the subjective experience of colour, not the fact that an object reflects or emits certain kinds of light. Because “reflecting and emitting certain kinds of light” can be explained in terms of “arrangement of particles”, in our universe.
And so, if you want to maintain that conscious mental states exist in such a universe, and that they include the experience of color, you are going to have to introduce color as an additional property somehow.
I bet you don’t actually think like that. If it is obvious to you that an “arrangement of particles” universe cannot have the subjective experience of colour in it, that’s because, in the first place, it is obvious to you that it can’t have subjective experience, period.
I do not have the energy to properly respond to your comment. It is simply too long. Instead, at least for now, I will just respond to this:
Before we go any further, I would like to know if this “counterargument by apple” is something you thought up by yourself, or if you got it from somewhere.
I came up with it myself. It’s a good question, because that is not true of most of the arguments I wield.
There were and are materialisms which explicitly talk about multiple levels of reality. Someone who believes that the brain is made of atoms but that consciousness is “strongly emergent” is still a materialist—at least compared to someone else who believes in a separate soul-substance
The problem with “strong emergence” is that it can be used to “explain” anything and is thus worthless.
If evolutionary biology could explain a toaster oven, not just a tree, it would be worthless. There’s a lot more to evolutionary theory than pointing at Nature and saying, “Now purpose is allowed,” or “Evolution did it!” The strength of a theory is not what it allows, but what it prohibits; if you can invent an equally persuasive explanation for any outcome, you have zero knowledge.
I like quantum mind, but despite the unity of superpositions matching the apparent unity of subjective experience, does it really give us much? I think the answer is no, at least until we have a better understanding of the physics of (quantum) computation, a better theory of computation in light of that, and a highly advanced computationalism/monadology in light of that. And even then Leibniz’ solution to the mind-body problem was literally Goddidit. (Which is an intriguing and coherent theory that explains all the evidence, but you’d think there’d be something better. Also Leibniz’ God causally influences monads, which aren’t supposed to be influence-able, so his metaphysic seems sort of broken, even if you can fix that bug with a neat trick or two maybe.) Quantum mind might help us do uploads, but it still wouldn’t have the answer to the mind-body problems, we still wouldn’t know if the uploads were conscious. Or is apparently matching a phenomenological property with a physical property (unity of experience/superposition) somehow a big philosophical step in the right direction?
You know, I do have this nagging doubt: why am I me, and not someone else? I do see a problem with subjective experience. On the one hand, it doesn’t make intuitive sense in a universe that runs on math, but on the other, what could there be beyond the causal stuff? I sense something fishy.
I too view reductionistic materialism as mainly an empirical claim. What I do view as necessary is the mere existence of something. I think, therefore “something” is. Maybe that “something” is limited to my personal experience, but whatever it is, it works somehow, and what I think won’t change it (unless magical thinking works, but then that is how the world runs).
I am not confident that mind uploading would work, but I have empirical reasons to believe it may. First, we have cut&paste transportation. I’m confident it works because current physics says so. The universe doesn’t care whether I landed on Vulcan by shuttle or by energy beam; it’s the same configuration. The current laws of physics could be mistaken (they’re not even complete, so they are mistaken somewhere), but this “no identity” stuff looks like something that won’t go away.
Second, I imagined this thought experiment: suspend you, restart you in a green room, suspend you again, then restart you in the laboratory. Result: you have the memory of having been in a green room. The other possibility is: suspend you, scan your brain, run the emulation in a simulated green room identical to the real one, pause the em, rewire your brain to match the em’s end state, restart you in the laboratory. Result: you have the memory of having been in a green room. It’s the same configuration in both cases, so neither memory is less real than the other. Conclusion: you have been in a green room. It doesn’t matter whether it was physically or in uploaded form.
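To make the comparison concrete, here is a toy sketch (the functions and states are made up; it only illustrates that both procedures terminate in the same configuration):

```python
def live_in_green_room(brain_state):
    """Physically wake the person in a green room, then suspend again."""
    return {**brain_state, "memory": "I was in a green room"}

def emulate_green_room(brain_state):
    """Scan, run the em in a simulated green room, then copy the em's end state back."""
    em_state = dict(brain_state)             # scan the brain
    em_state = live_in_green_room(em_state)  # the simulated room is identical to the real one
    return em_state                          # rewire the brain to match the em's end state

original = {"name": "me", "memory": None}

path_physical = live_in_green_room(original)
path_emulated = emulate_green_room(original)

# Same configuration either way; nothing in the final state records which path was taken.
assert path_physical == path_emulated
```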
Note that I become much less confident when I think about giving up my physical brain (edit: I mean, my original protoplasm wetware) for good.
If uploading doesn’t work, it can still be valuable: if I have goals beyond my own existence, a ghost may be better at achieving them than nothing at all. It also prevents total oblivion.
Maybe my basic point is that there is more to the “stuff” than just “being causal”. This is why I talk about abstracted causal models as ontologically deficient. Describing yourself or the world as a state machine just says that reality is a merry-go-round of “states” which follow each other according to a certain pattern. It says nothing about the nature of those states, except that they follow the pattern. This is why functionalist theories of mind lead to patternist theories of identity.
But it’s clear that what we can see of reality is made of more than just causality. Causal relations are very important constitutive relations, but then we can ask about the relata themselves, the things connected by causality, and we can also look for connecting relations that aren’t causal relations. Being shaped like a square isn’t a causal relation. It’s a fact that can play a causal role, but it is not itself made of causality.
These are ontological questions, and the fact that we can ask them and even come up with the tentative ontologies that we do, itself must have ontological implications, and then you can attempt an ontological analysis of these implication relations… If you could go down that path, using beyond-Einsteinian intellectual superpowers, you should figure out the true ontology, or as much of it as is accessible to our sort of minds. I consider Husserl to be the person who got the furthest here.
One then wants to correlate this ontology derived from a phenomenological-epistemological circle of reflection, with the world-models produced in physics and biology, but since the latter models just reduce to state-machine models, they cannot in themselves move you beyond ontological hollowness. Eventually you must use an ontology derived from the analysis of conscious experience itself, to interpret the formal ontology employed by natural science. This doesn’t have to imply panpsychism; you may be able to say that some objects really are “things without an inside”, and other objects do “have a subjectivity”, and be able to specify exactly what it is that makes a difference.
This is a little removed from the indexical problem of
why am I me, and not someone else?
That’s a question which probably has no answer, beyond enumerating the causes of what you are. The deep reasons are reserved for why there is something rather than nothing, and why it is the sort of universe it is. But in a universe with many minds, you were always going to be one among many.
If you were to find that the nature of your personal existence looked rather improbable, that would revive the question a little. For example, if we thought electrons were conscious, then being a conscious being at the Avogadro’s-number-of-electrons level of organization, rather than at the single-electron level of organization, might look suspiciously improbable, given the much larger numbers of electrons in the universe. But then the question would be “why am I human, and not just an electron?” which isn’t quite what you asked.
I think, therefore “something” is
I agree with this part.
The universe doesn’t care whether I landed on Vulcan by shuttle or by energy beam; it’s the same configuration.
I think what you’re saying is that in the present, there’s no difference between your current configuration having resulted from a life lived for 20+ years, and your current configuration having materialized five seconds ago. Well, if by hypothesis the configuration is exactly the same in the two scenarios under consideration, then the configuration is exactly the same. That much is true tautologically or by assumption.
Now if we ask whether it’s “still you” in both cases—one where you live out your life with physical continuity, and one in which you are briefly eradicated and then replaced by a physical duplicate—you do have some freedom of self-definition, so the answer may depend a little on the definition. (For now I will not consider the Yudkowskian possibility that there is a unique correct definition of personal identity to be found by superintelligent extrapolation of human cognitive dispositions, analogous to the CEV theory of how to arrive at a human-correct morality.)
But there are obvious and not-so-obvious problems with just saying “the configuration’s the same, therefore there’s no difference”. An obvious problem: suppose we make more than one copy of you—are they both “you”? Less obvious: what if the history of how the configuration was created does matter, in deciding whether you are the same person as before?
Does “having the memory of being in a green room” really imply “you have been in a green room”? We don’t normally trust memory that absolutely, and here we are talking about “memories” that were copied into the brain from a blueprint, rather than being caused in the usual fashion, by endogenous processing of sensory input. It is reasonable to imagine that you could be that person, whose brain was rewired in that way, and that after reflecting for long enough on the situation and on how the process worked, you concluded that it wasn’t you who was in that room, or even that nobody was in that room.
I’m not even convinced that the unlimited capacity to recreate a whole conscious mind “in midstream”, implied by so many thought-experiments, is necessarily possible. There are dynamical systems where you just can’t get to places deep in the state-space without crossing intermediate territory. If all that matters for identity is having the right ensemble of mesoscopic computational states (i.e. described at a level of coarseness, relative to the exact microphysical description, which would reduce a whole neuron to just a few bits), then it should be possible to create a person in mid-stream. But if the substrate of consciousness is a single quantum Hilbert space, for some coherent physical subsystem of the brain, then it’s much less obvious that you can do that. You might be able to bang together a classical simulation of what goes on in that Hilbert space, in mid-stream, but that’s the whole point of my version of quantum-mind thinking—that substrates matter, and just implementing a state machine doesn’t guarantee consciousness, let alone persistence of identity.
But it’s clear that what we can see of reality is made of more than just causality.
Not to me. For instance, while consciousness is still mysterious to me, it sure has causal power, if only the power to make me think of it—and the causal power to make Chalmers write papers about it.
I think what you’re saying is that in the present, there’s no difference between your current configuration having resulted from a life lived for 20+ years, and your current configuration having materialized five seconds ago. Well, if by hypothesis the configuration is exactly the same in the two scenarios under consideration, then the configuration is exactly the same. That much is true tautologically or by assumption.
I think I mean something stronger than that. You may want to re-read that part of the Quantum Physics sequence. The universe actually doesn’t even encode the notion of different particles, so that talking about putting this carbon atom there and that carbon atom here doesn’t even make sense. When you swap two atoms, you’re back to square one in a stronger sense than when you swap two numbered (but otherwise indistinguishable) billiard balls. Configuration space is folded on itself, so it really is the same configuration, not a different one that happens to be indistinguishable from the inside.
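Roughly what “folded on itself” cashes out to, in standard notation (my gloss, not a quote from the sequence):

```latex
% For two identical particles, relabelling them gives back the same physical state:
\Psi(x_1, x_2) = \pm\,\Psi(x_2, x_1)
% (+ for bosons, - for fermions), so |\Psi|^2 is unchanged, and "this atom here,
% that atom there" together with the swapped labelling are one point of
% configuration space, not two distinct ones.
```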
substrates matter, and just implementing a state machine doesn’t guarantee consciousness, let alone persistence of identity.
Err… Let my brain be replaced by a silicon chip, and let’s leave aside the question of personal identity. Is that thing conscious? It will behave the same as me, and write about consciousness the same way I do. If you believe that, and believe it still isn’t conscious, I guess you believe in PZombies. I don’t. Maybe changing my substrate would kill me, but I strongly believe the result is still conscious, and human in the dimensions I care about.
For instance, while consciousness is still mysterious to me, it sure has causal power
I agree that consciousness has causal power. I’m saying consciousness is not just causal power. It’s “something” that has causal power. The ontological deficiencies of materialist and computational theories of consciousness all lie in what they say about the nature of this “something”. They say it’s a collection of atoms and/or a computational state machine. The “collection of atoms” theory explains neither the brute features of consciousness like color, nor the subtle features like its “unity”. The state machine theory has the same problems and also requires that you reify a particular abstracted description of the physical reality. In both cases, if one were to insist that that really is the ontological basis of everything, property dualism would be necessary, just to accommodate phenomenological (experiential) reality. But since we now have a physics based on Hilbert spaces and exotic algebras, rather than on particles arranged in space, I would hope to find a physical ontology that can explain consciousness without property dualism, and in which the physical description of the brain contained “entities” which really could be identified with the “entities” constituting conscious experience, and not just correlated with them.
The universe actually doesn’t even encode the notion of different particles, so that talking about putting this carbon atom there and that carbon atom here doesn’t even make sense.
The basis for that statement is that when you calculate the transition probability from “particle at x0, particle at y0” to “particle at x1, particle at y1”, you sum over histories where x0 goes to x1 and y0 goes to y1, as well as over histories where x0 goes to y1 and y0 goes to x1. But note that in any individual history, there is persistence of identity.
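Written out explicitly, in standard textbook form (added here only for concreteness):

```latex
% Two identical particles going from (x_0, y_0) to (x_1, y_1):
A_{\mathrm{total}} = A(x_0 \to x_1,\, y_0 \to y_1) \;\pm\; A(x_0 \to y_1,\, y_0 \to x_1)
% (+ for bosons, - for fermions). Each summand is itself a sum over continuous
% trajectories, which is the sense in which identity persists within any
% individual history.
```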
I suppose the real logic here is something like “I am a particular configuration, and contributions to my amplitude came from histories in which my constituent particles had different origins.” So you ground your identity in the present moment, and deny that you even had a unique previous state.
Pardon me for being skeptical about that claim—that my present moment is either to be regarded as existing timelessly and not actually as one stage in a connected flow of time, or alternatively that it is to be regarded as a confluence of multiple intersecting histories that immediately then diverges into multiple futures rather than a unique one.
The ontological implications of quantum mechanics are far from self-evident. If I truly felt driven to believe in the many worlds interpretation, I would definitely want to start with an ontology of many histories that are self-contained but which are interacting neighbors. In a reality like that, there’s no splitting and joining, there are just inter-world “forces”. For some reason, no-one has even really tried to develop such a model, despite the conservation of probability density flow which allows a formalism like Bohmian mechanics to work.
Returning to the question of identity for particles, another option, which is more in line with my own ideas, is to think of the ontological state as a tensor product of antisymmetrized n-particle states where the size of n is variable both between the tensor factors and during the history of an individual factor. The ontology here is one in which the world isn’t really made of “particles” at all, it’s made of “entities” with a varying number of degrees of freedom, and a “particle” is just an entity with the minimum number of degrees of freedom. The fungibility of “particles” here would only apply to degrees of freedom within a single entity; the appearance of fungibility between different entities would have a dynamical origin. I have no idea whether you can do that in a plausible, uncontrived way; it’s yet another possibility that hasn’t been explored. And there are still more possibilities.
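For concreteness, one schematic way to write the kind of state I have in mind (nothing here is worked out; it is only notation for the idea):

```latex
% The world as a product of "entities": each tensor factor is an antisymmetrized
% block of n_k single-particle states, with n_k varying between factors and over time.
|\Psi\rangle \;=\; \bigotimes_{k} \mathcal{A}\!\left( |\phi^{(k)}_{1}\rangle \otimes \cdots \otimes |\phi^{(k)}_{n_k}\rangle \right)
% \mathcal{A} denotes antisymmetrization. A lone "particle" is the degenerate case
% n_k = 1; fungibility of degrees of freedom holds only within a single factor.
```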
If you believe that, and believe it still isn’t conscious, I guess you believe in PZombies.
Yes, definitely. Especially if we’re going to talk about imperfect simulations, as has been discussed on one or two recent threads. A spambot, or a smiley face on a stick, is a type of “simulated human being”. We definitely agree there’s no-one home in either of those situations, right? The intuition that an upload would be conscious arises from the belief that a human brain is conscious, that a human brain consists of numerous discrete processors in decentralized communication with each other, and that being conscious must therefore somehow arise from being a particular sort of computational network. But although we don’t know the precise condition, the universality of computation implies that some sufficiently accurate simulation would be capable of reproducing that network of computation in a new medium, in a way that meets the unknown criterion of consciousness, and therefore conscious uploads must be possible.
I have argued in a recent comment that functionalism, and also ordinary atomistic materialism, implies property dualism. The constituent properties of consciousness, especially the basic sensory properties, do not exist in standard physical ontology, which historically was constructed explicitly to exclude those sensory properties. So if you want to extend physical ontology to account for consciousness as well, you have to add some new ingredients. Personally I hope for a new physical ontology which doesn’t have to be dualistic, and I even just mentioned a possible mathematical ingredient, namely a division of the world into “multi-particle” tensor factors rather than into single particles. If a single whole conscious experience could be identified with a single tensor factor, that would at least begin to explain the unity of consciousness; you would have elementary degrees of freedom canonically and objectively clustered together into complex unities, whereas in the current ontology, you just have mobs of particles whose edges are a bit fuzzy and arbitrary, something which provides a poor ontological foundation for a theory of objectively existing persons.
Returning to the issue of zombies, suppose for the purposes of argument that people really are sharply defined tensor factors of the wavefunction of the universe, and that conscious states, in our current formalism, would correspond to some of these antisymmetrized n-fermion wavefunctions that I’ve mentioned. The point is that, in this scenario, consciousness is always a property of a single tensor factor, but that you could simulate one of those very-high-dimensional tensor factors by using a large number of low-dimensional tensor factors. This implies that you could simulate consciousness without the simulation being conscious.
I don’t at all insist that this is how things work. The business with the tensor factors would be one of my better ideas, but it’s just a beginning—it’s a long conceptual trek from an n-fermion wavefunction to an intricate state of consciousness such as we experience—and the way things actually work may be very, very different. What I do insist is that none of the orthodox materialist theories of mind work. An explicit property dualism, such as David Chalmers has proposed, at least has room in its ontology for consciousness, but it seems contrived to me. So I think the answer is something that we haven’t thought of yet, involving quantum biology, new physical ontology, and revived respect for the ontology of mind.
Your writing is difficult for me to read. I’m tired right now, so I plan to answer properly later, in a few days. Hopefully my brain will do a better job of processing it then.
I assume by “physical brain” here you mean one made of protoplasm. What does contemplating the possibility that you aren’t running on such a brain now do to your confidence?
If I knew that I am currently running on a silicon chip (Gunm-style), then I would be highly confident that replacing that chip by another, identical one, preserves my identity, because it’s the same configuration. Moreover, replacing my old chip by a newer one, before the physical deterioration significantly affects the actual software processing, probably would work as well.
But if we’re talking about running my software on a different chip through, say, a virtual machine that emulates my original chip, then I would be less confident that it would still be me—about as confident as I am that an em of my current wetware would still be me. Which is, currently, not confident enough to make the leap.
Ah, and if I did learn that I run on a chip, I wouldn’t go crazy. I might be worried if I knew my wetware self were still running around, and I might not tell my mother, but besides that I don’t really care. If I knew that my wetware self was “dead”, then I would wonder whether I should feel sorry for him, or whether I’m actually him. Because I value my life, I know that my wetware self did too. But I’d probably get over it with the knowledge that the rest of the world (including my family) didn’t lose anything (or at least they wouldn’t suspect a thing).
Presumably the reason you have such confidence about the interchangeability of identical chips is because your experience encompasses lots of examples of such chips behaving interchangeably to support a given application. More generally, you’ve learned the lesson through experience that while two instances of the same product coming off similar assembly lines may not be 100% identical, they are reliably close enough along the dimensions we care about to be interchangeable.
And, lacking such experience about hardware/wetware interchangeability, you are properly less certain about the corresponding conclusion.
Presumably, if that sort of experience became commonplace, your confidence would increase.
As I often say: you are not your meat. You are the unique pattern of information-flow that occurs within your meat. The meat is not necessary to the information, but the information does require a substrate.
Consider the following set of statements:
1) “I am my meat.”
2) “I am the unique pattern of information-flow that occurs within my meat.”
3) “I am the class of patterns of information-flow that can occur within meat, of which this unique pattern is one example.”
4) “I am the class of patterns of information-flow that can occur within any substrate, of which this unique pattern is one example.”
5) “I am all the matter and energy in the universe.”
What sorts of experiences would constitute evidence for one of them over the others?
The class of patterns of information-flow that can occur within meat includes the pattern of information-flow that occurs within your meat. 3 therefore asserts that I am you, in addition to being me. 2 does not assert this. They seem like different claims to me, insofar as any of these claims are different from the others.
I’m not really sure what non-local phenomena are, or what they have to do with psychic powers, or what they have to do with the proper referent for “I”.
Good point. This is precisely the source of my doubt, and the reason why I’m not sure that changing substrate preserves identity.
The thing is, quantum mechanics makes me confident that if I go from configuration X to configuration Y, through a path that preserves identity, then any path from X to Y preserves my identity. But I am less confident about intermediate states (like the temporary emulation in the simulated green room).
I’m not sure that’s a meaningful question. I undoubtedly change from year to year, so… But there is some kind of continuity, which I’m afraid could be broken by a change of substrate. (But then again, we could change my substrate bit by bit…)
If it weren’t, I would not care, because it wouldn’t break anything I value. If preservation of identity doesn’t even happen currently in our mundane world, I would be stupid to value it. And I’ll happily upload, then (modulo the mundane risk of being badly emulated of course).
But first, I must be convinced that either identity wasn’t preserved in the first place, or that uploading preserves identity, or that I was just confused because the world actually works like… who knows.
A change of substrate occurs daily for you; it just stays within a similar class. What beyond simple “yuck factor” gives you cause to believe that a transition from cells to silicon would impact your identity? That it would look different?
No, it doesn’t. You could argue that there’s a renewal of atoms (most notably in water), but swapping water molecules has no physical meaning, so… no. Heck, even cut&paste transportation doesn’t change substrate.
The “yuck factor” I feel causes me to doubt this: if an em of me were created during my sleep, what probability would I assign to waking up as silicon, or as wetware? I’m really not sure I can say 1/2.
Actually it’s more complicated than that. It’s not just water molecules: over time your genetic pattern changes, the ratio of cancerous to non-cancerous cells and of senescent to non-senescent cells shifts, and the physical structures of the brain itself change.
Neurogenesis does occur in adults—so not even on a cellular level is your brain the same today as it was yesterday.
Furthermore—what makes you confident you are not already in a Matrix? I have no such belief myself: given that physics simulations work, it is too implausible to believe we are in the parent of all universes.
Yes, they do. And that’s the end of this dialogue.
(EDIT: By end of this dialogue I meant that he and I were at an impasse and unable to adjust our underlying assumptions to a coherent agreement in this discussion. They are too fundamentally divergent for “Aumanning.”)
For someone who truly believes in an atomistic physical universe, the natural belief is dualism.
That’s the kind of worldview that was shown to be invalid in all sorts of areas over the last century.
On the quantum level, dualism is dead: an electron doesn’t have to be either in place A or in place B.
Modern models of the human brain also describe system properties that are non-dualistic in nature.
Dualism is not a good paradigm for modelling complex systems. Just because an atom is usually either in place A or in place B doesn’t mean that the same dualism is true or useful for modelling other parts of our world.
There’s nothing inherently truth-seeking about using atomistic physics as the central reference.
We are talking about mind-matter dualism: substance dualism, where matter is one type of thing and mind is another type of thing, and also property dualism, where everything is made of matter, but mental states involve material objects with extra properties outside of those usually discussed in physics. You appear to be talking about some other kind of “dualism”.
I think “extra properties outside of physics” conveys a stronger notion than what this view actually claims. Property dualism, such as emergent materialism or epiphenomenalism, doesn’t really posit any extra properties beyond the standard physical ones; it is just that when those physical properties are arranged and interact in a certain way, they manifest what we experience as subjective experience and qualia, and those phenomena aren’t further reducible in an explanatory sense, even though they are reducible in the standard sense of being arrangements of atoms.
So why is that an incomplete understanding? I always thought of qualia as belonging to the same class of questions as, to quote Parfit, “Why anything, why this?” We may never know why there is something rather than nothing in the deep sense (not just in the sense of Lawrence Krauss saying “because of the relativistic quantum field”, but in the sense of “why the field in the first place”, even if it is the only logical way for a universe to exist given a final TOE), but that does not hinder our ability to figure out how the universe works from a scientific perspective. I feel it is the same when discussing subjective experience and qualia. The universe is here, it evolves, matter interacts and phenomena emerge, and when that process ends up at neural systems, those systems (maybe just a certain subset of them) experience what we call subjectivity. From this subjective vantage point, we can use science to look back at that evolved process, see how the physical material is architected, understand its dynamics, and create similar systems, but there may not be a deeper answer to why or what qualia are beyond their correlated emergence from physical instantiations and interactions. That is not anti-reductionist, and it is nowhere near the same class of thought as substance dualism.
People offer many noble rationales for public education, but the data suggest they were adopted to create patriotic citizens for war.
The basic argument structure is that public education either exists to ‘create patriotic citizens for war’ or it exists for ‘noble purposes’.
That’s dualism. People who believe in strong reductionism tend to make arguments that are structured that way.
What do I mean by strong reductionism? Weak reductionism is the belief that the world is determined by the way it works at the lowest level. Strong reductionism is the belief that you can basically ignore the halting problem and understand how a system works by understanding how it works at the lowest level.
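To spell out the halting-problem reference: here is a minimal sketch of the classic diagonalization, with hypothetical function names chosen purely for illustration. Complete access to a program’s lowest-level description still doesn’t yield a general procedure for predicting what it will do.

    # Hypothetical sketch: suppose halts(source, data) could inspect a program's
    # complete low-level description and always answer whether it halts on `data`.
    def halts(source, data):
        raise NotImplementedError("no such total decision procedure can exist")

    # The diagonal program does the opposite of whatever halts() predicts
    # about a program fed its own source code.
    TROUBLE = '''
    def trouble(source):
        if halts(source, source):   # predicted to halt on itself?
            while True:             # ...then loop forever instead
                pass
        return "done"               # ...otherwise halt immediately
    '''

    # Asking halts(TROUBLE, TROUBLE) is contradictory either way it answers,
    # so no such total halts() can exist, however much low-level detail it sees.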
But she doesn’t seem to make the bridge between the laws of physics and a full human brain.
loup-vaillant wants to use dualistic thinking for the way the full human brain works. When I sat in a lecture at the Free University of Berlin about how the human brain works, the professor told me:
“You can’t understand how the human brain works if all you are doing is studying neurons, you actually need to study the full system in action.”
Even if the system is determined by the way its neurons work, you can’t understand it at that level.
The things you can then say about the human brain tend not to be true or false so much as useful or not useful for a specific purpose.
loup-vaillant, however, wants to convince his mother that dualism works at that level: that it makes sense to distinguish between true and false statements.
Some opinions about facts aren’t open to criticism. They are deemed personal, and as such no worse than opposite opinions. Attacking them is often considered rude, if not outright intolerant. This essay is about why this should stop.
Imagine that a powerful majority of a people share the same opinion. What kind of society would you prefer? One where it is considered OK to believe differently, because personal thoughts are exceptions from public rules? Or one where the opinion of the majority is deemed so important that it is considered OK to attack people who disagree, and there is no good excuse for disagreement?
I have simply replaced “truth” with “opinion of a powerful majority”. Why is this legitimate? Simply because if someone has an opinion, they consider it truth. And the more they agree with each other, the more sure they are. And if they are powerful enough, who dares to openly disagree? Especially if there is a rule that it is OK to attack people who disagree.
Therefore we have a rule that it is OK to have your own opinions about private matters. We have often seen that people who try to break this rule do it to increase their power, even if their professed goals are noble.
But this situation is different, because unlike those people, you are actually right. Therefore those social rules obviously don’t apply to you. Is there a good reason to follow them anyway?
Maybe I didn’t convey the meaning I wanted to. The reason I wrote this article is that I was called intolerant for merely pointing out that, given that I strongly believe X, I also strongly believe those who believe non-X to be mistaken. Merely noticing the link is enough to be called intolerant. This is nuts. Human, I know, but nuts nevertheless. Consistency is not intolerance.
I perfectly understand that I can be mistaken about X (infinite certainty, biases, and all that). I just can’t stand it when people disagree and see no problem whatsoever. Then when I point out that there is a problem, I am called intolerant. I suppose people believe I want to force them to my side. Factual opinions are not utility functions, but people keep forgetting that. As if changing your mind meant you lost. Actually, you usually win when you do that.
I do understand that we, as imperfect humans, can agree to disagree, but not on principle. I’m okay with admitting that, at present, trying to resolve the disagreement doesn’t seem worth the trouble, but we should at least acknowledge that there is a problem.
The bottom line is, when there is disagreement, and one cares about truth, then there is a problem. This problem may, or may not, be worth solving, but pretending everyone can have contradictory opinions that should never be attacked is just weak.
If she’s arguing from a position of separate magisteria which have to be reasoned about differently, I would probably try this tactic. Point out that we do not automatically gravitate toward reasoning correctly about mundane things; you can use examples from Greek philosophers and alchemists and so on. Correct processes of mundane reasoning are something we’ve had to develop over time by refining our methods in situations where we could tell whether our conclusions were wrong.
That being the case, how does she know that her different procedure for reasoning about non-mundane things is one that works? If it were simply wrong, how would she be able to tell? If her procedure for reasoning about non-mundane things can be used to draw contradictory conclusions (it almost certainly can), point out that on the one hand you have a set of confusing apparent contradictions that must somehow all be true, and on the other hand the possibility that the reasoning procedure simply doesn’t work.
If her procedure for reasoning about non-mundane things can be used to draw contradictory conclusions
From what I read, the procedure for reasoning about non-mundane things is used to avoid drawing any conclusions whatsoever, much less contradictory ones. It’s intellectual cowardice masquerading as deep wisdom. (Sorry for dissing your mom, loup-vaillant.)
I largely agree with Cyan, but with a little more empathy for your mom’s viewpoint. For example, you write:
There is something. All that there is, we generally call “reality”. Note that by this definition, reality is unique.
So you throw out a description and a quantifier, and slap a label on the result. Doesn’t that sound a little similar to naive set theory? Maybe it’s not as straightforward as it looks.
I’m not actually resistant to defining “reality” your way; I think it’s not actually a step toward sets that don’t contain themselves. But it takes some sophistication to see that, and your mom might lack the formal skills to discriminate innocent-looking “logic” that leads to paradox from innocent-looking logic that doesn’t. Note that she needn’t have studied set theory to have run into similar exercises in labeling and deductive argument that subtly lead to insane results.
If that’s the case, she should see a god which really does hate homosexuality and eating pork, considers working on the sabbath worthy of death, or wants the whole world to live under Sharia law, as equiprobable with one that loves everyone. She most likely behaves as if she had some means of discriminating between supernatural hypotheses, even if she disavows being able to.
Clearly we need a new Balzac (is it Houellebecq?) to write about this 21st-century generation gap, in which the children of post-Christian agnostics grow up to be ideologically aggressive posthuman rationalists.
I’m not sure. Naively I would expect most children of post-Christian agnostics to grow up to have some kind of mystical New Age beliefs.
Because they’ve been given space to develop a spiritual worldview and no particular reason not to, but not a framework for it, so they end up adopting a semi-random gaggle of relatively nonthreatening and nontotalizing supernaturalist beliefs? That’s plausible, but it won’t give you anything self-consistent. Maybe aggressive posthuman rationalism is what you get when you try to culture New Age beliefs in someone sensitive to ideological contradictions.
Maybe aggressive posthuman rationalism is what you get when you try to culture New Age beliefs in someone sensitive to ideological contradictions.
I think you would be just as likely to find them turning to some “strong” religion or even mainstream skepticism (of the kind that treats cryonics and the singularity as supernatural claims).
Yeah, that happens—a fair number of the born-again narratives I’ve come across read like that. But the reason I was thinking of this group in particular is that, for a lot of people on the post-Christian agnostic spectrum, organized religions really are the bad guys: nondenominational Christianity is usually given a pass, but actual churches get blamed for all sorts of stuff. That’s a nontrivial obstacle for someone raised in that milieu.
Dharmic religions don’t seem to count as “organized” in this context, for reasons which are kind of opaque to me but probably have to do with exoticism. So I expect a lot of Western Buddhists and Hindus come out of this sort of space too—n=1, but that’s more or less how my college roommate found Hinduism.
Dharmic religions don’t seem to count as “organized” in this context, for reasons which are kind of opaque to me but probably have to do with exoticism.
Unfortunately, radical Islam also frequently gets a similar pass on grounds of exoticism, not to mention being a “victim of the crusades and the war on terror”.
So I ended up writing this (French only for now). She hasn’t read it so far, but I told her about the first paragraphs (which I roughly translated in my post here). Then she told me there’s something wrong with this. (But again, she won’t outright contradict me, and say there isn’t a world out there.)
By the way, she thinks I believe all that stuff for some “deep reason”, which I take to mean “something unrelated to the actual accuracy of such beliefs”. She thinks I have some deep fear inside that makes me cling to that. (No kidding: I see small hopes of making a paradise out of our world, and I would give up on them? OK, if it’s impossible after all, let us enjoy our short lives. But if there is a possibility, then missing it is unforgivable.) Strangest of all, she sees a contradiction between my humanistic, left-wing, environmentalist ideas, and my consequentialist, positivist, transhumanist ideas.
Now, to her credit, I must note that she has probably changed her mind about longevity: leaving aside problems like world population, she wouldn’t be against doubling our life expectancy, or more. Living forever still seems too much to her, but a couple of centuries seems like a good idea. (Death has become less abstract to her lately: her aunt, whom she loves dearly, is starting to have health problems that may prove serious in the coming years, if not months.)
One way to test your mother’s attitude to science, explanation, and so on, would be to see what she thinks of theories of the mind which sound like nonreductionistic quantum mysticism to you. What would she think of the theory that qualia are in the quantum-gravity transitions of microtubules, and the soul is a Bose-Einstein condensate in the brain? I predict that she would find that sort of theory much more agreeable and plausible. I think she’s not hostile to reality or to understanding; she’s hostile to reductionism that falsifies subjective reality.
People here and elsewhere believe in ordinary reductionist materialism because they think they have to—because they think it is a necessary implication of the scientifically examined world—not because that outlook actually makes sense. For someone who truly believes in an atomistic physical universe, the natural belief is dualism: matter is made of atoms, mind is some other sort of thing. It’s only the belief in the causal closure and causal self-sufficiency of atomistic physics that leads people to come up with all the variations of mental materialism: eliminativism, epiphenomenalism, various “identity theories” such as functionalism. A lot of these so-called materialisms are actually dualisms, but they are property dualism rather than substance dualism: the mind is the brain, but it has properties like “being in a certain state of consciousness”, which are distinct from, yet somehow correlated with, properties like “being made of atoms arranged in a certain way”.
I regard this situation as temporary and contingent. It’s the consequence of the limitations of our current science and currently available concepts. I fully expect that new data from biology, new perspectives in physics, and a revival of rigorous studies of subjectivity like transcendental phenomenology will eventually give us a physically monistic account of what the self is, in which consciousness as it is subjectively experienced is regarded as the primary ontological reality of self-states, and the traditional physical description as just an abstracted account, a mathematical black box which does not concern itself with intrinsic properties, only with an abstracted causal model. But abstracted causal models are the whole of natural-scientific ontology at the present time, and materialists try to believe that they are the fundamental nature of reality, and that the aspect of reality which we experience more or less directly in subjectivity is some sort of alien overlay.
The folk opposition to reductionist materialism derives to a large degree from people in touch with the nature of subjective experience—even if they can’t express its nature with the rigor of a philosopher—and who perceive—again, more intuitively than rigorously—how much of reality is lacking in a strictly “mathematical” or “naturalistic” ontology. In rejecting reductionism, they are getting something right, compared to the brash advocates of materialist triumphalism, who think there’s no problem in saying “I’m just a program, and reality is just atoms”.
I know it must sound scandalous or bizarre to hear such sentiments on Less Wrong, but this really is the ultimate problem. The natural-scientific thinkers are trying to make models of the mind, but the intuitive skeptics are keeping them honest, and the situation will not be resolved by anything less than a new ontology, which will look in certain respects very “old” and retro, because it will reinstate into existence everything that was swept under the carpet of consciousness in order to construct the physical/computational paradigm of reality. It is very clear that people with a highly developed capacity for thinking abstractly are capable of blinding themselves to vast tracts of reality, in order to reify their abstractions and assert that these abstractions are the whole of reality. It is one particular form of belief projection to which “rationalists” are especially susceptible. And until the enormous task of perceiving and articulating the true ontology, and the way that it fits into science or that science fits into it, has been done, all that the enemies of premature reification can do is to make suggestive statements like this one, hoping that something will strike a chord and reawaken enough prescientific awareness in the listener for them to detach themselves a little from their constructs and “see” what the intuitives see.
Similarly, some part of the rejection of life extension through uploading comes from a rejection of the metaphysic implied. It looks like the uploader is denying reality. Life extension through rejuvenation is much more acceptable for this reason—though even there, the wisdom of the human race says that striving for literal immortality is unhealthy because it’s surely impossible, and it’s unhealthy to attempt impossibilities because it only sets you up for suffering when the inevitable comes. There are a bunch of other psychological issues here, about how much striving and how much uncertainty is rational, the value of life and the rationality of creating it, and so on, where I think transhumanism is often more in the right than tradition. But I will assert emphatically that the crude reductionisms we have available to us now are radically at odds with the facts of subjective experience, and so therefore they are wrong. It is better to revert to agnosticism about fundamental reality, if that is what it takes to retain awareness of subjectivity, rather than to reify mathematics and develop distorted ideas, so here I do side with your mother.
I upvoted your comment/house because I think it can be looted for valuables, but not because I think it’s sturdy enough to live in.
This is not true. The reductionist claim is that the arrangement of the atoms is entirely sufficient to produce consciousness, and not that there is consciousness and then the atoms. Until you shake this style of thought, you will never be able to see single-level-of-reality reductionism as anything more than a mutated form of dualism, which is not what it is.
No! Of course, if a more accurate map of reality is developed, the reductionists will say “this is the closest we have to knowing the true base level of reality.” Only strawman-level reductionists will say “this is the most accurate map we have? Okay, that’s base reality.” It could be that the laws of physics do fit in 500 bits, or it could be that they’re just like onion layers and for whatever reason there is no bottom layer, or no one ever finds it. But reductionism is not the claim that whatever we have so far figured out about how our subjectivity is delusional is itself The True Reality. Still, it is far better than just plucking from naive intuitions. We know where those came from, after all, and it wasn’t from a deep experimental study of reality.
Also not true! Why, if there were a direct one-to-one correspondence between subjective experience and reality, there would never be any surprising facts, and there would be no need to distinguish the map from the territory. In fact, I confess I have no idea what such a world would look like. What would it be like to be the universe? It is a wrong question, certainly. The subjective delusions arise from experiencing reality imperfectly, or else, once again, we would already have known about atoms and gluons and whatever-mathematics-are-really-down-there.
And I will assert twice as emphatically that the reductionism we have available to us now, while incomplete (and known to be so), is not at odds with subjective experience (they add up to normality, after all) and does more to explain the facts of subjective experience than any dualism, substance or otherwise.
You have said in the past that the computational theory of mind implies dualism. When I first saw this, I was outraged and indignant and did not wish to read any further. Later I discovered that you make much more sense than this initial impression led me to think, so I read more of your work, and yet I never found an argument that supported this claim. Do show me, if you’ve got one.
I will show, however, that even if the computational theory of mind is wrong (as implying dualism would necessarily force it to be), this does not matter for transhumanist realism. For even if you could not copy the brain onto a computer, the brain obviously exists, so there is some way of creating brains. It can and will be understood well enough that new brains can be made, even if its substrate isn’t “computations”. (I admit, though, that I have no idea what else it might be doing that isn’t computable.)
Also, curse you for getting me to write in your style of incredibly long comments!
Edit: This comment was upvoted three seconds after I posted it. I don’t know how or why.
There were and are materialisms which explicitly talk about multiple levels of reality. Someone who believes that the brain is made of atoms but that consciousness is “strongly emergent” is still a materialist—at least compared to someone else who believes in a separate soul-substance.
But yes, mostly I am saying that a lot of materialism involves stealth dualism—the materialists are property dualists and don’t realize it.
One place you can see this, is when people talk about consciousness as “how it feels to be an X”, where X is something material (or computational). For example, X may be a certain arrangement of atoms in space. And how it feels to be X is… some detailed specific conjunction of sensations, thoughts, intentions, and so on, that adds up to a single complex experience.
Obviously we could make a 3D plot of where all those atoms are, and zoom around it and into it, view it from different angles, and we’ll still see nothing but a constellation of atoms in space. You won’t “see the experience from the inside” no matter how many such views you try.
“Single level of reality” implies that there is nothing more to those atoms than what can be seen in such a view. Yet the experience is supposed to be there, somewhere. I conclude that a conventional materialist theory of consciousness involves positing that the brain has properties (the “feels” or “qualia” that make up a conscious experience) in addition to the properties already stipulated by physics.
You’ve missed my real point. Yes, a materialist is happy to say that their currently favored model is probably not the whole story. I’m saying that all the available models will suffer from the same deficit.
Consider the argument I just gave, about how the “feels” are nowhere to be seen in the atom plot, yet they are supposed to exist, yet only atoms are supposed to exist. This is a contradiction that will not be affected by adding new atoms or rearranging the old ones. All models of the world as atoms in interaction are “abstracted causal models”, the result of a centuries-long effort to understand the world without talking about so-called secondary properties, which have to be reintroduced once you want to explain consciousness itself. And it’s at that point that these subjective properties form an “overlay”—they have to be added to the physical base.
There’s supposed to be a 1-to-1 correspondence between subjective experience and the physical reality of the part of the brain responsible for being the experience—not a 1-to-1 correspondence between subjective experience and the physical world external to the brain.
I hope it’s now clear that I’m not accusing materialists of identifying their models with reality at that level. It’s the identification of experiences themselves with physical parts of the brain where the problem lies, given the physical ontology we have. Obviously, if physics already posited the existence of entities that could be straightforwardly identified with elementary qualia, the situation would be rather different.
Adding up to normality is a slogan and a (doomed) aspiration here. I believe 2+2=5, I know it sounds strange, but it’s OK because it adds up to normality! Except that normality is 4, not 5. Or in this case, “normality”, i.e. reality, is that experiences exist. Even if we were to take a virtual trip through an atom-plot of a brain, and we arrived somewhere and you pointed at a specific cluster of atoms and said, “There’s part of an experience! That cluster of atoms is one pixel of a visual sensation of red”, I’m still not going to see the redness (or even see the “seeing of redness”) no matter what angle I choose to view that cluster of atoms. If the redness is there, it is there in addition to all the properties that feature in the physics.
Maybe it implies trialism. We end up with three levels here: the level of atoms (i.e. the fundamental physical level), the computational state machine which describes cognition and consciousness, and the experiences which we are supposed to be explaining.
A computational theory of consciousness says that a given state of consciousness just is a particular state in a particular state machine. The argument for dualism here is similar to the argument for dualism I gave for the arrangement of atoms, except that now we’re not dealing with just one arrangement of atoms, we’re dealing with an enormous equivalence class of such arrangements—all those arrangements which instantiate the relevant state machine. Pick any instance, any individual member of that equivalence class, and the previous argument applies: you won’t “see the experience from the inside”, no matter how you examine the physical configuration. The existence of the experience somewhere “in” the configuration implies extra properties beyond the basic physical ones like position and momentum.
Systematically associating conscious states with computational states will allow you to have a systematic property dualism, but it will still be property dualism.
Okay, I understand your position much better. Here’s why it is wrong:
Take an apple: it is just an arrangement of atoms. You can look at that arrangement of atoms all you want, but you still won’t “see” the appleness unless you’re some sort of lifeform that has mechanisms that recognize apples easily, like humans.
Presumably consciousness is a lot more complicated than apples, and worse yet is how it isn’t a relatively durable object that humans can experience with all of their senses (indeed, none of the classical ones). So it intuitively feels like it’s different, but that doesn’t make it so.
I will see some aspects of the apple but not others. I will see its shape, because you can make shapes by arranging atoms in space, but I won’t see its color. Then there are attributes like the fact that it grew on a tree, which I will be able to “see” if the atom-plot extends that far in space and time.
Before we go any further, I would like to know if this “counterargument by apple” is something you thought up by yourself, or if you got it from somewhere. I have an interest in knowing how these defensive memes spread.
ETA: I will try to write a little more in the way of rebuttal. But first, I will allow myself one complaint, that I have made before: arguments like this should not even be necessary. It should be obvious that, e.g., if you had a universe consisting of an arrangement of particles in space whose only properties are their relative positions, nothing in that universe has a color. The property of being colored just does not exist there. And so, if you want to maintain that conscious mental states exist in such a universe, and that they include the experience of color, you are going to have to introduce color as an additional property somehow—a property that exists somewhere inside the assemblages of particles that are supposed to be the experiences.
So what of the attempt to rebut this with “appleness”, as a reductio ad absurdum? Well, we can start by distinguishing between the apple that exists in the external world, the experience of the apple, and the concept of an apple. Before atomism, before neuroscience, human beings are supposedly naive realists who think that what they experience is the thing itself—though if they have grown up just a little, they will already be positing that reality is a little different from their experience, just by supposing that entities continue to exist even when they are out of sight.
But let’s suppose that we have come to believe that the world of experience is somehow just “in our minds” or “in our brains”, and that it is an imperfect image or representation of an external world. This distinction has been understood for centuries. It is presupposed by the further distinction between primary and secondary properties that has been methodologically important for the development of physics: we will develop theories of space, time, shape, and motion, but we won’t worry about color, taste, or smell, because those qualities are in the perceiver only, not in the external world.
So here I sit, I see an apple, and it looks red. The physicist tells me that the apple in the external world is not red in that way. It is a colorless object made of colorless particles, but they have the property of reflecting light at a certain wavelength, and when that arrives in my eye it stimulates my brain to construct the experience of redness with which I am familiar. All right; it may be disorienting to the former naive realist to suppose that the external world doesn’t contain color, that it’s just an arrangement of atoms possessing the property of location but no property of coloredness. But the scientific realist just has to get used to the idea that everything they are seeing is in their head, including the colors.
But wait! Now it’s the era of neuroscience and molecular biology and cognitive science. The inside of your head is now also supposed to be made of colorless atoms. So it now seems like there’s no place left in the universe where you can find an object that is actually colored. Outside your head and inside your head, there is nothing but colorless particles arranged in space. And yet there are the colors, right in front of you. The apple looks as objectively red as it ever did.
Historically, property dualism and strong emergence have been a common response to this situation among people who thought clearly enough to see the difficulty. For example, see Bertrand Russell writing about two types of space, physical space and subjective space: physical space is where the atoms are located, subjective space is where the colors and the experienced objects are located.
So why don’t functionalists and other contemporary materialists openly avow property dualism? I think a lot of them just habitually associate experiences and mental activity with “brain states” and “computation”, and don’t actually notice that they are lining up two different things. The attitudes of instinctive programmers towards computers probably also contribute somehow. People get used to attributing semantic states and numerous other properties to what goes on in a computer, and forget, or never even learn, that those attributed properties are not intrinsic properties of the physical computer, no more than the shapes of letters on a page are intrinsically connected to the sounds and the meanings that they represent. The meanings that are associated with those shapes are a product of culture and of the mental intentionality of the person actively interpreting those shapes as symbols. This also applies to just about everything that goes on in a computer. A computer is a universal state machine capable of temporarily instantiating specific state machines which can causally model just about anything. But the computer doesn’t literally contain what it is causally modeling, just as emails don’t literally contain the meanings that people extract from them.
Another confusion that occurs is treating basic sensory properties like categories. There is no reason to believe in a fundamental property of “appleness”. If I identify an object I experience as an apple, it is because it possesses a conjunction of other properties, like shape, color, perhaps taste, perhaps physical context, which lead me to deduce that this thing in front of me is one of those edible objects, grown on a plant, that I have encountered before. But consider the properties on the basis of which that identification is made. Sometimes it is argued that, for example, “red” or “redness” is also just a category, and so if you can show that the brain is a computer which computationally classifies optical stimuli according to wavelength, you have accounted for the existence of colors. It may also be added that different cultures have different color words, whose scope is not the same, so there is no reason to believe in colors above and beyond cognitive and cultural constructs, and wavelengths of light.
But what color categories classify are specific instances of specific shades of color. We can group and regroup the spectrum of shades differently, but in the end the instances of color have an existence independent of, and prior to, the words and categories we use to designate them. And that is the level at which the existence of color refutes any claim to the ontological completeness of a physics of colorless particles. You can organize the motions of particles so that they form state machines undergoing conditional changes of state that can be termed “classification of stimuli”. But you do not thereby magically bring into being the existence of color itself.
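As a toy illustration of that last point, here is a minimal sketch of a “classification of stimuli”: a trivial state machine that sorts wavelengths into colour words. The bin boundaries are rough illustrative values, not a claim about human vision; the point is that the machine produces labels while containing nothing coloured.

    # Toy "classification of stimuli": map a wavelength in nanometres to a
    # colour word. The bins are rough illustrative values.
    def classify_wavelength(nm):
        if 620 <= nm <= 750:
            return "red"
        if 495 <= nm < 570:
            return "green"
        if 450 <= nm < 495:
            return "blue"
        return "other"

    print(classify_wavelength(700))  # prints "red": a label, not a shade of red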
Ironically, in a sense, such magic is precisely what a functionalist theory of consciousness (and of the existence of conscious persons) claims: that just the existence of the appropriate state machine is enough to guarantee the existence of the associated experience or the associated person. Since the ontological ingredients of these experiences can be lacking in the computational substrate, the implication is that they come into being when the state machine does, in a type of lawful property dualism where the fundamental laws of psychophysical parallelism refer to computational properties on the physical side.
Now of course, people who believe in mind uploading would viscerally reject the idea that they are saying that nonmaterial qualia or even nonmaterial souls would materialize when their emulation started running on the computers of the post-singularity future. That’s supposed to be a dumb idea reserved perhaps for Hollywood, and writers and an audience whose minds are still half-choked with spiritual delusions about the nature of personhood, and for whom computers and technology are just props for a new type of magic. CGI can show a misty soul congealing around the microprocessors, ghosts of the departed can show up in virtual reality, Neo can have his “matrix vision” even when he’s unplugged and in the real world…
My thesis is that people who believe in standard materialist theories of mind, and who would pride themselves on knowing enough to reject that sort of hokum, are doing exactly the same thing on a higher level. These aren’t childish delusions because they are based on a lot of genuine knowledge. It is actually the case that you can put a chip in someone’s brain and it will restore certain simple neurological functions. It does appear that large tracts of the nervous system truly can be understood as a type of physical computer. But that’s because we are describing unconscious activities, activities that take place “out of sight”—more precisely, out of awareness—so problems like “where is the color” don’t even arise. “Consciousness” or “experience” is the problem, because it is the repository for all the types of Being that we experience, but which are not present in the ontology of the natural sciences.
I assume that by “color”, you mean the subjective experience of colour, not the fact that an object reflects or emits certain kinds of light. Because “reflecting and emitting certain kinds of light” can be explained in terms of “arrangement of particles”, in our universe.
I bet you don’t actually think like that. If it is obvious to you that an “arrangement of particles” universe cannot have subjective experience of colour in it, that’s because in the first place, it is obvious to you that it can’t have subjective experience period.
I do not have the energy to properly respond to your comment. It is simply too long. Instead, at least for now, I will just respond to this:
I came up with it myself. It’s a good question, because that is not true of most of the arguments I wield.
The problem with “strong emergence” is that it can be used to “explain” anything and is thus worthless.
Eliezer Yudkowsky http://lesswrong.com/lw/kr/an_alien_god
Probably someone saw your comment/house analogy and found it very clever and upvoted before reading on.
All cleverness credit goes to steven0461.
I like quantum mind, but despite the unity of superpositions matching the apparent unity of subjective experience, does it really give us much? I think the answer is no, at least until we have a better understanding of the physics of (quantum) computation, a better theory of computation in light of that, and a highly advanced computationalism/monadology in light of that. And even then Leibniz’ solution to the mind-body problem was literally Goddidit. (Which is an intriguing and coherent theory that explains all the evidence, but you’d think there’d be something better. Also Leibniz’ God causally influences monads, which aren’t supposed to be influence-able, so his metaphysic seems sort of broken, even if you can fix that bug with a neat trick or two maybe.) Quantum mind might help us do uploads, but it still wouldn’t have the answer to the mind-body problems, we still wouldn’t know if the uploads were conscious. Or is apparently matching a phenomenological property with a physical property (unity of experience/superposition) somehow a big philosophical step in the right direction?
You know, I do have this nagging doubt: why am I me, and not someone else? I do see a problem with subjective experience. On the one hand, it doesn’t make intuitive sense in a universe that runs on math, but on the other, what could there be beyond the causal stuff? I sense something fishy.
I too view reductionistic materialism as mainly an empirical claim. What I do view as necessary is the mere existence of something. I think, therefore “something” is. Maybe that “something” is limited to my personal experience, but whatever it is, it works somehow, and what I think won’t change it (unless magical thinking works, but then that is how the world runs).
I am not confident mind uploading will work, but I have empirical reasons to believe it may. First, consider cut&paste transportation. I’m confident it works because current physics says so: the universe doesn’t care whether I landed on Vulcan by shuttle or by energy beam; it’s the same configuration. The current laws of physics could be mistaken (they’re not even complete, so they are mistaken somewhere), but this “no identity” stuff looks like something that won’t go away.
Second, I imagined this thought experiment: suspend you, restart you in a green room, suspend you again, then restart you in the laboratory. Result: you have the memory of having been in a green room. The other possibility: suspend you, scan your brain, run the emulation in a simulated green room identical to the real one, pause the em, rewire your brain to match the em’s end state, then restart you in the laboratory. Result: you have the memory of having been in a green room. It’s the same configuration in both cases, so neither memory is less real than the other. Conclusion: you have been in a green room. It doesn’t matter whether it was physically or in uploaded form.
Note that I become much less confident when I think about leaving up my physical brain (edit: I mean, my original protoplasm wetware) for good.
If uploading doesn’t work, it can still be valuable: if I have goals beyond my own existence, a ghost may be better at achieving them than nothing at all. It also prevents total oblivion.
Maybe my basic point is that there is more to the “stuff” than just “being causal”. This is why I talk about abstracted causal models as ontologically deficient. Describing yourself or the world as a state machine just says that reality is a merry-go-round of “states” which follow each other according to a certain pattern. It says nothing about the nature of those states, except that they follow the pattern. This is why functionalist theories of mind lead to patternist theories of identity.
But it’s clear that what we can see of reality is made of more than just causality. Causal relations are very important constitutive relations, but then we can ask about the relata themselves, the things connected by causality, and we can also look for connecting relations that aren’t causal relations. Being shaped like a square isn’t a causal relation. It’s a fact that can play a causal role, but it is not itself made of causality.
These are ontological questions, and the fact that we can ask them and even come up with the tentative ontologies that we do, itself must have ontological implications, and then you can attempt an ontological analysis of these implication relations… If you could go down that path, using beyond-Einsteinian intellectual superpowers, you should figure out the true ontology, or as much of it as is accessible to our sort of minds. I consider Husserl to be the person who got the furthest here.
One then wants to correlate this ontology derived from a phenomenological-epistemological circle of reflection, with the world-models produced in physics and biology, but since the latter models just reduce to state-machine models, they cannot in themselves move you beyond ontological hollowness. Eventually you must use an ontology derived from the analysis of conscious experience itself, to interpret the formal ontology employed by natural science. This doesn’t have to imply panpsychism; you may be able to say that some objects really are “things without an inside”, and other objects do “have a subjectivity”, and be able to specify exactly what it is that makes a difference.
This is a little removed from the indexical problem of “why am I me, and not someone else?”
That’s a question which probably has no answer, beyond enumerating the causes of what you are. The deep reasons are reserved for why there is something rather than nothing, and why it is the sort of universe it is. But in a universe with many minds, you were always going to be one among many.
If you were to find that the nature of your personal existence looked rather improbable, that would revive the question a little. For example, if we thought electrons were conscious, then being a conscious being at the Avogadro’s-number-of-electrons level of organization, rather than at the single-electron level of organization, might look suspiciously improbable, given the much larger numbers of electrons in the universe. But then the question would be “why am I human, and not just an electron?” which isn’t quite what you asked.
I agree with this part.
I think what you’re saying is that in the present, there’s no difference between your current configuration having resulted from a life lived for 20+ years, and your current configuration having materialized five seconds ago. Well, if by hypothesis the configuration is exactly the same in the two scenarios under consideration, then the configuration is exactly the same. That much is true tautologically or by assumption.
Now if we ask whether it’s “still you” in both cases—one where you live out your life with physical continuity, and one in which you are briefly eradicated and then replaced by a physical duplicate—you do have some freedom of self-definition, so the answer may depend a little on the definition. (For now I will not consider the Yudkowskian possibility that there is a unique correct definition of personal identity to be found by superintelligent extrapolation of human cognitive dispositions, analogous to the CEV theory of how to arrive at a human-correct morality.)
But there are obvious and not-so-obvious problems with just saying “the configuration’s the same, therefore there’s no difference”. An obvious problem: suppose we make more than one copy of you—are they both “you”? Less obvious: what if the history of how the configuration was created does matter, in deciding whether you are the same person as before?
Does “having the memory of being in a green room” really imply “you have been in a green room”? We don’t normally trust memory that absolutely, and here we are talking about “memories” that were copied into the brain from a blueprint, rather than being caused in the usual fashion, by endogenous processing of sensory input. It is reasonable to imagine that you could be that person, whose brain was rewired in that way, and that after reflecting for long enough on the situation and on how the process worked, you concluded that it wasn’t you who was in that room, or even that nobody was in that room.
I’m not even convinced that the unlimited capacity to recreate a whole conscious mind “in midstream”, implied by so many thought-experiments, is necessarily possible. There are dynamical systems where you just can’t get to places deep in the state-space without crossing intermediate territory. If all that matters for identity is having the right ensemble of mesoscopic computational states (i.e. described at a level of coarseness, relative to the exact microphysical description, which would reduce a whole neuron to just a few bits), then it should be possible to create a person in mid-stream. But if the substrate of consciousness is a single quantum Hilbert space, for some coherent physical subsystem of the brain, then it’s much less obvious that you can do that. You might be able to bang together a classical simulation of what goes on in that Hilbert space, in mid-stream, but that’s the whole point of my version of quantum-mind thinking—that substrates matter, and just implementing a state machine doesn’t guarantee consciousness, let alone persistence of identity.
Not to me. For instance, while consciousness is still mysterious to me, it sure has causal power, if only the power to make me think about it—and the causal power to make Chalmers write papers about it.
I think I mean something stronger than that. You may want to re-read the relevant part of the Quantum Physics sequence. The universe doesn’t even encode the notion of distinct particles, so talking about putting this carbon atom there and that carbon atom here doesn’t even make sense. When you swap two atoms, you’re back to square one in a stronger sense than when you swap two numbered (but otherwise indistinguishable) billiard balls. Configuration space is folded on itself, so it really is the same configuration, not a different one that happens to be indistinguishable from the inside.
Err… Let my brain be replaced by a silicon chip. Let’s leave aside the question of personal identity. Is that thing conscious? It will behave the same as me, and write about consciousness the same way I do. If you believe that, and still believe it isn’t conscious, I guess you believe in p-zombies. I don’t. Maybe changing my substrate would kill me, but I strongly believe the result would still be conscious, and human in the dimensions I care about.
I agree that consciousness has causal power. I’m saying consciousness is not just causal power. It’s “something” that has causal power. The ontological deficiencies of materialist and computational theories of consciousness all lie in what they say about the nature of this “something”. They say it’s a collection of atoms and/or a computational state machine. The “collection of atoms” theory explains neither the brute features of consciousness like color, nor the subtle features like its “unity”. The state machine theory has the same problems and also requires that you reify a particular abstracted description of the physical reality. In both cases, if one were to insist that that really is the ontological basis of everything, property dualism would be necessary, just to accommodate phenomenological (experiential) reality. But since we now have a physics based on Hilbert spaces and exotic algebras, rather than on particles arranged in space, I would hope to find a physical ontology that can explain consciousness without property dualism, and in which the physical description of the brain contained “entities” which really could be identified with the “entities” constituting conscious experience, and not just correlated with them.
The basis for that statement is that when you calculate the transition probability from “particle at x0, particle at y0” to “particle at x1, particle at y1”, you sum over histories where x0 goes to x1 and y0 goes to y1, as well as over histories where x0 goes to y1 and y0 goes to x1. But note that in any individual history, there is persistence of identity.
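Spelled out schematically (illustrative notation), the calculation is

\[
\mathcal{A}\bigl[(x_0, y_0) \to (x_1, y_1)\bigr] \;=\; \mathcal{A}(x_0 \to x_1,\; y_0 \to y_1) \;\pm\; \mathcal{A}(x_0 \to y_1,\; y_0 \to x_1),
\qquad
P = \bigl|\mathcal{A}\bigr|^{2},
\]

with the plus sign for identical bosons and the minus sign for identical fermions. Each of the two terms is itself a sum over histories, and within any one of those histories the particle labels persist.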
I suppose the real logic here is something like “I am a particular configuration, and contributions to my amplitude came from histories in which my constituent particles had different origins.” So you ground your identity in the present moment, and deny that you even had a unique previous state.
Pardon me for being skeptical about that claim—that my present moment is either to be regarded as existing timelessly and not actually as one stage in a connected flow of time, or alternatively that it is to be regarded as a confluence of multiple intersecting histories that immediately then diverges into multiple futures rather than a unique one.
The ontological implications of quantum mechanics are far from self-evident. If I truly felt driven to believe in the many worlds interpretation, I would definitely want to start with an ontology of many histories that are self-contained but which are interacting neighbors. In a reality like that, there’s no splitting and joining, there are just inter-world “forces”. For some reason, no-one has even really tried to develop such a model, despite the conservation of probability density flow which allows a formalism like Bohmian mechanics to work.
Returning to the question of identity for particles, another option, which is more in line with my own ideas, is to think of the ontological state as a tensor product of antisymmetrized n-particle states where the size of n is variable both between the tensor factors and during the history of an individual factor. The ontology here is one in which the world isn’t really made of “particles” at all, it’s made of “entities” with a varying number of degrees of freedom, and a “particle” is just an entity with the minimum number of degrees of freedom. The fungibility of “particles” here would only apply to degrees of freedom within a single entity; the appearance of fungibility between different entities would have a dynamical origin. I have no idea whether you can do that in a plausible, uncontrived way; it’s yet another possibility that hasn’t been explored. And there are still more possibilities.
Yes, definitely. Especially if we’re going to talk about imperfect simulations, as has been discussed on one or two recent threads. A spambot, or a smiley face on a stick, is a type of “simulated human being”. We definitely agree there’s no-one home in either of those cases, right? The intuition that an upload would be conscious arises from the belief that a human brain is conscious, that a human brain consists of numerous discrete processors in decentralized communication with each other, and that being conscious must therefore somehow arise from being a particular sort of computational network. Although we don’t know the precise condition, the universality of computation implies that some sufficiently accurate simulation would reproduce that network of computation in a new medium, in a way that meets the unknown criterion of consciousness, and therefore conscious uploads must be possible.
I have argued in a recent comment that functionalism, and also ordinary atomistic materialism, implies property dualism. The constituent properties of consciousness, especially the basic sensory properties, do not exist in standard physical ontology, which historically was constructed explicitly to exclude those sensory properties. So if you want to extend physical ontology to account for consciousness as well, you have to add some new ingredients. Personally I hope for a new physical ontology which doesn’t have to be dualistic, and I even just mentioned a possible mathematical ingredient, namely a division of the world into “multi-particle” tensor factors rather than into single particles. If a single whole conscious experience could be identified with a single tensor factor, that would at least begin to explain the unity of consciousness; you would have elementary degrees of freedom canonically and objectively clustered together into complex unities, whereas in the current ontology, you just have mobs of particles whose edges are a bit fuzzy and arbitrary, something which provides a poor ontological foundation for a theory of objectively existing persons.
Returning to the issue of zombies, suppose for the purposes of argument that people really are sharply defined tensor factors of the wavefunction of the universe, and that conscious states, in our current formalism, would correspond to some of these antisymmetrized n-fermion wavefunctions that I’ve mentioned. The point is that, in this scenario, consciousness is always a property of a single tensor factor, but that you could simulate one of those very-high-dimensional tensor factors by using a large number of low-dimensional tensor factors. This implies that you could simulate consciousness without the simulation being conscious.
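Schematically (my restatement of the scenario, not established physics): consciousness, on this hypothesis, belongs to a single large tensor factor of the universal state space,

\mathcal{H}_{\text{mind}} \cong \mathbb{C}^N   (one factor),

whereas a simulation of it lives in a product of many small factors,

\mathcal{H}_{\text{sim}} = \mathcal{H}_1 \otimes \cdots \otimes \mathcal{H}_n,   with each \dim \mathcal{H}_i \ll N,

whose joint dynamics merely mimic those of the large factor. The two are computationally interchangeable but ontologically different, which is how the simulation could fail to be conscious.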
I don’t at all insist that this is how things work. The business with the tensor factors would be one of my better ideas, but it’s just a beginning—it’s a long conceptual trek from an n-fermion wavefunction to an intricate state of consciousness such as we experience—and the way things actually work may be very, very different. What I do insist is that none of the orthodox materialist theories of mind work. An explicit property dualism, such as David Chalmers has proposed, at least has room in its ontology for consciousness, but it seems contrived to me. So I think the answer is something that we haven’t thought of yet, involving quantum biology, new physical ontology, and revived respect for the ontology of mind.
Your writing is difficult for me to read. I’m tired right now, so I plan to answer properly in a few days; hopefully my brain will do a better job of processing it then.
I assume by “physical brain” here you mean one made of protoplasm.
What does contemplating the possibility that you aren’t running on such a brain now do to your confidence?
Yes, I meant protoplasm.
If I knew that I was currently running on a silicon chip (Gunm-style), then I would be highly confident that replacing that chip with another, identical one preserves my identity, because it’s the same configuration. Moreover, replacing my old chip with a newer one, before physical deterioration significantly affects the actual software processing, would probably work as well.
But if we’re talking about running my software on a different chip through, say, a virtual machine that emulates my original chip, then I would be less confident that it would still be me: about as confident as I am that an EM of my current wetware would still be me. Which is, currently, not confident enough to make the leap.
Ah, and if I do learn that I run on a chip, I won’t go crazy. I might be worried if I knew my wetware self were still running around, and I might not tell my mother, but besides that I don’t really care. If I knew that my wetware self was “dead”, then I would wonder whether I should feel sorry for him, or whether I’m actually him. Because I value my life, I know that my wetware self did too. But I’d probably get over it with the knowledge that the rest of the world (including my family) didn’t lose anything (or at least wouldn’t suspect a thing).
I’m confident an EM would not be a PZombie.
(nods) Makes sense.
Presumably the reason you have such confidence about the interchangeability of identical chips is because your experience encompasses lots of examples of such chips behaving interchangeably to support a given application. More generally, you’ve learned the lesson through experience that while two instances of the same product coming off similar assembly lines may not be 100% identical, they are reliably close enough along the dimensions we care about to be interchangeable.
And, lacking such experience about hardware/wetware interchangeability, you are properly less certain about the corresponding conclusion.
Presumably, if that sort of experience became commonplace, your confidence would increase.
As I often say: you are not your meat. You are the unique pattern of information-flow that occurs within your meat. The meat is not necessary to the information, but the information does require a substrate.
Consider the following set of statements:
1) “I am my meat.”
2) “I am the unique pattern of information-flow that occurs within my meat.”
3) “I am the class of patterns of information-flow that can occur within meat, of which this unique pattern is one example.”
4) “I am the class of patterns of information-flow that can occur within any substrate, of which this unique pattern is one example.”
5) “I am all the matter and energy in the universe.”
What sorts of experiences would constitute evidence for one of them over the others?
1 v 2 -- is your “meat” persistent over time? (It is not).
2 v 3 are not distinguishable -- 2 is 3.
4 is implied by 2/3. It is affirmed by physics simulations that have atomic-level precision, and by research like the Blue Brain project.
5 is excluded by the absence of non-local phenomena (‘psychic powers’).
I agree that my meat does not persist over time.
The class of patterns of information-flow that can occur within meat includes the pattern of information-flow that occurs within your meat. 3 therefore asserts that I am you, in addition to being me. 2 does not assert this. They seem like different claims to me, insofar as any of these claims are different from the others.
I’m not really sure what non-local phenomena are, or what they have to do with psychic powers, or what they have to do with the proper referent for “I”.
Missed that about the class. Makes a difference, definitely.
Two options: trust the assertions of those who are sure, or learn of them for yourself. :)
Good point. This is precisely the source of my doubt, and the reason why I’m not sure that changing substrate preserves identity.
The thing is, quantum mechanics makes me confident that if I go from configuration X to configuration Y, through a path that preserves identity, then any path from X to Y preserves my identity. But I am less confident about intermediate states (like the temporary emulation in the simulated green room).
Given your understanding of quantum mechanics, is your identity in this sense preserved from year to year today?
If it weren’t, would you care?
I’m not sure that’s a meaningful question. I undoubtedly change from year to year, so… But there is some kind of continuity, which I’m afraid could be broken by a change of substrate. (But then again, we could change my substrate bit by bit…)
If it weren’t, I would not care, because it wouldn’t break anything I value. If preservation of identity doesn’t even happen currently in our mundane world, I would be stupid to value it. And I’ll happily upload, then (modulo the mundane risk of being badly emulated of course).
But first, I must be convinced that either identity wasn’t preserved in the first place, or that uploading preserves identity, or that I was just confused because the world actually works like… who knows.
A change of substrate occurs daily for you; it’s just a change within a similar class of substrate. What beyond simple “yuck factor” gives you cause to believe that a transition from cells to silicon would impact your identity? That it would look different?
No, it doesn’t. You could argue that there’s a renewal of atoms (most notably the water), but swapping water molecules doesn’t have physical meaning, so… no. Heck, even cut-and-paste teleportation doesn’t change substrate.
The “yuck factor” I feel causes me to doubt this: if an EM of me were created during my sleep, what probability would I assign to waking up as silicon rather than as wetware? I’m not at all sure I can say 1/2.
Actually it’s more complicated than that. It’s not just water molecules: over time your genetic pattern changes, the ratio of cancerous to non-cancerous cells shifts, the ratio of senescent to non-senescent cells shifts, and the physical structures of the brain itself change.
Neurogenesis does occur in adults—so not even on a cellular level is your brain the same today as it was yesterday.
Furthermore, what makes you confident you are not already in a Matrix? I have no such confidence myself. It seems too implausible that we are in the parent of all universes, given that physics simulations work.
Note that neither of these developments is generally considered good.
Indeed. But they do demonstrate the principle in question.
The principle you’re trying to demonstrate is that one shouldn’t fear changing one’s substrate, since it’s already happening. So, no, they don’t.
Yes, they do. And that’s the end of this dialogue.
(EDIT: By “end of this dialogue” I meant that he and I were at an impasse, unable to adjust our underlying assumptions into coherent agreement in this discussion. They are too fundamentally divergent for “Aumanning.”)
It would just be an argument over the definition of “I”. Here, tabooing “I” is probably a useful exercise.
OK… what would you replace “I” with, then?
That’s the kind of worldview that was shown to be invalid in the last century, in all sorts of areas. On the quantum level, dualism is dead: an electron doesn’t have to be either in place A or in place B. Modern models of the human brain also describe system properties that are non-dualistic in nature. Dualism is not a good paradigm for modelling complex systems.
Just because an atom is usually either in place A or in place B doesn’t mean that the same dualism is true or useful for modelling other parts of our world. There’s nothing inherently truth-seeking in using atomistic physics as the central reference.
We are talking about mind-matter dualism: substance dualism, where matter is one type of thing and mind is another type of thing, and also property dualism, where everything is made of matter, but mental states involve material objects with extra properties outside of those usually discussed in physics. You appear to be talking about some other kind of “dualism”.
I think “extra properties outside of physics” conveys a stronger notion than what this view actually tries to explain. Property dualism, such as emergent materialism or epiphenomenalism, doesn’t really posit any extra properties beyond the standard physical ones; it is just that when those physical properties are arranged and interact in a certain way, they manifest what we experience as subjectivity and qualia, and those phenomena aren’t further reducible in an explanatory sense, even though they are reducible in the standard sense of being arrangements of atoms.
So, why is that therefore an incomplete understanding? I always thought of qualia as belonging to the same class of questions as, to quote Parfit, “Why anything, why this?” We may never know why there is something rather than nothing in the deep sense (not just in the sense of Lawrence Krauss saying “because of the relativistic quantum field”, but in the sense of “why the field in the first place”), even if it is the only logical way for a universe to exist given a final TOE; but that does not hinder our ability to figure out how the universe works from a scientific perspective. I feel it is the same when discussing subjective experience and qualia. The universe is here, it evolves, matter interacts and phenomena emerge, and when that process ends up at neural systems, those systems (maybe just a certain subset of them) experience what we call subjectivity. From this subjective vantage point, we can use science to look back at that evolved process, see how the physical material is architected, understand its dynamics, and create similar systems; but there may not be a deeper answer to why or what qualia are, other than their correlated emergence from physical instantiations and interactions. That is not anti-reductionist, and it is not anywhere near the same class of thought as substance dualism.
Robin Hanson wrote recently:
The basic argument structure is that public education either exists for “creating patriotic citizens for war” or it exists for “noble purposes”. That’s dualism. People who believe in strong reductionism tend to make arguments that are structured that way.
What do I mean by strong reductionism? Weak reductionism is the belief that the world is determined by the way it works at the lowest level. Strong reductionism is the belief that you can basically ignore the halting problem and understand how a system works by understanding how it works at the lowest level.
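To illustrate the distinction with a toy example of my own (nothing here is from the original comment): Rule 110 is an elementary cellular automaton whose lowest-level rule is completely known, yet it is Turing-complete, so general questions about its long-run behaviour run into the halting problem and can, in general, only be answered by running the system.

# A minimal Python sketch, assuming nothing beyond the standard library.
RULE = 110  # the entire lowest-level "physics", as an 8-bit lookup table

def step(cells):
    """Apply the Rule 110 update to a row of 0/1 cells (zero boundary)."""
    padded = [0] + cells + [0]
    return [(RULE >> ((padded[i - 1] << 2) | (padded[i] << 1) | padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

# Weak reductionism: `step` fully determines the system.
# Strong reductionism would claim we can therefore skip the simulation,
# but there is no general shortcut: e.g. "does pattern P ever appear?"
# is undecidable in general for this rule.
row = [0] * 40 + [1]
for _ in range(20):
    print("".join(".#"[c] for c in row))
    row = step(row)

Knowing the rule is not the same as understanding what the system does; for that, in general, you have to watch the whole thing run.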
loup-vaillant wants to use dualistic thinking for the way the full human brain works. When I sat in a lecture at the Free University of Berlin about how the human brain works, the professor told me: “You can’t understand how the human brain works if all you are doing is studying neurons; you actually need to study the full system in action.” Even if the system is determined by the way its neurons work, you can’t understand it at that level.
The things you can then say about the human brain don’t tend to be either true or false, but useful or not useful for a specific purpose. loup-vaillant, however, wants to convince his mother that dualism works at that level, that it makes sense to distinguish between true and false statements.
Reading your article, I see a possible problem:
There is something like “Agree Denotationally But Object Connotationally” here. Sometimes it is better to be wrong than to be right in a wrong context.
Imagine that a powerful majority of people share the same opinion. What kind of society would you prefer? One where it is considered OK to believe differently, because personal thoughts are exceptions from public rules? Or one where the opinion of the majority is considered so important that it is OK to attack people who disagree, and there is no good excuse for disagreement?
I have simply replaced “truth” with “opinion of a powerful majority”. Why is this legitimate? Simply because if someone has an opinion, they consider it truth. The more they agree with each other, the more sure they are. And if they are powerful enough, who dares to openly disagree? Especially if there is a rule that it is OK to attack people who disagree.
Therefore we have a rule that it is OK to have your own opinions about private matters. We have often seen that people who try to break this rule do it to increase their power, even if their professed goals are noble.
But this situation is different, because unlike those people, you are actually right. Therefore those social rules obviously don’t apply to you. Is there a good reason to follow them anyway?
Maybe I didn’t convey the meaning I wanted to. The reason I wrote this article was because I was called intolerant for merely pointing out that, given that I strongly believe X, I also strongly believe those who believe non-X to be mistaken. Merely noticing the link is enough to be called intolerant. This is nuts. Human, I know, but nuts nevertheless. Consistency is not intolerance.
I perfectly understand that I can be mistaken about X (infinite certainty, biases, and all that). I just can’t stand when people disagree and see no problem whatsoever. Then when I point out that there is a problem, I am called intolerant. I suppose people believe I want to force them to my side. Factual opinions are not utility functions, but people keep forgetting that. As if changing your mind meant you lost. Actually, you usually win when you do that.
I do understand that we, as imperfect humans, can agree to disagree. But not on principle. I’m okay with admitting that at present, trying to resolve the disagreement doesn’t seem worth the trouble, but we should at least reckon there is a problem.
The bottom line is, when there is disagreement, and one cares about truth, then there is a problem. This problem may, or may not, be worth solving, but pretending everyone can have contradictory opinions that should never be attacked is just weak.
Of course, we should never attack people.
If she’s arguing from a position of separate magisteria which have to be reasoned about differently, I would probably try this tactic. Point out that we do not automatically gravitate to reasoning correctly about mundane things; you can use examples from Greek philosophers and alchemists and so on. Correct processes of mundane reasoning are something we’ve had to develop over time, by refining our methods in situations where we could tell if our conclusions were wrong.
That being the case, how does she know that her different procedure for reasoning about non-mundane things is one that works? If it were simply wrong, how would she be able to tell? If her procedure for reasoning about non-mundane things can be used to draw contradictory conclusions (it almost certainly can), point out that you have on the one hand a set of confusing apparent contradictions that must somehow all be true, and on the other hand the possibility that the reasoning procedure simply doesn’t work.
From what I read, the procedure for reasoning about non-mundane things is used to avoid drawing any conclusions whatsoever, much less contradictory ones. It’s intellectual cowardice masquerading as deep wisdom. (Sorry for dissing your mom, loup-vaillant.)
I largely agree with Cyan, but with a little more empathy for your mom’s viewpoint. For example, you write:
So you throw out a description and a quantifier, and slap a label on the result. Doesn’t that sound a little similar to naive set theory? Maybe it’s not as straightforward as it looks.
I’m not actually resistant to defining “reality” your way; I think it’s not actually a step toward sets that don’t contain themselves. But it takes some sophistication to see that, and your mom might lack the formal skills to discriminate innocent-looking “logic” that leads to paradox from innocent-looking logic that doesn’t. Note that she needn’t have studied set theory to have run into similar exercises in labeling and deductive argument that subtly lead to insane results.
If that’s the case, she should see a god which really does hate homosexuality and eating pork, considers working on the sabbath worthy of death, or wants the whole world to live under Sharia law, as equiprobable with one that loves everyone. She most likely behaves as if she had some means of discriminating between supernatural hypotheses, even if she disavows being able to.
Have you read What the Tortoise Said to Achilles? It’s reprinted in Gödel, Escher, Bach: an Eternal Golden Braid.
I’m not sure. Naively I would expect most children of post-Christian agnostics to grow up to have some kind of mystical New Age beliefs.
Because they’ve been given space to develop a spiritual worldview and no particular reason not to, but not a framework for it, so they end up adopting a semi-random gaggle of relatively nonthreatening and nontotalizing supernaturalist beliefs? That’s plausible, but it won’t give you anything self-consistent. Maybe aggressive posthuman rationalism is what you get when you try to culture New Age beliefs in someone sensitive to ideological contradictions.
I think you would be just as likely to find them turning to some “strong” religion or even mainstream skepticism (of the kind that treats cryonics and the singularity as supernatural claims).
Yeah, that happens—a fair number of the born-again narratives I’ve come across read like that. But the reason I was thinking of this group in particular is that, for a lot of people on the post-Christian agnostic spectrum, organized religions really are the bad guys: nondenominational Christianity is usually given a pass, but actual churches get blamed for all sorts of stuff. That’s a nontrivial obstacle for someone raised in that milieu.
Dharmic religions don’t seem to count as “organized” in this context, for reasons which are kind of opaque to me but probably have to do with exoticism. So I expect a lot of Western Buddhists and Hindus come out of this sort of space too—n=1, but that’s more or less how my college roommate found Hinduism.
Unfortunately, radical Islam also frequently gets a similar pass on grounds of exoticism, not to mention being a “victim of the crusades and the war on terror”.