A few months ago, Rob Bensinger made a rather long post (that even got curated) in which he expressed his views on several questions related to personal identity and anticipated experiences in the context of potential uploading and emulation. A critical implicit assumption behind the exposition and reasoning he offered was the adoption of what I have described as the “standard LW-computationalist frame.” In response to me highlighting this, Ruben Bloom said the following:
I differ from Rob in that I do think his piece should have flagged the assumption of ~computationalism, but think the assumption is reasonable enough to not have argued for in this piece.
I do think it is interesting philosophical discussion to hash it out, for the sake of rigor and really pushing for clarity. I’m sad that I don’t think I could dive in deep on the topic right now.
However, as I pointed out in that thread, the lack of argumentation or discussion of this particular assumption throughout the history of the site means it’s highly questionable to say that assuming it is “reasonable enough”:
As TAG has written a number of times, the computationalist thesis seems not to have been convincingly (or even concretely) argued for in any LessWrong post or sequence (including Eliezer’s Sequences).
TAG himself made a similar and important point in a different comment on the same post:
Naturalism and reductionism are not sufficient to rigorously prove either form of computationalism—that performing a certain class of computations is sufficient to be conscious in general, or that performing a specific one is sufficient to be a particular conscious individual.
This has been going on for years: most rationalists believe in computationalism, none have a really good reason to.
Arguing down Cartesian dualism (the thing rationalists always do) doesn’t increase the probability of computationalism, because there are further possibilities, including physicalism-without-computationalism (the one rationalists keep overlooking), and scepticism about consciousness/identity.
One can of course adopt a belief in computationalism, or something else, on the basis of intuitions or probabilities. But then one is very much in the realm of Modest Epistemology, and needs to behave accordingly.
“My issue is not with your conclusion, it’s precisely with your absolute certainty, which imo you support with cyclical argumentation based on weak premises”.
And, indeed (ironically enough), in response to andesoldes’s excellent distillation of Rob’s position and subsequent detailed and concrete explanation of why it seems wrong to have this degree of confidence in his beliefs, Bensinger yet again replied in a manner that seemed to indicate he thought he was arguing against a dualist who thought there was a little ghost inside the machine, an invisible homunculus that violated physicalism:
I agree that “I made a non-destructive software copy of myself and then experienced the future of my physical self rather than the future of my digital copy” is nonzero Bayesian evidence that physical brains have a Cartesian Soul that is responsible for the brain’s phenomenal consciousness; the Cartesian Soul hypothesis does predict that data. But the prior probability of Cartesian Souls is low enough that I don’t think it should matter.
You need some prior reason to believe in this Soul in the first place; the same as if you flipped a coin, it came up heads, and you said “aha, this is perfectly predicted by the existence of an invisible leprechaun who wanted that coin to come up heads!”. Losing a coinflip isn’t a surprising enough outcome to overcome the prior against invisible leprechauns.
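(To spell out the Bayesian point in the quoted passage, here is a minimal numeric illustration; the prior and likelihood values below are made-up placeholders of my own, not figures anyone in the discussion stated.)

```python
# Illustrative Bayes update: a ~factor-of-2 likelihood ratio (like calling a
# coin flip) cannot rescue a hypothesis that starts with a tiny prior.

prior_soul = 1e-9          # assumed prior for the "Cartesian Soul" hypothesis
p_obs_given_soul = 1.0     # the hypothesis predicts the observation outright
p_obs_given_no_soul = 0.5  # otherwise the observation is a 50/50 "coin flip"

posterior_soul = (prior_soul * p_obs_given_soul) / (
    prior_soul * p_obs_given_soul + (1 - prior_soul) * p_obs_given_no_soul
)
print(posterior_soul)  # ~2e-9: the prior roughly doubles, but stays negligible
```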
But, as andesoldes later ably pointed out:
You’re missing the bigger picture and pattern-matching in the wrong direction. I am not saying the above because I have a need to preserve my “soul” due to misguided intuitions. On the contrary, the reason for my disagreement is that I believe you are not staring into the abyss of physicalism hard enough. When I said I’m agnostic in my previous comment, I said it because physics and empiricism lead me to consider reality as more “unfamiliar” than you do (assuming that my model of your beliefs is accurate). From my perspective, your post and your conclusions are written with an unwarranted degree of certainty, because imo your conception of physics and physicalism is too limited. Your post makes it seem like your conclusions are obvious because “physics” makes them the only option, but they are actually a product of implicit and unacknowledged philosophical assumptions, which (imo) you inherited from intuitions based on classical physics.
More specifically, as I wrote in response to Seth Herd, “[the] standard LW-computationalist frame reads to me as substantively anti-physicalist and mostly unreasonable to believe in” for reasons I gave in my explanation to Bloom:
What has been argued for, over and over again, is physicalism, and then more and more rejections of dualist conceptions of souls.
That’s perfectly fine, but “souls don’t exist and thus consciousness and identity must function on top of a physical substrate” is very different from “the identity of a being is given by the abstract classical computation performed by a particular (and reified) subset of the brain’s electronic circuit,” and the latter has never been given compelling explanations or evidence.[1] This is despite the fact that the particular conclusions that have become part of the ethos of LW about stuff like brain emulation, cryonics, etc. are necessarily reliant on the latter, not the former.
As a general matter, accepting physicalism as correct would naturally lead one to the conclusion that what runs on top of the physical substrate works on the basis of… what is physically there (which, to the best of our current understanding, can be represented through Quantum Mechanical probability amplitudes), not what conclusions you draw from a mathematical model that abstracts away quantum randomness in favor of a classical picture, the entire brain structure in favor of (a slightly augmented version of) its connectome, and the entire chemical make-up of it in favor of its electrical connections. As I have mentioned, that is a mere model that represents a very lossy compression of what is going on; it is not the same as the real thing, and conflating the two is an error that has been going on here for far too long. Of course, it very well might be the case that Rob and the computationalists are right about these issues, but the explanation up to now should make it clear why it is on them to provide evidence for their conclusion.
The accuracy of this interpretation of the LW-computationalist view seems to have been confirmed by its proponents: implicitly, by Bensinger continuing the conversation with andesoldes without mentioning any disagreement when the latter explicitly asked him, “First off, would you agree with my model of your beliefs? Would you consider it an accurate description?”, and by cousin_it saying that “uploading [going] according to plan” means “the map of your neurons and connections has been copied into a computer”; and explicitly, by Seth Herd claiming that “your mind is a pattern instantiated in matter” and by Bloom, who wrote the following:
To answer your question in your other comment. I reckon with some time I could write an explainer for why we should very reasonably assume consciousness is the result of local brain stuff and nothing else (and also not quantum stuff), though I’d be surprised if I could easily write something so rigorous that you’d find it fully satisfactory.
(Emphasis mine.)
When Seth Herd restated computationalist conclusions, once again without much argumentation (“Noncomputational physicalism sounds like it’s just confused. Physics performs computations and can’t be separated from doing that. Dual aspect theory is incoherent because you can’t have our physics without doing computation that can create a being that claims and experiences consciousness like we do”), I summarized a relevant part of my skepticism as follows:
As I read these statements, they fail to contend with a rather basic map-territory distinction that lies at the core of “physics” and “computation.”
The basic concept of computation at issue here is a feature of the map you could use to approximate reality (i.e., the territory). It is merely part of a mathematical model that, as I’ve described in response to Ruby earlier, represents a very lossy compression of the underlying physical substrate.[2] This is because, in this restricted and epistemically hobbled ontology, what is given inordinate attention is the abstract classical computation performed by a particular subset of the brain’s electronic circuit. This is what makes it anti-physicalist, as I have explained:
[...]
So when you talk about a “pattern instantiated by physics as a pure result of how physics works”, you’re not pointing to anything meaningful in the territory, rather only something that makes sense in the particular ontology you have chosen to use to view it through, a frame that I have explained my skepticism of already.
So, to finish up the exposition and background behind this question: what are the actual arguments in favor of the computationalist thesis? If you accept that philosophy,[1] why do you not consider computationalism anti-physicalist for failing a basic map-territory distinction, by reifying ideas like “computation” as parts of the territory rather than as mere artifacts of a mathematical model that attempts, imperfectly and lossily, to approximate reality?
[1] In my current model of this situation, I have some strong suspicions about the reasons why LW converged on this worldview despite the complete lack of solid argumentation in its favor, but I prefer to withhold the psychoanalysis and pathologizing of my interlocutors (at least until after the object-level matters are resolved satisfactorily).
A starting point is self-reports. If I truthfully say “I see my wristwatch”, then, somewhere in the chain of causation that eventually led to me uttering those words, there’s an actual watch, and photons are bouncing off it and entering my eyes, then stimulating neurons, etc.
So by the same token, if I say “your phenomenal consciousness is a salty yellow substance that smells like bananas and oozes out of your bellybutton”, and then you reply “no it isn’t!”, then let’s talk about how it is that you are so confident about that.
(I’m using “phenomenal consciousness” as an example, but ditto for “my sense of self / identity” or whatever else.)
So here, you uttered a reply (“No it isn’t!”). And we can assume that somewhere in the chain of causation is ‘phenomenal consciousness’ (whatever that is, if anything), and you were somehow introspecting upon it in order to get that information. You can’t know things in any other way—that’s the basic, hopefully-obvious point that I understand Eliezer was trying to make here.
Now, what’s a ‘chain of causation’, in the relevant sense? Let’s start with a passage from Age of Em:

[...]
In other words, if your body temperature had been 0.1° colder, or if you were hanging upside down, or whatever, then the atoms in your brain would be configured differently in all kinds of ways … but you would still say “no it isn’t!” in response to my proposal that maybe your phenomenal consciousness is a salty yellow substance that oozes out of your bellybutton. And you would say it for the exact same reason.
This kind of thinking leads to the more general idea that the brain has inputs (e.g. photoreceptor cells), outputs (e.g. motoneurons … also, fun fact, the brain is a gland!), and algorithms connecting them. Those algorithms describe what Hanson’s “degrees of freedom” are doing from moment to moment, and why, and how. Whenever brains systematically do characteristically-brain-ish things—things like uttering grammatical sentences rather than moving mouth muscles randomly—then the explanation of that systematic pattern lies in the brain’s inputs, outputs, and/or algorithms. Yes, there’s randomness in what brains do, but whenever brains do characteristically-brainy-things reliably (e.g. disbelieve, and verbally deny, that your consciousness is a salty yellow substance that oozes out of your bellybutton), those things are evidently not the result of random fluctuations or whatever, but rather they follow from the properties of the algorithms and/or their inputs and outputs.
That doesn’t quite get us all the way to computationalist theories of consciousness or identity. Why not? Well, here are two ways I can think of to be non-computationalist within physicalism:
One could argue that consciousness & sense-of-identity etc. are just confused nonsense reifications of mental models with no referents at all, akin to “pure white” [because white is not pure, it’s a mix of wavelengths]. (Cf. “illusionism”.) I’m very sympathetic to this kind of view. And you could reasonably say “it’s not a computationalist theory of consciousness / identity, but rather a rejection of consciousness / identity altogether!” But I dunno, I think it’s still kinda computationalist in spirit, in the sense that one would presumably instead make the move of choosing to (re)define ‘consciousness’ and ‘sense-of-identity’ in such a way that those words point to things that actually exist at all (which is good), at the expense of being inconsistent with some of our intuitions about what those words are supposed to represent (which is bad). And when you make that move, those terms almost inevitably wind up pointing towards some aspect(s) of brain algorithms.
One could argue that we learn about consciousness & sense-of-identity via inputs to the brain algorithm rather than inherent properties of the algorithm itself—basically the idea that “I self-report about my phenomenal consciousness analogously to how I self-report about my wristwatch”, i.e. my brain perceives my consciousness & identity through some kind of sensory input channel, and maybe also my brain controls my consciousness & identity through some kind of motor or other output channel. If you believe something like that, then you could be physicalist but not a computationalist, I think. But I can’t think of any way to flesh out such a theory that’s remotely plausible.
I’m not a philosopher and am probably misusing technical terms in various ways. (If so, I’m open to corrections!)
(Note: I find these kinds of conversations very time-consuming and they often don’t go anywhere, so I’ll read replies but am pretty unlikely to comment further. I hope this is helpful at all. I mostly didn’t read the previous conversation, so I’m sorry if I’m missing the point, answering the wrong question, etc.)
That’s fine. Your answer doesn’t quite address the core of my arguments and confusions, but it’s useful in its own right.
As I understood it, your objection was that computation is an abstraction/compression of the real thing, which is not the same as the real thing. (Is that correct?)
First, let’s check how important the “compression” part is. Imagine that someone emulated your brain and body without compression—in a huge computer the size of the Moon, faithfully, particle by particle, including whatever quantum effects are necessary (for the sake of the thought experiment, let’s assume that this is possible). Would such a simulation be you in some sense?
If we get that out of the way, I think the part about compression has been addressed. Lossy compression loses some information, but the argument was that consciousness is implemented in a robust way, and can survive some noise. Too much noise would ruin it. On the other hand, individual neurons die every day, so it seems like a quantitative question: not whether the simulation would be you, but how much the simulation would be you. Maybe simulating 50% of the neurons could still be 99% you, although this is just speculation.
I think the standard argument that quantum states are not relevant to cognitive processes is “The importance of quantum decoherence in brain processes”. This is enough to convince me that going through a classical teleporter or copying machine would preserve my identity, and in the case of a copying machine I would experience an equal subjective probability of coming out as the original or the copy. It also seems to strongly imply that mind uploading into some kind of classical artificial machine is possible, since it’s unlikely that all or even most of the classical properties of the brain are essential. I agree that there’s an open question about whether mind emulation on any arbitrary substrate (like, for instance, software running on CMOS computer chips) preserves identity even if it shows the same behavior as the original.
Could you say more about this? Why is this unlikely?
There seems to generally be a ton of arbitrary path-dependent stuff everywhere in biology that evolution hasn’t yet optimized away, and I don’t see a reason to expect the brain’s implementation of consciousness to be an exception.
Agreed about its implementation of awareness, as opposed to being unaware but still existing. What about its implementation of existing, as opposed to nonexistence?
Based on this comment I guess by “existing” you mean phenomenal consciousness and by “awareness” you mean behavior? I think the set of brainlike things that have the same phenomenal consciousness as me is a subset of the brainlike things that have the same behavior as me.
Well I’d put it the other way round. I don’t know what phenomenal consciousness is unless it just means the bare fact of existence. I currently think the thing people call phenomenal consciousness is just “having realityfluid”.
If you have a copying machine that is capable of outputting more than one (identical) copy, and you do the following:
first, copy yourself once
then, immediately afterwards, take that copy and copy it 9 times (for a total of 1 original and 10 copies)
Do you then expect a uniform 9.09% subjective probability of “coming out” of this process as any of the original + copies, or a 50% chance of coming out as the original and a 5% chance of coming out as any given copy?
If it’s immediate enough that all the copies end up indistinguishable, with the same memories of the copying process, then uniform, otherwise not uniform.
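(To make the arithmetic in this exchange explicit, here is a minimal sketch of the two candidate assignments being compared; the names and the branch-splitting rule below are my own illustrative labels, not anyone’s stated formal model.)

```python
# Two candidate subjective-probability assignments over "1 original + 10 copies".

n_copies = 10
copies = [f"copy_{i}" for i in range(1, n_copies + 1)]

# (a) Uniform over all 11 resulting people: 1/11 each (~9.09%).
uniform = {person: 1 / (1 + n_copies) for person in ["original"] + copies}

# (b) Branch-by-branch split: 50% stays with the original at the first copying
#     event; the remaining 50% is divided among the 10 eventual copies (5% each).
split = {"original": 0.5}
split.update({person: 0.5 / n_copies for person in copies})

print(round(uniform["original"], 4))       # 0.0909
print(split["original"], split["copy_1"])  # 0.5 0.05
```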
I think we should disentangle “consciousness” from “identity” in general and when talking about computationalism in particular.
I don’t think there is any reasonable alternative to computationalism when we are talking about the nature of consciousness. But this doesn’t seem to actually imply that my “identity”, whatever it is, will necessarily be preserved during teleportation or uploading. I think at our current state of understanding, it’s quite coherent to be computationalist about consciousness and eliminativist towards identity.
Computationalism is an ethical theory, so it is fine for it to be based on high-level abstractions—ethics is arbitrary.