the lack of argumentation or discussion of this particular assumption throughout the history of the site means it’s highly questionable to say that assuming it is “reasonable enough”
While personal identity has mostly not received a single overarching post focusing solely on arguing all the details, its possible points of contention have been discussed to varying degrees: Thou Art Physics, which focuses on getting the idea that you are made of physics into your head; Identity Isn’t in Specific Atoms, which tries to dissolve the common intuition that the specific atoms matter; and Timeless Identity, which culminates various elements of those posts in the idea that even if you duplicate a person, both copies are still ‘you’. There is more, some of which you’ve linked, but I consider it strange to say that there’s a lack of discussion.
The sequence that the posts I’ve linked are a part of has other discussions, and though I agree that they often argue against a baseline of dualism, I believe they have many points relevant to an argument for computationalism. I think there is a lack of discussion about the very specific points you have a tendency to raise, but as I’ll discuss, I find myself confused about their relevance to varying degrees.
There’s also the facet of decision-theory posting that LW enjoys, which encourages this class of view. Decision problems like Newcomb’s Paradox or Parfit’s hitchhiker emphasize the idea that “you can be instantiated inside a simulation to predict your actions, and you should act, roughly, as if you control the simulation’s actions because of the similarity of your computational implementations”. Of course, this works even without assuming the simulations are conscious, but I do think it has led to clearer consideration because it helps break past people’s intuitions. Those intuitions are not made for the scenarios that we face, or will potentially have to face.
Bensinger yet again replied in a manner that seemed to indicate he thought he was arguing against a dualist who thought there was a little ghost inside the machine, an invisible homunculus that violated physicalism
Because most often the people suggesting such are dualists, or hold a lot of similar ideas even if they discuss them in an “I am uncertain” manner. I agree Rob could’ve given a better reply, but it was a reasonable assumption. (I personally found Andesolde’s argument confused, with the later parts focusing on first-person subjective experience in a way I think is not really useful to consider. There are uncertainties in there, but besides the idea that the mind could be importantly quantum in some way, they didn’t seem that relevant.)
That’s perfectly fine, but “souls don’t exist and thus consciousness and identity must function on top of a physical substrate” is very different from “the identity of a being is given by the abstract classical computation performed by a particular (and reified) subset of the brain’s electronic circuit,” and the latter has never been given compelling explanations or evidence.
I agree it hasn’t been argued in depth — but there have definitely been arguments about the extent to which QM affects the brain. The usual conclusion was that the effect is minor, and/or that we have no evidence for believing it necessary. I would need a decently strong argument that QM is in some way computationally essential.
the entire brain structure in favor of (a slightly augmented version of) its connectome, and the entire chemical make-up of it in favor of its electrical connections.
That more than just the electrical signals matters is understood by most. There’s plenty of uncertainty about the level of detail needed to simulate/model the brain. Computationalism doesn’t imply that only the electrical signals matter; it implies that whatever makes up the computation matters, which can be done via tiny molecules & electrons, water pipes, or circuitry. Simplifying a full molecular simulation down to its functional implications is just one example of how far we can simplify, which I believe should extend pretty far.
“your mind is a pattern instantiated in matter”
I agree that people shouldn’t assume that just neurons/connections are enough, but I doubt that is a strongly held belief; nor is it a required sub-belief of computationalism.
You assume too much about Bensinger’s reply when he didn’t respond further, especially as he was responding to a subargument in the whole chain.
As well, the quoted sentence by Herd is very general — allowing both the neuron connections and molecular behavior.
(There’s also the fact that people often handwave over the specifics of what part of the brain you’re extracting, because they’re talking about the general idea through some specific example that people often think about. Such as a worm’s neurons.)
For example, for two calculators, wouldn’t you agree with a description of them as having the same ‘pattern’ even if all the atoms aren’t in the same position relative to a table? You agree-reacted on one of dirk’s comments:
Would the idea that a calculator has some pattern, some logical rules that it is implementing via matter, thus be non-physicalist about calculators? A brain follows the rules of reality, with many implications about how certain molecules constrain movement, how these neuron spikes cause hunger, etcetera. There is a logical/computational core to this that can be reimplemented.
The basic concept of computation at issue here is a feature of the map you could use to approximate reality (i.e., the territory). It is merely part of a mathematical model that, as I’ve described in response to Ruby earlier, represents a very lossy compression of the underlying physical substrate
Why shouldn’t we decide based on a model/category? There are presumably edge-cases to what counts as a ‘human’ or ‘person’ as well. There very well may be strange setups for which we can’t reasonably determine, to our liking, whether they computably implement a person, a chihuahua, or the weather of Jupiter.
We could try to develop a theory of identity down to the last atom, still operating on a model but at least an extremely specific model, which would presumably force us to narrow in on confusing edge-cases. This would be interesting to do once we have the technology, though I expect there to be edge-cases no matter what, where our values aren’t perfectly defined, which might mean preserving option value.
I’m also skeptical that most methods present a very lossy compression, even if we assume classical circuits. Why would they? (Or, if you’re going to raise the idea of only getting some specific sub-class of neuron information, then sure, that probably isn’t enough, but I don’t care about that.)
From this angle where you believe that computation is not fundamental or entirely well-defined, you can simplify the computationalist proposal as “merely” applying in a very large class of cases. Teleporters have no effect on personal identity due to similarity in atomic makeup up to some small allowance for noise (whether simple noise, or because we can’t exactly copy all the quantum parts; I don’t care if my lip atoms are slightly adjusted). Cloning does not have a strictly defined “you” and “not-you”. Awakening from cryogenics counts as a continuation of you. A simulation implementing all the atomic interactions of your mind is very very likely to be you, and a simulation that has simplified many aspects of that down is also still very likely to be you.
Though there are definitely people who believe that the universe can fundamentally be considered computation, which I find plausible, especially given the lack of other lenses that aren’t just “reality is”. Against them, your objection does not work without further argumentation.
Going back to the calculator example, you would need to provide argumentation for why the essential parts of the brain can’t be implemented computationally.
What I value about me is the pattern of beliefs, memories, and values.
The attempted mind-reading of others is (justifiably) seen as rude in conversations over the Internet, but I must nonetheless express very serious skepticism about this claim, as it’s currently written.
For one, I do not believe that “beliefs” and “values” ultimately make sense as distinct, coherent concepts that carve reality at the joints. This topic has been talked about before on LW a number of times, but I still fully endorse Charlie Steiner’s distillation of it in his excellently-written Reducing Goodhart sequence.
Concepts can still be useful categorizations even if they aren’t hard and fast. Beliefs are often distinct from values in humans. They are vague and intertwined with each other: a belief forms a piece of value that doesn’t fade away even once the belief is proven false, a value endorses a belief for no reason... They are still not one and the same. I also don’t see the relevance of this to the quoted statement.
I agree with what they said. I value my pattern of beliefs, memories, and values. I don’t care about my specific spatial position for identity (except insofar as I don’t want to be in a star), or if I’m solely in baseline reality.
They are vague and intertwine with each other, but they do behave differently. Your objections to CEV also seem to me to follow a similar pattern as this, where you go “this does not have a perfect foundational backing” to thus imply “it has no meaning, and there’s nothing to be said about it”. The consideration of path-dependency in CEV has been raised before, and it is an area that would be great to understand more.
My values would say that I meta-value my beliefs to be closer to the truth. There are ambiguities in this area. What about beliefs affecting my values? There’s more uncertainty in that region of what I wish to allow.
In any case, the rather abstract “beliefs, memories and values” you solely purport to value fit the category of professed ego-syntonic morals much more so than the category of what actually motivates and generates human behavior, as Steven Byrnes explained in an expectedly outstanding way:
I’d need a whole extra long comment to respond to all the various other parts of your comment chain, such as indexicality, or the part which goes along the lines of saying “professed values are not real”. That seems decently false, overly cynical, and also not what Byrnes’ linked post tries to imply. I’d say professed values are often what you tend towards, but your basic drives are often strong enough to stall out methods like “spend long hours solving some problem” due to many small opportunities. If you were given a big button to do something you profess to value, then you’d press it.
This also raises the question of: Why should I care that the human motivational system has certain basic drives driving it forward? Give me a big button and I’d alter my basic drives to be more in-line with my professed values. The basic drives are short-sighted.
(Well, I’d prefer to wait until superintelligent help, because there’s lots of ways to mess that up)
Of course, that I don’t have the big button has practical implications, but I’m primarily arguing against the cynical denial of having any other values than what these basic drives allow.
(I don’t entirely like my comment; it could be better. I’d suggest breaking the parent question-post up into a dozen smaller questions if you want discussion, as the many facets could each have long comments dedicated to them. Which is part of why there’s no single post! You’re touching on everything from the theory of how the universe works, to how real our professed preferences are, to whether our models of reality are useful enough for theories of identity, to indexicality, to whether it makes sense to talk about a logical pattern, etc. Then there are things like andesolde’s posts, which you cite but I’m not sure I rely on, where I’d have various objections to their idea of reality as subjective-first. I’ll probably find more I dislike about my comment, or realize I could have worded or explained things better, once I come back to read over it with fresh eyes.)
While personal identity has mostly not received a single overarching post focusing solely on arguing all the details, its possible points of contention have been discussed to varying degrees: Thou Art Physics, which focuses on getting the idea that you are made of physics into your head; Identity Isn’t in Specific Atoms, which tries to dissolve the common intuition that the specific atoms matter; and Timeless Identity, which culminates various elements of those posts in the idea that even if you duplicate a person, both copies are still ‘you’. There is more, some of which you’ve linked, but I consider it strange to say that there’s a lack of discussion.
I appreciate you linking these posts (which I have read and almost entirely agree with), but what they are doing (as you mentioned) is arguing against dualism, or in favor of physicalism, or against the view that classical (non-QM) entities like atoms have their own identity and are changed when copied (in a manner that can influence the fundamental identity of a being like a human).
What there has been a lack of discussion of is “having already accepted physicalism (and reductionism etc), why expect computationalism to be the correct theory?” None of those posts argue directly for computationalism; you can say they argue indirectly for it (and thus provide Bayesian evidence in its favor) by arguing against common opposing views, but I have already been convinced that those views are wrong.
And, as I have written before, physicalism-without-computationalism seems much more faithful to the core of physicalism (and to the reasons that convinced me of it in the first place) than computationalism does.
There’s also the facet of decision-theory posting that LW enjoys, which encourages this class of view. Decision problems like Newcomb’s Paradox or Parfit’s hitchhiker emphasize the idea that “you can be instantiated inside a simulation to predict your actions, and you should act, roughly, as if you control the simulation’s actions because of the similarity of your computational implementations”. Of course, this works even without assuming the simulations are conscious, but I do think it has led to clearer consideration because it helps break past people’s intuitions. Those intuitions are not made for the scenarios that we face, or will potentially have to face.
One man’s modus ponens is another man’s modus tollens. I agree that the LW-style decision theory posting encourages this type of thinking, and you seem to infer that the high-quality reasoning in the decision theory posts implies that they give good intuitions about the philosophy of identity.
I draw the opposite conclusion from this: the fact that the decision theory posts seem to work on the basis of a computationalist theory of identity makes me think worse of the decision-theory posts.
I agree it hasn’t been argued in depth — but there have definitely been arguments about the extent to which QM affects the brain. The usual conclusion was that the effect is minor, and/or that we have no evidence for believing it necessary.
Can you link to some of these? I do not recall seeing anything like this here.
it implies that whatever makes up the computation matters
What is “the computation”? Can we try to taboo that word? My comment to Seth Herd is relevant here (“The basic concept of computation at issue here is a feature of the map you could use to approximate reality (i.e., the territory). It is merely part of a mathematical model that, as I’ve described in response to Ruby earlier, represents a very lossy compression of the underlying physical substrate. [...] So when you talk about a “pattern instantiated by physics as a pure result of how physics works”, you’re not pointing to anything meaningful in the territory, rather only something that makes sense in the particular ontology you have chosen to use to view it through, a frame that I have explained my skepticism of already.”) You seem to be thinking about computation as being some sort of ontologically irreducible feature of reality that can exist independently of a necessarily lossy and reductive mathematical model that tries to represent it, which doesn’t make much sense to me.
I don’t know if this will be helpful to you or not in terms of clarifying my thinking here, but I see this point here by you (asking “what makes up the computation”) as being absolutely analogous to asking “what makes up causality,” to which my response is, as Dagon said, that at the most basic level, I suspect “there’s no such thing as causation, and maybe not even time and change. Everything was determined in the initial configuration of quantum waveforms in the distant past of your lightcone. The experience of time and change is just a side-effect of your embeddedness in this giant static many-dimensional universe.”
Why shouldn’t we decide based on a model/category?
Well, we can, but as I tried to explain above, I see this model as being very lossy and unjustifiably privileging the idea of computation, which does not seem to make sense to me as a feature of the territory as opposed to the map.
Your objections to CEV also seem to me to follow a similar pattern as this, where you go “this does not have a perfect foundational backing” to thus imply “it has no meaning, and there’s nothing to be said about it”.
I completely disagree with this, and I am confused as to what made you think I believe that “there’s nothing to be said about [CEV].” I absolutely believe there is a lot to be said about CEV, namely that (for the reasons I gave in some of my previous comments that you are referencing and that I hope I can compile into one large post soon) CEV is theoretically unsound, conceptually incoherent, practically unviable, and should not be the target of any attempt to bring about a great future using AGI (regardless of whether it’s on the first try or not).
That seems to me like the complete opposite of me thinking that there’s nothing to be said about CEV.
Would the idea that a calculator has some pattern, some logical rules that it is implementing via matter, thus be non-physicalist about calculators? A brain follows the rules of reality, with many implications about how certain molecules constrain movement, how these neuron spikes cause hunger, etcetera. There is a logical/computational core to this that can be reimplemented.
I think it would be non-physicalist if (to slightly modify the analogy, for illustrative purposes) you say that a computer program I run on my laptop can be identified with the Python code it implements, because it is not actually what happens.
We can see this as a result of things like single-event upsets: situations in which stray cosmic rays modify the bits in a transistor in the physical entity that runs the code (i.e., the laptop) in such a manner that they fundamentally change the output of the program. So the running of the program (instantiated and embedded in the real, physical world just like a human is) works not on the basis of the lossy model that only takes into account the “software” part, but rather on the “hardware” itself.
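A toy sketch of this point (the function name and values are mine, purely illustrative): a single flipped bit in stored data changes which abstract computation the physical run ends up instantiating.

```python
def flip_bit(x, k):
    """Return integer x with bit k flipped (a toy stand-in for a single-event upset)."""
    return x ^ (1 << k)

# The software-level model says "multiply 5 by 3"; the hardware merely holds bits.
a = 5
result = a * 3               # 15, as the abstract program predicts
corrupted = flip_bit(a, 1)   # a stray ray flips bit 1: 0b101 -> 0b111 == 7
bad_result = corrupted * 3   # 21: the physical run no longer implements "5 * 3"
```

The hardware behaved lawfully throughout; it is only the software-level description that stopped matching.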
You can of course expand the idea of “computation” to say that, actually, it takes into account the stray cosmic rays as well, and in fact it takes into account everything that can affect the output, at which point “computation” stops being a subset of “what happens” and becomes the entirety of it. So if you want to say that the computation necessarily involves the entirety of what is physically there, then I believe I agree, at which point this is no longer the computationalist thesis argued for by Rob, Ruben etc (for example, the corollaries about WBE preserving identity when only an augmented part of the brain’s connectome is scanned no longer hold).
I draw the opposite conclusion from this: the fact that the decision theory posts seem to work on the basis of a computationalist theory of identity makes me think worse of the decision-theory posts.
Why? If I had to guess, I’d point at the posts not often considering indexicality, instead thinking of agents as having a single utility function, which simplifies coordination. (But still, a lot of decision theory doesn’t need to take indexicality into account.)
I see the decision theory posts less as giving new intuitions and more as breaking old ones that are ill-adapted, though that’s partially framing/semantics.
Can you link to some of these? I do not recall seeing anything like this here.
I’ll try to find some, but they’re more likely to be side parts of comment chains than posts, which does make them more challenging to search for. I doubt they’re as in-depth as we’d like, but I think there is work done there, even if I do think the assumption of QM not mattering much is likely correct.
The basic idea is: what would QM give you? If the brain uses it for a random component, why can’t that be replaced with something pseudorandom? (Which is fine from an angle of not seeing determinism as a problem.) If the brain utilizes entangled atoms/neurons/whatever for efficiency, why can’t those be replaced with another method, possibly an impractically inefficient one? Does the brain functionally depend on an arbitrary-precision real for a calculation? Why would it, and what would it matter if it was cut off to N digits?
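As a sketch of the “replace quantum randomness with pseudorandomness” move (a toy model of my own, not a claim about real neurons): a noisy threshold unit only depends on the distribution of its noise source, not on where the bits come from.

```python
import random

def noisy_unit(inputs, weights, noise):
    """Toy threshold unit: fires iff the weighted input sum plus noise exceeds 1.0."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return activation + noise() > 1.0

# A seeded PRNG stands in for whatever "true" quantum randomness might be posited.
# Functionally, the unit's input-output behavior depends only on the noise
# distribution, which the pseudorandom source reproduces.
prng = random.Random(0)
pseudo_noise = lambda: prng.gauss(0.0, 0.1)
fired = noisy_unit([1.0, 0.5], [0.8, 0.4], pseudo_noise)
```

Nothing downstream of the unit could distinguish this substitution without access to the noise generator’s internals.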
Scott Aaronson on Free Will is about more than just FW; he’s arguing against the LW position there, but I don’t consider it a strong argument. See the comments for a bit of discussion.
There’s certainly more, but finding specific comments I’ve read over the years is a challenge.
Everything was determined in the initial configuration of quantum waveforms in the distant past of your lightcone. The experience of time and change is just a side-effect of your embeddedness in this giant static many-dimensional universe.”
I’m not sure I understand the distinction. Even if the true universe is a bunch of freeze-frame slices, time and change still functionally act the same. Given that I don’t remember random nonsense in my past, there’s some form of selection over which freeze-frames are constructed (or, rather, constructed with differing measure). Thus most of my ‘future’ measure is concentrated on freeze-frames that are consistent with what I’ve observed, as that has held true in the past.
Like, what you seem to be saying is Timeless Physics, of which I’d agree more with this statement:
An unchanging quantum mist hangs over the configuration space, not churning, not flowing.
But the mist has internal structure, internal relations; and these contain time implicitly.
The dynamics of physics—falling apples and rotating galaxies—is now embodied within the unchanging mist in the unchanging configuration space.
So I’d agree that computation only makes sense with some notion of time. That there has to be some way it is being stepped forward.
(To me this is an argument in favor of not privileging spatial position in the common teleportation example, but we’ve seemed to move down a level to whether the brain can be implemented at all)
(bits about CEV)
conceptually incoherent
I misworded what I said, sorry. I meant more that you consider it to say/imply nothing meaningful, though you can certainly still argue against it (such as arguing that it isn’t coherent).
I think it would be non-physicalist if (to slightly modify the analogy, for illustrative purposes) you say that a computer program I run on my laptop can be identified with the Python code it implements, because it is not actually what happens.
I would say that the running computer program can be considered an implementation of the abstract Python code.
I agree that this model is missing details. Such as the exact behavior of the transistor, how fast it switches, the exact positions of the atoms, etcetera. That is dependent on the mind considering it, I agree.
The cosmic-ray event would make it so it is no longer an implementation of the abstract Python program. You could expand the consideration to include more of the universe, just as you could expand your model to consider the computer program as an implementation of the Python program with some constraints: that if this specific transistor gets flipped one too many times it will fry, that there’s a slight possibility of a race condition that we didn’t consider at all in our abstract implementation, that there’s a limit to the speed and heat it can operate at, that a cosmic ray could come from these areas of space and hit it with 0.0x% probability, thus disrupting functionality...
It still seems quite reasonable to say it is an implementation of the Python program. I’m open to the argument that there isn’t a completely natural privileged point of consideration from which the computer is implementing the same pattern as another computer, and that the pattern is this Python program. But as I said before, even if this is ultimately somewhat subjective, it still seems to capture quite a lot of the possible ideas?
Like in mathematics, I can have an abstract specification of a sorting algorithm and prove that a Python program implementing a more complicated algorithm (bubblesort, whatever) is equivalent to it. This is missing a lot of details, but that same sort of move is what I’m gesturing at.
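A minimal illustration of that move (my own sketch; the randomized property check stands in for an actual equivalence proof):

```python
import random

def bubble_sort(xs):
    """Concrete implementation, claimed equivalent to the abstract 'sort' spec."""
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

# Check against Python's built-in sorted(), taken here as the abstract spec.
for _ in range(200):
    data = [random.randint(-50, 50) for _ in range(random.randint(0, 15))]
    assert bubble_sort(data) == sorted(data)
```

The two programs differ in every implementation detail, yet implement the same abstract input-output relation, which is the sense of “same computation” at issue.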
It is merely part of a mathematical model that, as I’ve described in response to Ruby earlier, represents a very lossy compression of the underlying physical substrate
I can understand why you think that just the neurons/connections is too lossy, but I’m very skeptical of the idea that we’d need all of the amplitudes related to the brain/mind. A priori that seems unlikely, what with how little fundamentally turns on the specifics of QM, and those parts that do can all be implemented specially, as I discussed some above.
(That also reminds me of another reason why people sometimes just mention neurons/connections, which I forgot in my first reply: because they assume you’ve gotten the basic brain architecture that is shared, and just need to plug in the components that vary.)
I disagree that this distinction between our model and reality has been lost; rather, it has been deemed not too significant, or something you’d study in depth when actually performing brain uploads.
What is “the computation”? Can we try to taboo that word?
As I said in my previous comment, and earlier in this one, I’m open to the idea of computation being subjective instead of a purely natural concept. Though I’d expect that there’s not that many free variables in pinning down the meaning.
As for tabooing, I think that is kind of hard, as one very simple way of viewing computation is “doing things according to rules”.
You have an expression 5∗3. This is in your mind and relies on subjective interpretations of what the symbols mean.
You implement that abstract program (that abstract doing-things, a chain of rules of inference, a way that things interact) into a computer. The transistors were utilized because they matched the conceptual idea of how switches should function, but they have more complexities than the abstract switch, which introduces design constraints throughout the entire chip.
The chip’s ALU implements this through a bunch of transistors, which are more fundamentally made up of silicon in specific ways that regulate how electricity moves. There’s layers and layers of complexities even as it processes the specific binary representations of the two numbers and shifts them in the right way.
But, despite all this, all that fundamental behavior, all the quantum effects like tunneling which restrict size and positioning, it is computing the answer.
You see the result, 15, and are pretty confident that no differences between your simple model of the computer and reality occurred.
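The layers just described can be caricatured in a few lines. Shift-and-add is roughly what a simple ALU does with the binary representations (a sketch of the general idea, not a claim about any particular chip):

```python
def shift_add_multiply(a, b):
    """Multiply non-negative ints by adding shifted copies of a,
    one for each set bit of b (the 'shifts them in the right way' step)."""
    result = 0
    while b:
        if b & 1:        # lowest bit of b is set: add the current shifted copy of a
            result += a
        a <<= 1          # move a up to the next binary weight
        b >>= 1          # consume the lowest bit of b
    return result

# 5 * 3 comes out as 15 regardless of how the layers below implement the steps.
```

The same abstract rule survives whether the shifting is done by Python, by transistors, or by water pipes, which is the reimplementable “logical core” being gestured at.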
This is where I think arguments about the subjectivity of computation can be made. Introduce a person who is talking about a different abstract concept; they encode it as binary because that’s what you do, and they have an operation that looks like multiplication and produces the same answer for that binary encoding. Then the interpretation of the final binary output is dependent on the mind, because each mind has a different idea of what is being computed. (The abstract ideas differ, even if those parts match up.)
But I think a lot of those cases are non-natural, which is part of why I think that even if computation doesn’t make sense as a fundamental thing or a completely natural concept, it still covers a wide area of concern and is a useful tool; similar to how the distinction between values and beliefs is a useful tool even when strictly discussing humans, but even more so. So then, the two calculators are implementing the same abstract algorithm in their silicon, and we fall back to two questions: 1) is the mind within the edge-cases, such that it is not entirely meaningful to talk about an abstract program it is implementing, and 2) even if two instances share the same computation, what does that imply?
I think there could and should be more discussion of the complications around computation, with the easy-to-confuse interaction between levels: ‘completely abstract idea’ (platonism?); ‘abstract idea represented in the mind’ (what I’m calling abstract; subjective); ‘the physical way that all the parts of this structure behave’ (excessive detail but as accurate as possible; objective); and ‘the way these rules implement a specific abstract idea’ (chosen because of abstract ideas, as a transistor is chosen because it functions like a switch, and the computer program is compiled in such a way because it matches the textual code you wrote, which matches the abstract idea in your own mind; objective in that it is behaving in such a way, with a possibly subjective interpretation of the implications of that behavior).
We could also view computation through the lens of Turing machines, but then that raises the argument of “what about all these quantum shenanigans, those are not computable by a Turing machine”. I’d say that finite approximations get you almost all of what you want. Then there’s the objection that “Turing machines aren’t available as a fundamental thing”, which is true, and that “Turing machines assume a privileged encoding”, which is part of what I was trying to discuss above.
(I got kinda rambly in this last section; hopefully I haven’t left any facets of the conversation on a branch I forgot to jump back to and complete.)
We could also view computation through the lens of Turing machines, but then that raises the argument of “what about all these quantum shenanigans, those are not computable by a Turing machine”.
I enjoyed reading your comment, but just wanted to point out that a quantum algorithm can be implemented by a classical computer, just with a possibly exponential slowdown. The thing that breaks down is that any O(f(n)) algorithm on any classical computer is at worst O(f(n)^2) on a Turing machine; for quantum algorithms on quantum computers with f(n) runtime, the same decision problem can be decided in (I think) O(2^{f(n)}) runtime on a Turing machine.
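To illustrate the simulability point (a toy sketch of my own; the function name is mine): a classical program can track an n-qubit statevector exactly, at the cost of storing and updating 2^n amplitudes, which is where the exponential overhead lives.

```python
import math

def apply_hadamard(state, qubit):
    """Apply a Hadamard gate to one qubit of a statevector
    (a list of 2**n real amplitudes). The simulation is exact,
    but every gate touches all 2**n entries."""
    h = 1.0 / math.sqrt(2.0)
    new = list(state)
    for i in range(len(state)):
        if not (i >> qubit) & 1:      # visit each amplitude pair once
            j = i | (1 << qubit)
            a, b = state[i], state[j]
            new[i] = h * (a + b)
            new[j] = h * (a - b)
    return new

# One qubit, starting in |0>: H gives the equal superposition (|0> + |1>)/sqrt(2).
psi = apply_hadamard([1.0, 0.0], 0)
```

Nothing uncomputable happens; the quantum behavior is reproduced classically, just with memory and time that scale exponentially in the number of qubits.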
While discussion on personal identity has mostly not received a single overarching post focusing solely on arguing all the details, it has been discussed to varying degrees of possible contention points. Thou Art Physics which focuses on getting the idea that you are made up of physics into your head, Identity Isn’t in Specific Atoms which tries to dissolve the common intuition of the specific basic atoms mattering, Timeless Identity which is a culmination of various elements of those posts into the idea that even if you duplicate a person they both are still ‘you’. There is also more, some of which you’ve linked, but I consider it strange to say that there’s a lack of discussion. The sequence that the posts I’ve linked are a part of have other discussions, though I agree that they are often from the position of arguing against a baseline of dualism, but I believe they have many points that are relevant to an argument for computationalism. I think there is a lack of discussion about the very specific points you have a tendency to raise, but as I’ll discuss, I find myself confused about their relevancy to varying degrees.
There's also the decision-theory posting that LW enjoys, which encourages this class of view. Decision problems like Newcomb's Paradox or Parfit's hitchhiker emphasize that "you can be instantiated inside a simulation to predict your actions, and you should act as if you (roughly) control the simulation's actions, because of the similarity of your computational implementations". Of course, this works even without assuming the simulations are conscious, but I do think it has led to clearer consideration, because it helps break past people's intuitions. Those intuitions were not made for the scenarios we face, or will potentially have to face.
Because most often the people suggesting such are dualists, or hold many similar ideas even if they discuss them in an "I am uncertain" manner. I agree Rob could've given a better reply, but it was a reasonable assumption. (I personally found Andesolde's argument confused, with the later parts focusing on first-person subjective experience in a way I don't think is really useful to consider. There are uncertainties in there, but beyond the idea that the mind could be importantly quantum in some way, it didn't seem that relevant.)
I agree it hasn't been argued in depth, but there have definitely been arguments about the extent to which QM affects the brain. The usual conclusion was that the effect is minor, and/or that we have no evidence for believing it necessary. I would need a decently strong argument that QM is in some way computationally essential.
Most people understand that more than just the electrical signals matter, and there's plenty of uncertainty about the level of detail needed to simulate/model the brain. But computationalism doesn't imply that only the electrical signals matter; it implies that whatever makes up the computation matters, which can be done via tiny molecules and electrons, water pipes, or circuitry. Simplifying a full molecular simulation down to its functional implications is just one example of how far we can simplify, which I believe should extend pretty far.
I agree that people shouldn't assume that just neurons/connections are enough, but I doubt that is a strongly held belief; nor is it a required sub-belief of computationalism. You assume too much about Bensinger's reply when he didn't respond, especially as he was responding to a subargument in the whole chain.
As well, the quoted sentence by Herd is very general — allowing both the neuron connections and molecular behavior. (There’s also the fact that people often handwave over the specifics of what part of the brain you’re extracting, because they’re talking about the general idea through some specific example that people often think about. Such as a worm’s neurons.)
For example, for two calculators, wouldn’t you agree with a description of them as having the same ‘pattern’ even if all the atoms aren’t in the same position relative to a table? You agree-reacted on one of dirk’s comments:
Would the idea that a calculator has some pattern, some logical rules that it is implementing via matter, thus be non-physicalist about calculators? A brain follows the rules of reality, with many implications about how certain molecules constrain movement, how these neuron spikes cause hunger, etcetera. There is a logical/computational core to this that can be reimplemented.
Why shouldn't we decide based on a model/category? Just as there are presumably edge-cases to what counts as a 'human' or 'person', there may well be strange setups where we can't reasonably determine, to our liking, whether we consider them to computably implement a person, a chihuahua, or the weather of Jupiter.
We could try to develop a theory of identity down to the last atom, still operating on a model but at least an extremely specific model, which would presumably force us to narrow in on confusing edge-cases. This would be interesting to do once we have the technology, though I expect there to be edge-cases no matter what, where our values aren’t perfectly defined, which might mean preserving option value. I’m also skeptical that most methods present a very lossy compression even if we assume classical circuits. Why would it? (Or, if you’re going to raise the idea of only getting some specific sub-class of neuron information, then sure, that probably isn’t enough, but I don’t care about that)
From this angle where you believe that computation is not fundamental or entirely well-defined, you can simplify the computationalist proposal as “merely” applying in a very large class of cases. Teleporters have no effect on personal identity due to similarity in atomic makeup up to some small allowance for noise (whether simple noise, or because we can’t exactly copy all the quantum parts; I don’t care if my lip atoms are slightly adjusted). Cloning does not have a strictly defined “you” and “not-you”. Awakening from cryogenics counts as a continuation of you. A simulation implementing all the atomic interactions of your mind is very very likely to be you, and a simulation that has simplified many aspects of that down is also still very likely to be you.
Though there are definitely people who believe that the universe can fundamentally be considered computation, which I find plausible, especially given the lack of other lenses that aren't just "reality is". With them, your objection does not work without further argumentation.
Going back to the calculator example, you would need to provide argumentation for why the essential parts of the brain can’t be implemented computationally.
Concepts can still be useful categorizations even if they aren't hard and fast. Beliefs are often distinct from values in humans. They are vague and intertwine with each other: a belief forming a piece of value that doesn't fade away even once the belief is proven false, a value endorsing a belief for no reason... They are still not one and the same, and they do behave differently. I also don't see the relevance of this to the statement. I agree with what they said. I value my pattern of beliefs, memories, and values. I don't care about my specific spatial position for identity (except insofar as I don't want to be in a star), or whether I'm solely in baseline reality. Your objections to CEV also seem to me to follow a similar pattern, where you go from "this does not have a perfect foundational backing" to imply "it has no meaning, and there's nothing to be said about it". The consideration of path-dependency in CEV has been raised before, and it is an area that would be great to understand more. My values say that I meta-value my beliefs being closer to the truth. There are ambiguities in this area. What about beliefs affecting my values? There's more uncertainty in that region of what I wish to allow.
I'd need a whole extra long comment to respond to all the various other parts of your comment chain, such as indexicality, or the part that goes along the lines of "professed values are not real". That seems decently false, overly cynical, and also not what Byrnes' linked post tries to imply. I'd say professed values are often what you tend towards, but your basic drives are often strong enough to stall out methods like "spend long hours solving some problem" through many small opportunities. If you were given a big button to do something you profess to value, you'd press it.
This also raises the question of: Why should I care that the human motivational system has certain basic drives driving it forward? Give me a big button and I’d alter my basic drives to be more in-line with my professed values. The basic drives are short-sighted. (Well, I’d prefer to wait until superintelligent help, because there’s lots of ways to mess that up) Of course, that I don’t have the big button has practical implications, but I’m primarily arguing against the cynical denial of having any other values than what these basic drives allow.
(I don't entirely like my comment; it could be better. I'd suggest breaking the parent question-post up into a dozen smaller questions if you want discussion, as the many facets could each have long comments dedicated to them. Which is part of why there's no single post! You're touching on everything from the theory of how the universe works, to how real our professed preferences are, to whether our models of reality are useful enough for theories of identity, indexicality, and whether it makes sense to talk about a logical pattern. Then there are things like andesolde's posts that you cite, but which I'm not sure I rely on, where I'd have various objections to their idea of reality as subjective-first. I'll probably find more I dislike about my comment, or realize I could have worded or explained things better, once I read back over it with fresh eyes.)
I appreciate you linking these posts (which I have read and almost entirely agree with), but what they are doing (as you mentioned) is arguing against dualism, or in favor of physicalism, or against the view that classical (non-QM) entities like atoms have their own identity and are changed when copied (in a manner that can influence the fundamental identity of a being like a human).
What there has been a lack of discussion of is “having already accepted physicalism (and reductionism etc), why expect computationalism to be the correct theory?” None of those posts argue directly for computationalism; you can say they argue indirectly for it (and thus provide Bayesian evidence in its favor) by arguing against common opposing views, but I have already been convinced that those views are wrong.
And, as I have written before, physicalism-without-computationalism seems much more faithful to the core of physicalism (and to the reasons that convinced me of it in the first place) than computationalism does.
One man’s modus ponens is another man’s modus tollens. I agree that the LW-style decision theory posting encourages this type of thinking, and you seem to infer that the high-quality reasoning in the decision theory posts implies that they give good intuitions about the philosophy of identity.
I draw the opposite conclusion from this: the fact that the decision theory posts seem to work on the basis of a computationalist theory of identity makes me think worse of the decision-theory posts.
Can you link to some of these? I do not recall seeing anything like this here.
What is “the computation”? Can we try to taboo that word? My comment to Seth Herd is relevant here (“The basic concept of computation at issue here is a feature of the map you could use to approximate reality (i.e., the territory) . It is merely part of a mathematical model that, as I’ve described in response to Ruby earlier, represents a very lossy compression of the underlying physical substrate. [...] So when you talk about a “pattern instantiated by physics as a pure result of how physics works”, you’re not pointing to anything meaningful in the territory, rather only something that makes sense in the particular ontology you have chosen to use to view it through, a frame that I have explained my skepticism of already.) You seem to be thinking about computation as being some sort of ontologically irreducible feature of reality that can exist independently of a necessarily lossy and reductive mathematical model that tries to represent it, which doesn’t make much sense to me.
I don’t know if this will be helpful to you or not in terms of clarifying my thinking here, but I see this point here by you (asking “what makes up the computation”) as being absolutely analogous to asking “what makes up causality,” to which my response is, as Dagon said, that at the most basic level, I suspect “there’s no such thing as causation, and maybe not even time and change. Everything was determined in the initial configuration of quantum waveforms in the distant past of your lightcone. The experience of time and change is just a side-effect of your embeddedness in this giant static many-dimensional universe.”
Well, we can, but as I tried to explain above, I see this model as being very lossy and unjustifiably privileging the idea of computation, which does not seem to make sense to me as a feature of the territory as opposed to the map.
I completely disagree with this, and I am confused as to what made you think I believe that “there’s nothing to be said about [CEV].” I absolutely believe there is a lot to be said about CEV, namely that (for the reasons I gave in some of my previous comments that you are referencing and that I hope I can compile into one large post soon) CEV is theoretically unsound, conceptually incoherent, practically unviable, and should not be the target of any attempt to bring about a great future using AGI (regardless of whether it’s on the first try or not).
That seems to me like the complete opposite of me thinking that there’s nothing to be said about CEV.
I think it would be non-physicalist if (to slightly modify the analogy, for illustrative purposes) you say that a computer program I run on my laptop can be identified with the Python code it implements, because it is not actually what happens.
We can see this as a result of stuff like single-event upsets, i.e., for example, situations in which stray cosmic rays modify the bits in a transistor in the physical entity that runs the code (i.e., the laptop) in such a manner that it fundamentally changes the output of the program. So the running of the program (instantiated and embedded in the real, physical world just like a human is) works not on the basis of the lossy model that only takes into account the “software” part, but rather on the “hardware” itself.
You can of course expand the idea of "computation" to say that, actually, it takes into account the stray cosmic rays as well, and in fact everything that can affect the output, at which point "computation" stops being a subset of "what happens" and becomes the entirety of it. So if you want to say that the computation necessarily involves the entirety of what is physically there, then I believe I agree, at which point this is no longer the computationalist thesis argued for by Rob, Ruben etc (for example, the corollaries about WBE preserving identity when only an augmented part of the brain's connectome is scanned no longer hold).
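A toy sketch of the single-event-upset point (Python; the names are mine and purely illustrative): a single flipped bit in a stored operand changes the program's output, even though the "software"-level description never mentions cosmic rays.

```python
# Toy illustration: the "software" level says we compute x + y,
# but the output actually depends on the physical bit pattern,
# which a stray cosmic ray can flip.

def add(x, y):
    return x + y

def flip_bit(value, bit_index):
    # Model a single-event upset: one bit of the stored operand flips.
    return value ^ (1 << bit_index)

x, y = 5, 3
print(add(x, y))               # the abstract program's answer: 8
x_corrupted = flip_bit(x, 1)   # a cosmic ray flips bit 1 of x: 5 -> 7
print(add(x_corrupted, y))     # the physical system's answer: 10
```

The "software" description and the physical outcome come apart exactly when something outside the abstract model (here, the injected bit flip) intervenes.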
Strongly seconding this.
Why? If I try to guess, I'd point at decision theory not often taking indexicality into consideration, merely assuming a single utility function, which simplifies coordination. (But still, a lot of decision theory doesn't need to take indexicality into account...)
I see the decision theory posts as less as giving new intuitions, and more breaking old ones that are ill-adapted, though that’s partially framing/semantics.
I'll try to find some, but they're more likely to be side parts of comment chains rather than posts, which makes them more challenging to search for. I doubt they're as in-depth as we'd like, but I think there is work done there, even if I do think the assumption of QM not mattering much is likely correct.
The basic idea is: what would it give you? If the brain uses it for a random component, why can't that be replaced with something pseudorandom? That's fine from an angle of not seeing determinism as a problem. If the brain utilizes entangled atoms/neurons/whatever for efficiency, why can't those be replaced with another method, possibly an impractically inefficient one? Does the brain functionally depend on an arbitrary-precision real number for a calculation? Why would it, and what would be the matter if it was cut off to N digits?
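As a toy sketch of the "swap in pseudorandomness" move (Python; illustrative, the function names are mine): a downstream process that only cares about the statistics of its random input behaves the same whether the bits come from an OS entropy source or a seeded, fully deterministic PRNG.

```python
import random

def noisy_decision(bits):
    # Some downstream process that only cares about the statistics
    # of its random input, not its physical origin.
    return sum(bits) / len(bits)

true_random = random.SystemRandom()  # OS entropy, stand-in for "real" randomness
pseudo = random.Random(12345)        # deterministic, seeded PRNG

a = noisy_decision([true_random.randrange(2) for _ in range(10_000)])
b = noisy_decision([pseudo.randrange(2) for _ in range(10_000)])
# Both hover near 0.5; the consumer can't tell the sources apart.
print(a, b)
```

This obviously doesn't settle whether the brain's use of randomness is like this; it only illustrates the shape of the replacement argument.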
Somewhat Eliezer’s Comment Here and some of the other pieces
Does davidad’s uploading moonshot work which has more specifics about what davidad thinks is relevant to uploading
With this also being a good article to read as a reply
QM Has nothing to do with consciousness meh
Scott Aaronson on Free Will: about more than just FW; he's arguing against the LW position, but I don't consider it a strong argument. See the comments for a bit of discussion.
Quotes and Notes on Scott Aaronson’s has more positive leaning commentary
There’s certainly more, but finding specific comments I’ve read over the years is a challenge.
I'm not sure I understand the distinction. Even if the true universe is a bunch of freeze-frame slices, time and change still functionally act the same. Given that I don't remember random nonsense in my past, there's some form of selection over which freeze-frames are constructed, or rather, they exist with differing measure. Thus most of my 'future' measure is concentrated on freeze-frames that are consistent with what I've observed, as that has held true in the past.
Like, what you seem to be saying is Timeless Physics, of which I’d agree more with this statement:
So I’d agree that computation only makes sense with some notion of time. That there has to be some way it is being stepped forward. (To me this is an argument in favor of not privileging spatial position in the common teleportation example, but we’ve seemed to move down a level to whether the brain can be implemented at all)
I misworded what I said, sorry. I meant more that you consider it to say/imply nothing meaningful, though you can certainly still argue against it (such as arguing that it isn't coherent).
I would say that the running computer program can be considered an implementation of the abstract Python code. I agree that this model is missing details, such as the exact behavior of the transistors, how fast they switch, the exact positions of the atoms, etcetera, and that it is dependent on the mind considering it. The cosmic ray event would make it so that it is no longer an implementation of the abstract Python program. You could expand the consideration to include more of the universe, just as you could expand your model to consider the computer program as an implementation of the Python program with some constraints: that if this specific transistor gets flipped one too many times it will fry; that there's a slight possibility of a race condition we didn't consider at all in our abstract implementation; that there's a limit to the speed and heat it can operate at; that a cosmic ray could come from these areas of space and hit it with 0.0x% probability, disrupting functionality...
It still seems quite reasonable to say it is an implementation of the Python program. I'm open to the argument that there isn't a completely natural, privileged point of consideration from which the computer is implementing the same pattern as another computer, and that the pattern is this Python program. But as I said before, even if this is ultimately subjective to some degree, it still seems to capture quite a lot of the possible ideas?
Like in mathematics, I can have an abstract implementation of a sorting algorithm and prove that a python program for a more complicated algorithm (bubblesort, whatever) is equivalent. This is missing a lot of details, but that same sort of move is what I’m gesturing at.
I can understand why you think that just the neurons/connections is too lossy, but I'm very skeptical of the idea that we'd need all of the amplitudes related to the brain/mind. A priori that seems unlikely, what with how little fundamentally turns on the specifics of QM; the parts that do could each be implemented specially, as I discussed above.
(That also reminds me of another reason people sometimes just mention neurons/connections, which I forgot in my first reply: because they assume you've gotten the basic brain architecture that is shared, and just need to plug in the components that vary.)
I disagree that this distinction between our model and reality has been lost; rather, it has been deemed not too significant, or as something you'd study in depth when actually performing brain uploads.
As I said in my previous comment, and earlier in this one, I’m open to the idea of computation being subjective instead of a purely natural concept. Though I’d expect that there’s not that many free variables in pinning down the meaning. As for tabooing, I think that is kind of hard, as one very simple way of viewing computation is “doing things according to rules”.
You have an expression 5∗3. This is in your mind and relies on subjective interpretations of what the symbols mean. You implement that abstract program (that abstract doing-things, a chain of rules of inference, a way that things interact) into a computer. The transistors were utilized because they matched the conceptual idea of how switches should function, but they have more complexities than the abstract switch, which introduces design constraints throughout the entire chip. The chip’s ALU implements this through a bunch of transistors, which are more fundamentally made up of silicon in specific ways that regulate how electricity moves. There’s layers and layers of complexities even as it processes the specific binary representations of the two numbers and shifts them in the right way. But, despite all this, all that fundamental behavior, all the quantum effects like tunneling which restrict size and positioning, it is computing the answer. You see the result, 15, and are pretty confident that no differences between your simple model of the computer and reality occurred.
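The ALU story can be sketched in a few lines (Python standing in for the hardware; purely illustrative): multiplication carried out by shifts and adds on the binary representations, with the same answer, 15, emerging from a very different-looking set of operations than the abstract "5 times 3".

```python
def shift_add_multiply(a, b):
    # Multiply non-negative integers the way a simple ALU might:
    # inspect each bit of b, shifting and adding partial products.
    result = 0
    while b:
        if b & 1:     # lowest bit of b is set: add the current shifted a
            result += a
        a <<= 1       # shift the partial product left
        b >>= 1       # move on to the next bit of b
    return result

print(shift_add_multiply(5, 3))  # -> 15, same answer as the abstract 5 * 3
```

And of course, each of these Python operations is itself realized by layers of transistors, silicon, and quantum effects, which is exactly the layering the paragraph above describes.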
This is where I think arguments about the subjectivity of computation can be made. Introduce a person who is talking about a different abstract concept; they encode it as binary, because that's what you do, and they have an operation that looks like multiplication and produces the same answer for that binary encoding. Then the interpretation of that final binary output is dependent on the mind, because the mind has a different idea of what they're computing (the abstract idea being different, even if those parts match up). But I think a lot of those cases are non-natural, which is part of why I think that even if computation doesn't make sense as a fundamental thing or a completely natural concept, it still covers a wide area of concern and is a useful tool, similar to how the distinction between values and beliefs is a useful tool even when strictly discussing humans, but even more so. So then, the two calculators are implementing the same abstract algorithm in their silicon, and we fall back to two questions: 1) is the mind within the edge-cases, such that it is not entirely meaningful to talk about an abstract program it is implementing, and 2) okay, even if they share the same computation, what does that imply?
I think there could and should be more discussion of the complications around computation, with the easy-to-confuse interaction between levels: 'completely abstract idea' (platonism?); 'abstract idea represented in the mind' (what I'm talking about with abstract; subjective); 'the physical way that all the parts of this structure behave' (excessive detail but as accurate as possible; objective); 'the way these rules carry out a specific abstract idea' (chosen because of abstract ideas: a transistor is chosen because it functions like a switch, and the computer program is compiled in such a way because it matches the textual code you wrote, which matches the abstract idea in your own mind; objective in that it is behaving in such a way, with a possibly subjective interpretation of the implications of that behavior).
We could also view computation through the lens of Turing Machines, but then that raises the argument of “what about all these quantum shenanigans, those are not computable by a turing machine”. I’d say that finite approximations get you almost all of what you want. Then there’s the objection of “turing machines aren’t available as a fundamental thing”, which is true, and “turing machines assume a privileged encoding”, which is part of what I was trying to discuss above.
(I got kinda rambly in this last section, hopefully I haven’t left any facets of the conversation with a branch I forgot to jump back to in order to complete)
I enjoyed reading your comment, but just wanted to point out that a quantum algorithm can be implemented by a classical computer, just with a possibly exponential slowdown. The thing that breaks down is the overhead: any O(f(n)) algorithm on a classical computer runs in at worst O(f(n)^2) on a Turing machine, whereas for a quantum algorithm with f(n) runtime on a quantum computer, the same decision problem can (I think) be decided in O(2^{f(n)}) time on a Turing machine.
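To make the exponential overhead concrete, here's a minimal state-vector simulation (Python; a sketch, not an efficient simulator): a classical machine can track an n-qubit computation exactly, but must store and update 2^n amplitudes for each gate it applies.

```python
import math

def apply_hadamard(state, target, n_qubits):
    # Apply a Hadamard gate to one qubit of an n-qubit state vector.
    # The classical cost is O(2^n) per gate: the exponential overhead.
    new_state = [0.0] * len(state)
    h = 1 / math.sqrt(2)
    for index, amp in enumerate(state):
        bit = (index >> target) & 1
        flipped = index ^ (1 << target)
        if bit == 0:
            new_state[index] += h * amp    # H|0> = (|0> + |1>) / sqrt(2)
            new_state[flipped] += h * amp
        else:
            new_state[flipped] += h * amp  # H|1> = (|0> - |1>) / sqrt(2)
            new_state[index] -= h * amp
    return new_state

n = 3
state = [0.0] * (2 ** n)
state[0] = 1.0                  # start in |000>
for q in range(n):
    state = apply_hadamard(state, q, n)
# Uniform superposition: every amplitude is 1/sqrt(8) ~ 0.354.
print([round(a, 3) for a in state])
```

The list doubles with every added qubit, which is exactly the classical blow-up being described: the computation is still Turing-computable, just exponentially expensive.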