While personal identity has mostly not received a single overarching post arguing all the details, its various points of contention have been discussed to varying degrees. Thou Art Physics focuses on getting the idea that you are made up of physics into your head; Identity Isn’t in Specific Atoms tries to dissolve the common intuition that the specific basic atoms matter; Timeless Identity is a culmination of various elements of those posts into the idea that even if you duplicate a person, both copies are still ‘you’. There is more besides, some of which you’ve linked, but I consider it strange to say that there’s a lack of discussion.
I appreciate you linking these posts (which I have read and almost entirely agree with), but what they are doing (as you mentioned) is arguing against dualism, or in favor of physicalism, or against the view that classical (non-QM) entities like atoms have their own identity and are changed when copied (in a manner that can influence the fundamental identity of a being like a human).
What there has been a lack of discussion of is “having already accepted physicalism (and reductionism etc), why expect computationalism to be the correct theory?” None of those posts argue directly for computationalism; you can say they argue indirectly for it (and thus provide Bayesian evidence in its favor) by arguing against common opposing views, but I have already been convinced that those views are wrong.
And, as I have written before, physicalism-without-computationalism seems much more faithful to the core of physicalism (and to the reasons that convinced me of it in the first place) than computationalism does.
There’s also the facet of decision theory posting that LW enjoys, which encourages this class of view. Decision problems like Newcomb’s Paradox or Parfit’s hitchhiker emphasize the idea that “you can be instantiated inside a simulation to predict your actions, and you should act as though that you (roughly) controls their actions, because of the similarity of your computational implementations.” Of course, this works even without assuming the simulations are conscious, but I do think it has led to clearer consideration, because it helps break past people’s intuitions. Those intuitions are not made for the scenarios that we face, or will potentially have to face.
One man’s modus ponens is another man’s modus tollens. I agree that the LW-style decision theory posting encourages this type of thinking, and you seem to infer that the high-quality reasoning in the decision theory posts implies that they give good intuitions about the philosophy of identity.
I draw the opposite conclusion from this: the fact that the decision theory posts seem to work on the basis of a computationalist theory of identity makes me think worse of the decision-theory posts.
I agree it hasn’t been argued in depth — but there have definitely been arguments about the extent to which QM affects the brain. The usual conclusion was that the effect is minor, and/or that we have no evidence for believing it necessary.
Can you link to some of these? I do not recall seeing anything like this here.
it implies that whatever makes up the computation matters
What is “the computation”? Can we try to taboo that word? My comment to Seth Herd is relevant here (“The basic concept of computation at issue here is a feature of the map you could use to approximate reality (i.e., the territory). It is merely part of a mathematical model that, as I’ve described in response to Ruby earlier, represents a very lossy compression of the underlying physical substrate. [...] So when you talk about a ‘pattern instantiated by physics as a pure result of how physics works’, you’re not pointing to anything meaningful in the territory, rather only something that makes sense in the particular ontology you have chosen to use to view it through, a frame that I have explained my skepticism of already.”) You seem to be thinking about computation as being some sort of ontologically irreducible feature of reality that can exist independently of a necessarily lossy and reductive mathematical model that tries to represent it, which doesn’t make much sense to me.
I don’t know if this will be helpful to you or not in terms of clarifying my thinking here, but I see this point here by you (asking “what makes up the computation”) as being absolutely analogous to asking “what makes up causality,” to which my response is, as Dagon said, that at the most basic level, I suspect “there’s no such thing as causation, and maybe not even time and change. Everything was determined in the initial configuration of quantum waveforms in the distant past of your lightcone. The experience of time and change is just a side-effect of your embeddedness in this giant static many-dimensional universe.”
Why shouldn’t we decide based on a model/category?
Well, we can, but as I tried to explain above, I see this model as being very lossy and unjustifiably privileging the idea of computation, which does not seem to make sense to me as a feature of the territory as opposed to the map.
Your objections to CEV also seem to me to follow a similar pattern as this, where you go “this does not have a perfect foundational backing” to thus imply “it has no meaning, and there’s nothing to be said about it”.
I completely disagree with this, and I am confused as to what made you think I believe that “there’s nothing to be said about [CEV].” I absolutely believe there is a lot to be said about CEV, namely that (for the reasons I gave in some of my previous comments that you are referencing and that I hope I can compile into one large post soon) CEV is theoretically unsound, conceptually incoherent, practically unviable, and should not be the target of any attempt to bring about a great future using AGI (regardless of whether it’s on the first try or not).
That seems to me like the complete opposite of me thinking that there’s nothing to be said about CEV.
Would the idea that a calculator has some pattern, some logical rules that it is implementing via matter, thus be non-physicalist about calculators? A brain follows the rules of reality, with many implications about how certain molecules constrain movement, how these neuron spikes cause hunger, etcetera. There is a logical/computational core to this that can be reimplemented.
I think it would be non-physicalist if (to slightly modify the analogy, for illustrative purposes) you say that a computer program I run on my laptop can be identified with the Python code it implements, because it is not actually what happens.
We can see this as a result of stuff like single-event upsets, i.e., for example, situations in which stray cosmic rays modify the bits in a transistor in the physical entity that runs the code (i.e., the laptop) in such a manner that it fundamentally changes the output of the program. So the running of the program (instantiated and embedded in the real, physical world just like a human is) works not on the basis of the lossy model that only takes into account the “software” part, but rather on the “hardware” itself.
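The point about single-event upsets can be sketched as a toy example (hypothetical code of my own, not anything from the discussion): flip one bit of the physical state and the output changes, even though the “software” never did.

```python
# Toy model of a single-event upset: the program's output depends on the
# physical bits, not just on the source code. A one-bit flip (e.g. from a
# stray cosmic ray) changes the result while the "software" is untouched.

def behavior(flag_byte: int) -> str:
    # The program branches on the lowest bit of a status byte in memory.
    return "path A" if flag_byte & 0b00000001 else "path B"

clean = 0b00000000           # the byte as the software-level model assumes it
upset = clean ^ 0b00000001   # the same byte after a cosmic-ray bit flip

print(behavior(clean))  # path B
print(behavior(upset))  # path A
```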
You can of course expand the idea of “computation” to say that, actually, it takes into account the stray cosmic rays as well, and in fact everything that can affect the output, at which point “computation” stops being a subset of “what happens” and becomes the entirety of it. So if you want to say that the computation necessarily involves the entirety of what is physically there, then I believe I agree, at which point this is no longer the computationalist thesis argued for by Rob, Ruben, etc. (for example, the corollaries about WBE preserving identity when only an augmented part of the brain’s connectome is scanned no longer hold).
One man’s modus ponens is another man’s modus tollens. I agree that the LW-style decision theory posting encourages this type of thinking, and you seem to infer that the high-quality reasoning in the decision theory posts implies that they give good intuitions about the philosophy of identity.
I draw the opposite conclusion from this: the fact that the decision theory posts seem to work on the basis of a computationalist theory of identity makes me think worse of the decision-theory posts.
Why? If I try to guess, I’d point at their not often considering indexicality, merely thinking of the copies as having a single utility function, which simplifies coordination. (But still, a lot of decision theory doesn’t need to take indexicality into account.)
I see the decision theory posts less as giving new intuitions and more as breaking old ones that are ill-adapted, though that’s partially framing/semantics.
Can you link to some of these? I do not recall seeing anything like this here.
I’ll try to find some, but they’re more likely to be side parts of comment chains rather than posts, which does make them more challenging to search for. I doubt they’re as in-depth as we’d like, but I think there is work done there, even if I do think the assumption that QM doesn’t matter much is likely correct.
The basic idea is: what would it give you? If the brain uses it for a random component, why can’t that be replaced with something pseudorandom? (Which is fine from an angle of not seeing determinism as a problem.) If the brain utilizes entangled atoms/neurons/whatever for efficiency, why can’t those be replaced with another method, possibly an impractically inefficient one? If the brain functionally depends on an arbitrary-precision real number for a calculation — why would it, and what would be the matter if it were cut off to N digits?
Somewhat Eliezer’s Comment Here, and some of the other pieces.
Does davidad’s uploading moonshot work, which has more specifics about what davidad thinks is relevant to uploading.
With this as also a good article to read as a reply.
QM Has nothing to do with consciousness (meh).
Scott Aaronson on Free Will, which is about more than just free will; he’s arguing against the LW position, but I don’t consider it a strong argument — see the comments for a bit of discussion.
Quotes and Notes on Scott Aaronson’s has more positive-leaning commentary.
There’s certainly more, but finding specific comments I’ve read over the years is a challenge.
Everything was determined in the initial configuration of quantum waveforms in the distant past of your lightcone. The experience of time and change is just a side-effect of your embeddedness in this giant static many-dimensional universe.”
I’m not sure I understand the distinction. Even if the true universe is a bunch of freeze-frame slices, time and change still functionally act the same. Given that I don’t remember random nonsense in my past, there’s some form of selection over which freeze-frames are constructed — or rather, constructed with differing measure. Thus most of my ‘future’ measure is concentrated on freeze-frames that are consistent with what I’ve observed, as that has held true in the past.
What you seem to be describing is Timeless Physics, and I’d agree more with this statement from it:
An unchanging quantum mist hangs over the configuration space, not churning, not flowing.
But the mist has internal structure, internal relations; and these contain time implicitly.
The dynamics of physics—falling apples and rotating galaxies—is now embodied within the unchanging mist in the unchanging configuration space.
So I’d agree that computation only makes sense with some notion of time. That there has to be some way it is being stepped forward.
(To me this is an argument in favor of not privileging spatial position in the common teleportation example, but we seem to have moved down a level, to whether the brain can be implemented at all.)
(bits about CEV)
conceptually incoherent
I misworded what I said, sorry. I meant more that you consider it to say/imply nothing meaningful, but that you can certainly still argue against it (such as arguing that it isn’t coherent).
I think it would be non-physicalist if (to slightly modify the analogy, for illustrative purposes) you say that a computer program I run on my laptop can be identified with the Python code it implements, because it is not actually what happens.
I would say that the running computer program can be considered an implementation of the abstract Python code.
I agree that this model is missing details, such as the exact behavior of the transistors, how fast they switch, the exact positions of the atoms, et cetera. That is dependent on the mind considering it, I agree.
The cosmic-ray event would make it so that it is no longer an implementation of the abstract Python program. You could expand the consideration to include more of the universe, just as you could expand your model to consider the computer program an implementation of the Python program with some constraints: that if this specific transistor gets flipped one too many times it will fry; that there’s a slight possibility of a race condition we didn’t consider at all in our abstract version; that there’s a limit to the speed and heat it can operate at; that a cosmic ray could come from these areas of space and hit it with 0.0x% probability, disrupting functionality...
It still seems quite reasonable to say it is an implementation of the Python program. I’m open to the argument that there isn’t a completely natural, privileged point of consideration from which the computer is implementing the same pattern as another computer, and that the pattern is this Python program. But as I said before, even if this is ultimately somewhat subjective, it still seems to capture quite a lot of the possible ideas?
Like in mathematics: I can have an abstract implementation of a sorting algorithm and prove that a Python program for a more complicated algorithm (bubble sort, whatever) is equivalent. This is missing a lot of details, but that same sort of move is what I’m gesturing at.
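A minimal Python sketch of that move (my own illustration, with the built-in `sorted` standing in for the abstract specification, and equivalence checked exhaustively on small inputs rather than formally proved):

```python
from itertools import permutations

def bubble_sort(xs):
    """A concrete, more roundabout procedure for the same abstract task."""
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

# Extensional equivalence with the abstract spec (played here by `sorted`):
# the two procedures differ internally, yet agree on every small input.
for n in range(5):
    for perm in permutations(range(n)):
        assert bubble_sort(list(perm)) == sorted(perm)
```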
It is merely part of a mathematical model that, as I’ve described in response to Ruby earlier, represents a very lossy compression of the underlying physical substrate
I can understand why you think that just the neurons/connections are too lossy, but I’m very skeptical of the idea that we’d need all of the amplitudes related to the brain/mind. A priori that seems unlikely, what with how little fundamentally turns on the specifics of QM, and what does turn on them could each be implemented specially, as I discussed somewhat above.
(That also reminds me of another reason why people sometimes just mention neurons/connections, which I forgot in my first reply: they assume you’ve already got the basic brain architecture that is shared, and just need to plug in the components that vary.)
I disagree that this distinction between our model and reality has been lost; rather, it has been deemed not too significant, or something you’d study in depth when actually performing brain uploads.
What is “the computation”? Can we try to taboo that word?
As I said in my previous comment, and earlier in this one, I’m open to the idea of computation being subjective instead of a purely natural concept, though I’d expect that there are not that many free variables in pinning down the meaning.
As for tabooing, I think that is kind of hard, as one very simple way of viewing computation is “doing things according to rules”.
You have an expression 5∗3. This is in your mind and relies on subjective interpretations of what the symbols mean.
You implement that abstract program (that abstract doing-things, a chain of rules of inference, a way that things interact) into a computer. The transistors were utilized because they matched the conceptual idea of how switches should function, but they have more complexities than the abstract switch, which introduces design constraints throughout the entire chip.
The chip’s ALU implements this through a bunch of transistors, which are more fundamentally made up of silicon arranged in specific ways that regulate how electricity moves. There are layers upon layers of complexity even as it processes the specific binary representations of the two numbers and shifts them in the right way.
But, despite all this, all that fundamental behavior, all the quantum effects like tunneling which restrict size and positioning, it is computing the answer.
You see the result, 15, and are pretty confident that no differences between your simple model of the computer and reality occurred.
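The binary procedure gestured at above (“shifts them in the right way”) can be sketched as shift-and-add multiplication; this illustrates the abstract algorithm, not the circuit of any particular chip:

```python
# Shift-and-add multiplication: the abstract binary procedure that an
# ALU's multiplier realizes with transistors. Each set bit of b adds a
# correspondingly shifted copy of a into the running result.

def shift_add_multiply(a: int, b: int) -> int:
    result = 0
    while b:
        if b & 1:        # lowest remaining bit of b is set
            result += a  # add the current shifted partial product
        a <<= 1          # shift a left for the next bit position
        b >>= 1          # consume the lowest bit of b
    return result

print(shift_add_multiply(5, 3))  # 15
```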
This is where I think arguments about the subjectivity of computation can be made. Introduce a person who is thinking about a different abstract concept; they encode it as binary, because that’s what you do, and they have an operation that looks like multiplication and produces the same answer for that binary encoding. Then the interpretation of that final binary output is dependent on the mind, because the mind has a different idea of what is being computed (the abstract idea being different, even if those parts match up).
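A standard concrete case of this interpretation-dependence (my own example, not one raised in the thread): the very same bit-level addition implements both unsigned and two’s-complement signed arithmetic, and which abstract operation was “computed” depends on the reader’s encoding.

```python
# One physical operation, two abstract readings: an 8-bit adder produces
# bits that are simultaneously correct for unsigned mod-256 addition and
# for signed two's-complement addition. The circuit doesn't pick a meaning.

def add8(a: int, b: int) -> int:
    return (a + b) & 0xFF              # what the adder does to the bits

def as_signed(x: int) -> int:
    return x - 256 if x >= 128 else x  # two's-complement reading of a byte

bits = add8(0b11111110, 0b00000011)

# Reader A (unsigned): 254 + 3 wraps mod 256 to 1.
assert bits == 1
# Reader B (two's complement): those same input bytes mean -2 and 3; -2 + 3 = 1.
assert as_signed(bits) == 1
```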
But I think a lot of those cases are non-natural, which is part of why I think that even if computation doesn’t make sense as a fundamental thing or a completely natural concept, it still covers a wide area of concern and is a useful tool — similar to how the distinction between values and beliefs is a useful tool even when strictly discussing humans, but even more so. So then, the two calculators are implementing the same abstract algorithm in their silicon, and we fall back to two questions: 1) is the mind within the edge cases, such that it is not entirely meaningful to talk about an abstract program it is implementing? 2) even if they share the same computation, what does that imply?
I think there could and should be more discussion of the complications around computation, with the easy-to-confuse interaction between levels: 1) the ‘completely abstract idea’ (Platonism?); 2) the ‘abstract idea represented in the mind’ (what I’m talking about with abstract; subjective); 3) ‘the physical way that all the parts of this structure behave’ (excessive detail, but as accurate as possible; objective); 4) ‘the way these rules do a specific abstract idea’ (chosen because of abstract ideas: a transistor is chosen because it functions like a switch, and the computer program is compiled in such a way because it matches the textual code you wrote, which matches the abstract idea in your own mind; objective in that it is behaving in such a way, with a possibly subjective interpretation of the implications of that behavior).
We could also view computation through the lens of Turing machines, but then that raises the argument of “what about all these quantum shenanigans; those are not computable by a Turing machine”. I’d say that finite approximations get you almost all of what you want. Then there’s the objection that “Turing machines aren’t available as a fundamental thing”, which is true, and that “Turing machines assume a privileged encoding”, which is part of what I was trying to discuss above.
(I got kind of rambly in this last section; hopefully I haven’t left any branch of the conversation incomplete.)
We could also view computation through the lens of Turing machines, but then that raises the argument of “what about all these quantum shenanigans; those are not computable by a Turing machine”.
I enjoyed reading your comment, but just wanted to point out that a quantum algorithm can be implemented by a classical computer, just with a possibly exponential slowdown. The thing that breaks down is the overhead: any O(f(n)) algorithm on a classical computer is at worst O(f(n)^2) on a Turing machine, whereas for a quantum algorithm with f(n) runtime on a quantum computer, the same decision problem can (I think) require O(2^{f(n)}) runtime on a Turing machine.
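To make the exponential slowdown concrete, here is a minimal brute-force statevector simulation (my own illustration, standard library only): a classical program simulating an n-qubit circuit must track all 2^n amplitudes, so its memory and time grow exponentially in the number of qubits.

```python
import math

def apply_hadamard(state, target, n):
    """Apply a Hadamard gate to qubit `target` of an n-qubit state vector.

    The state is a list of 2^n real amplitudes; the classical simulator
    has to touch all of them, which is where the exponential cost lives.
    """
    s = 1 / math.sqrt(2)
    new = [0.0] * (1 << n)
    mask = 1 << target
    for i, amp in enumerate(state):
        if amp == 0.0:
            continue
        if i & mask == 0:              # target bit is 0: |0> -> (|0>+|1>)/sqrt(2)
            new[i] += s * amp
            new[i | mask] += s * amp
        else:                          # target bit is 1: |1> -> (|0>-|1>)/sqrt(2)
            new[i & ~mask] += s * amp
            new[i] -= s * amp
    return new

n = 3
state = [0.0] * (1 << n)
state[0] = 1.0                # start in |000>
for q in range(n):            # one layer of Hadamards across the register
    state = apply_hadamard(state, q, n)

# The register is now in the uniform superposition: all 8 amplitudes 1/sqrt(8).
```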