Do you think there are edge cases where I ask “Is such-and-such system running the Miller-Rabin primality test algorithm?”, and the answer is not a clear yes or no, but rather “Well, umm, kinda…”?
(Not rhetorical! I haven’t thought about it much.)
I think there’s a practically infinite number of edge cases. For a system to run the algorithm, it would have to perform a sequence of operations on natural numbers. If we simplify this a bit, we could just look at the values of the variables in the program (like a, x, y; I don’t actually know the algorithm, I’m just looking at the pseudo-code on Wikipedia). If the algorithm is running, then each variable goes through a particular sequence, so we could just use this as a criterion and say the system runs the algorithm iff one of these particular sequences is instantiated.
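For concreteness, here's a minimal sketch of the Miller-Rabin test in Python (my own rendering of the Wikipedia pseudo-code, not anything anyone here has committed to): the sequences of values that `a` and `x` take on are exactly the kind of thing the criterion above would have to track.

```python
import random

def miller_rabin(n, k=20):
    """Probabilistic primality test: False means composite,
    True means probably prime (error probability <= 4**-k)."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # Write n - 1 as 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)  # random witness candidate
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)   # this sequence of x values is what a
            if x == n - 1:     # "is the system running the algorithm?"
                break          # criterion would have to see instantiated
        else:
            return False       # a witnesses that n is composite
    return True
```

Note the randomness: even two runs on the same input produce different `a` (and hence `x`) sequences, which already complicates the "one particular sequence" criterion a little.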
Even in this simplified setting, figuring this out requires a mapping of physical states to numbers. If you start agreeing on a fixed mapping (like with a computer, we agree that this set of voltages at this location corresponds to the numbers 0-255), then that’s possible to verify. But in general you don’t know, which means you have to check whether there exists at least one mapping that does represent these sequences. Considered very literally, this is probably always true since you could have really absurd and discontinuous mappings (if this pebble here has mass between 0.5g and 0.51g it represents the number 723; if it’s between 0.51g and 0.52g it represents 911...) -- actually you have infinitely many mappings even after you agree on how the system partitions into objects, which is also debatable.
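To make the "there always exists a mapping" point concrete, here's a toy sketch (hypothetical names throughout; the one assumption is that the physical readings are at least distinguishable): given any sequence of distinct measurements, you can construct by fiat a lookup table that makes them encode whatever number sequence you like.

```python
def make_mapping(measurements, target_sequence):
    """Given distinct physical readings (e.g. pebble masses) and any
    desired number sequence of the same length, return a lookup table
    that 'interprets' each reading as the corresponding number."""
    assert len(measurements) == len(set(measurements)), "readings must be distinct"
    return dict(zip(measurements, target_sequence))

# Pebble masses in grams, measured over time:
masses = [0.503, 0.512, 0.498, 0.507]
# We *want* them to encode this sequence, so we simply decree that they do:
mapping = make_mapping(masses, [723, 911, 4, 17])
decoded = [mapping[m] for m in masses]  # [723, 911, 4, 17]
```

The construction never fails as long as the readings are distinguishable, which is why, without some constraint on what counts as a reasonable mapping, the existence question is trivially answered "yes".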
So without any assumptions, you start with a completely intractable problem and then have to figure out how to deal with this (… allow only reasonable mappings? but what’s reasonable? …), which in practice doesn’t seem like something anyone has been able to do. So even if I just show you a bunch of sand trickling through someone’s hands, it’s already a hard problem to prove that this doesn’t represent the Miller-Rabin test algorithm. It probably represents some sequence of numbers in a not-too-absurd way. There are some philosophers who have just bitten the bullet and concluded that any and all physical systems compute, which is called pancomputationalism. The only actually formal rule for figuring out a mapping that I know is from IIT, which is famously hated on LW (and also has conclusions that most people find absurd, such as that digital computers can’t be conscious at all, because the criterion ends up caring more about the hardware than the software). The thing is that most of the particles inside a computer don’t actually change all that much depending on the program; it’s really only a few specific locations where electrons move around. That’s enough if we decide that our mapping only cares about those locations, but not so much if you start with a rule applicable to arbitrary physical systems.
That all said, I think illusionists have the pretty easy out of just saying that computation is frame-dependent, i.e., that the answer to “what is this system computing” depends on the frame of reference, specifically the mapping from physical states to mathematical objects. It’s really only a problem you must solve if you both think that (a) consciousness is well-defined, frame-invariant, camp #2 style, etc., and also (b) the consciousness of a system depends on what it computes.
For me the answer is yes. There’s some way of interpreting the colors of grains of sand on the beach as they swirl in the wind that would perfectly implement the Miller-Rabin primality test algorithm. So is the wind + sand computing the algorithm?
Well, it probably is computing insofar as it’s the wind bringing in the actual bits of information, not you searching for a specific pattern instantiation. The test: if the original grains were moved around to form another prime number, would the wind still process them in a similar way and yield the correct answer?
This seems arbitrary to me. I’m bringing in bits of information on multiple layers when I write a computer program to calculate the thing and then read out the result from the screen.
Consider: if the transistors on the computer chip were moved around, would it still process the data in the same way and yield the correct answer?
Yes under some interpretation, but no from my perspective, because the right answer is about the relationship between what I consider computation and how I interpret the results I’m getting.
But the real question for me is—under a computational perspective of consciousness, are there features of this computation that actually correlate to strength of consciousness? Does any interpretation of computation get equal weight? We could nail down a precise, agreed-upon definition of what we mean by consciousness that didn’t have the issues mentioned above, but who knows whether that would be the definition that actually maps to the territory of consciousness?
I recently came across unsupervised machine translation here. It’s not directly applicable, but it opens the possibility that, given enough information about “something”, you can pin down what it’s encoding in your own language.
So let’s say now that we have a computer that simulates a human brain in a manner that we understand. Perhaps there really could be a sense in which it simulates a human brain that is independent of our interpretation of it. I’m having some trouble formulating this precisely.
Right, and per the second part of my comment—insofar as consciousness is a real phenomenon, there’s an empirical question of whether whatever frame-invariant definition of computation you’re using is the correct one.