And here is the question: does that sentence describe an actual possibility or not?
What if you were a big database that simply stores an answer to every question I can ask you? Can you seriously consider the possibility that you are merely a database that does this purely mechanical operation? This database does not think, it just answers. For all I know you might be such a database, but I am pretty sure that I am not one, nor would I want to be replaced with one.
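For concreteness, a toy version of such a database might look like the sketch below; the table contents and the answer() helper are made up purely for illustration. The point is only that answering reduces to string comparison and table lookup, with nothing that resembles thinking in between.

    #include <stdio.h>
    #include <string.h>

    /* Toy stand-in for the database in the thought experiment: a fixed table of
     * (question, answer) pairs. The real thing would need an entry for every
     * question that could ever be asked, which is what makes it so physically
     * unlike a brain. */
    struct qa { const char *question; const char *answer; };

    static const struct qa table[] = {
        { "Are you conscious?",           "Of course I am." },
        { "What are you thinking about?", "A cat, among other things." },
    };

    /* Answering is pure lookup: compare strings, return the stored entry. */
    static const char *answer(const char *question) {
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
            if (strcmp(table[i].question, question) == 0)
                return table[i].answer;
        return "(no entry)";
    }

    int main(void) {
        puts(answer("Are you conscious?"));
        return 0;
    }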
Or let’s consider two programs that take a string and always return zero (sketched in code below). One runs a WBE twice, letting it input a number into a textbox, then returns the difference of those numbers (which is zero). The other simply returns zero. Mathematically they are identical; physically they are distinct processes. If we are to proclaim that they are subjectively distinct (you could be living in one of them right now, but not in the other), then we are treating two different physical systems that implement the same mathematical function as very distinct, as far as being those systems goes.
Which of course makes problematic any argument that a WBE must be the same as a biological brain on the grounds of some mathematical equivalence, since even among WBEs, mathematical equivalence does not guarantee subjective equivalence.
(I for one think that brain simulators are physically similar enough to biological brains that I wouldn’t mind being replaced by a brain simulation of me, but that’s not because of some mathematical equivalence; it’s because they are physically quite similar, unlike a database of every possible answer, which would be physically very different. I’d be wary of extensively optimizing a brain simulation of me into something mathematically equivalent but simpler.)
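A minimal sketch of the two programs described above, with the emulation replaced by a placeholder: run_wbe_and_read_textbox() is a hypothetical stand-in for an actual WBE (assumed, not shown, and assumed deterministic, so two runs on the same prompt yield the same number). Both routines compute the same constant-zero function of their input; only the physical process behind it differs.

    #include <stdio.h>

    /* Hypothetical stand-in for "run a whole-brain emulation on this prompt and
     * read the number it types into a textbox". Here it just returns a fixed
     * value so the sketch compiles and runs. */
    static long run_wbe_and_read_textbox(const char *prompt) {
        (void)prompt;
        return 42; /* whatever number the emulated person happens to type */
    }

    /* Program one: run the emulation twice and return the difference of the two
     * numbers, which is zero by construction. */
    static long program_one(const char *input) {
        return run_wbe_and_read_textbox(input) - run_wbe_and_read_textbox(input);
    }

    /* Program two: just return zero. */
    static long program_two(const char *input) {
        (void)input;
        return 0;
    }

    int main(void) {
        printf("%ld %ld\n", program_one("any string"), program_two("any string"));
        return 0;
    }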
I’ve lost you here. What does “a choice of being” mean? What is this mapping that includes some… beings… and not others?
Well, your “if I were a WBE” is itself an instance of you choosing a WBE for the purposes of an example.
OK, I understand your position now. You’re saying (correct me if I’m wrong) that when I have uncertainty about what is implementing “me” in the physical world—whether e.g. I’m a natural human, or a WBE whose inputs lie to it, or a completely different kind of simulated human—then if I rule out certain kinds of processes from being my implementations, that is called not believing these processes could be “conscious”.
Could I be a WBE whose inputs are remotely connected to the biological body I see when I look down? (Ignoring the many reasons this would be improbable in the actual observed world, where WBEs are not known to exist.) I haven’t looked inside my head to check, after all. (Actually, I’ve had CT scans, but the doctors may be in on the plot.)
I don’t see a reason why I shouldn’t be able to be a WBE. Take the scenario where a human is converted into a WBE by replacing one neuron at a time with a remotely controlled I/O device, connected wirelessly to a computer emulating that neuron. It would then be possible to switch the connections to link with a physically different, though similar, body.
I see no reason to suppose that, if I underwent such a process, I would stop being “conscious”, either gradually or suddenly.
What if you were a big database that simply stores an answer to every question I can ask you? Can you seriously consider the possibility that you are merely a database that does this purely mechanical operation? This database does not think, it just answers.
That I’m less certain about. The brain’s internal state and implementation details might be relevant. But that is exactly why I have a much higher prior that a WBE is “conscious” than that any other black-box functional equivalent of a brain is.
Your neurons (ETA: individually or collectively) do not think, they just operate ligand-gated ion channels (among assorted other things, you get the point).
One runs a WBE twice, letting it input a number into a textbox, then returns the difference of those numbers (which is zero). The other simply returns zero. Mathematically they are identical; physically they are distinct processes. If we are to proclaim that they are subjectively distinct (you could be living in one of them right now, but not in the other), then we are treating two different physical systems that implement the same mathematical function as very distinct, as far as being those systems goes.
That example deserves a post of its own, excellent. Nearly any kind of WBE would rely on optimizing (while maintaining functional equivalence) and translating to a different substrate. The resulting WBE would still proclaim itself to be conscious, and for most people that would be enough to think it so.
However, how do we know which of the many redundancies we could get rid of, and which are instrumental to actually experiencing consciousness? If output behavior is strictly sufficient, then would main() { printf("I'm conscious right now"); } and a human saying the same line both be conscious at that moment?
If output behavior isn’t strictly sufficient, how will we ever encode neural patterns in silico, if the one parameter we can measure (how the system behaves) isn’t trustworthy?
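For what it’s worth, here is that inline snippet written out as a complete, compilable program. Its output matches a person saying the same sentence, and that output is all it has; whether that could possibly suffice is exactly what the question above turns on.

    #include <stdio.h>

    /* A complete program whose only behavior is to emit the claim. If output
     * behavior were strictly sufficient, this would have to count as conscious
     * at the moment it runs. */
    int main(void) {
        printf("I'm conscious right now\n");
        return 0;
    }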
In the thought experiment, the database is the entirety of the replacement, which is why the analogy to a single neuron is inappropriate. (Unless I’ve misunderstood the point of your analogy. Anyway, it’s useless to point to neurons as an example of a thing that also doesn’t think, because a neuron by itself also doesn’t have consciousness. It’s the entire brain that is capable of computing anything.)
I disagree that it’s just the entire brain that is capable of computing anything, and I didn’t mean to compare to a single neuron (hence the plural “s”).
However, I highlighted the simplicity of the actions which are available to single neurons to counteract “a database just does lookups, surely it cannot be conscious”. Why should (the totality of) neurons just opening and closing simple structures be conscious, and a database not be? Both rely on simple operations as atomic actions, and on simple structures as a physical substrate. Yet unless one denies consciousness altogether, we do ascribe consciousness to (a large number of) neurons (each with their basic functionality), so why not to a large number of capacitors (on which a database is stored)?
I.e. the point was to put them in a similar class, or at least to show that we cannot trivially put databases in a different class than neural networks.
Yet unless one denies consciousness altogether, we do ascribe consciousness to (a large number of) neurons (each with their basic functionality), so why not to a large number of capacitors (on which a database is stored)?
The problem is that this argument applies equally well to “why not consider rocks (which, like brains, are made of a large number of atoms) conscious”. Simply noting that they’re made of simple parts leaves the high-level structure unexamined.
Well, I just imagined a bunch of things—a Rubik’s cube spinning, a piece of code I worked on today, some of my friends, a cat… There are patterns of activation of neurons in my head which correspond to those things. Perhaps somewhere there’s even an actual distorted image.
Where in the database is the image of that cat, again?
By the way, there are a lot of subjectively distinct ways to produce the above string as well. I could simply have memorized the whole paragraph, and memorized that I must say it at such-and-such a date and time. That’s clearly distinct from actually imagining those things.
One could picture an optimization of WBEs that would entirely wipe out the ability to mentally visualize things and perceive them, with or without an extra hack so that the WBE acts as if it did visualize them (e.g. it could instead use some CAD/CAM tool without ever producing a subjective experience of seeing an image from that tool). One could argue that this tool did mentally visualize things, yet there are different ways to integrate such a tool: some involve you actually seeing the output from the tool, and some don’t. Absent an extra censorship hack, you would be able to tell us which one you’re using; with such a hack in place, you would be unable to, but the hack might be so structured that we are quite assured it doesn’t alter any internal experiences, only external ones.
edit: the bottom line is, we all know that different subjective experiences can produce the same objective output. When you are first doing some skilful work, you feel yourself think about it, a lot. When you have done it long enough, your neural networks optimize, and the outcome is basically the same, but internally you no longer feel how you do it; it’s done on instinct.
Your neurons (ETA: individually or collectively) do not think, they just operate ligand-gated ion channels (among assorted other things, you get the point).
One would do well not to confuse the parts with the whole. After all, transistors do not solve chess problems.
Yes, which is why I used that as a reductio for “This database does not think, it just answers.”