I disagree that it’s only the entire brain that is capable of computing anything, and I didn’t mean to compare it to a single neuron (hence the plural “s”).
However, I highlighted the simplicity of the actions available to single neurons to counteract “a database just does lookups, surely it cannot be conscious”. Why should (the totality of) neurons just opening and closing simple structures be conscious, and a database not be? Both rely on simple operations as atomic actions, and on simple structures as a physical substrate. Yet unless one denies consciousness altogether, we do ascribe consciousness to (a large number of) neurons (each with their basic functionality); why not to a large number of capacitors (on which a database is stored)?
I.e. the point was to put them in a similar class, or at least to show that we cannot trivially put databases in a different class than neural networks.
The problem is that this argument applies equally well to “why not consider rocks (which, like brains, are made of a large number of atoms) conscious?”. Simply noting that they’re made of simple parts leaves the high-level structure unexamined.
Well, I just imagined a bunch of things: a Rubik’s Cube spinning, a piece of code I worked on today, some of my friends, a cat… There are patterns of activation of neurons in my head which correspond to those things. Perhaps somewhere there’s even an actual distorted image.
Where in the database is the image of that cat, again?
By the way, there are many subjectively distinct ways to produce the above string as well. I could simply have memorized the whole paragraph, and memorized that I must say it at such-and-such a date and time. That’s clearly distinct from actually imagining those things.
One could picture an optimization on WBEs that would entirely wipe out the ability to mentally visualize things and perceive them, with or without an extra hack so that the WBE acts as if it did visualize them. E.g. it could instead use some CAD/CAM tool without ever producing a subjective experience of seeing an image from that tool. One could argue that this tool did mentally visualize things, yet there are different ways to integrate such a tool: some involve you actually seeing its output, and some don’t. Absent an extra censorship hack, you would be able to tell us which way you’re using; with such a hack present, you would be unable to tell us, but the hack might be structured so that we are well assured it alters only external behaviour, not any internal experiences.
edit: bottom line is, we all know that different subjective experiences can produce the same objective output. When you first do some skilful work, you feel yourself think about it, a lot. When you have done it long enough, your neural networks optimize; the outcome is basically the same, but internally you no longer feel how you do it. It’s done on instinct.