Very sure. The biological view just seems to be a tacked-on requirement to reject emulations by definition. Anyone who holds the biological view should answer the questions in this thought experiment.
A new technology is created to extend the life of the human brain. If any brain cell dies, it is immediately replaced with a cybernetic replacement. This cybernetic replacement fully emulates all interactions it can have with any neighboring cells, including any changes in those interactions based on inputs received and time passed, but is not biological. Over time the subject’s whole brain is replaced, cell by cell. Consider the resulting brain. Either it perfectly emulates a human mind or it doesn’t. If it doesn’t, what is there to the human mind besides the interactions of brain cells? Either it is conscious or it isn’t. If it isn’t, how was consciousness lost, and at what point in the process?
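To make the setup concrete, here’s a toy sketch of the replacement process in Python. It’s purely illustrative: real neurons are not stateless weighted sums, and the Cell class and weights are invented for this example. The point is only to exhibit the structure of the argument, namely that if each swap preserves the cell’s input-output function, behavior is unchanged at every step:

```python
import random

# Toy model of the thought experiment. A "cell" is reduced to a pure
# input-output function, which real neurons are not; this only shows
# the structure of the argument, not anything about actual brains.

class Cell:
    def __init__(self, weight, kind):
        self.weight = weight
        self.kind = kind  # "biological" or "cybernetic"

    def respond(self, inputs):
        # Both kinds compute the same function, per the premise that the
        # replacement fully emulates every interaction of the original.
        return self.weight * sum(inputs)

def brain_output(cells, stimulus):
    return [c.respond(stimulus) for c in cells]

random.seed(0)
brain = [Cell(random.random(), "biological") for _ in range(100)]
stimulus = [1.0, 0.5, -0.25]
before = brain_output(brain, stimulus)

# Replace cells one by one, checking behavior after every single swap.
for i, cell in enumerate(brain):
    brain[i] = Cell(cell.weight, "cybernetic")
    assert brain_output(brain, stimulus) == before  # behavior unchanged

assert all(c.kind == "cybernetic" for c in brain)
# Externally identical at every step -- so if consciousness was lost
# along the way, nothing in the behavior marks the point.
```

The assertion passing after every swap is what makes the question bite: anyone who says consciousness was lost has to locate the loss somewhere behavior cannot show it.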
Very sure. The biological view just seems to be a tacked-on requirement to reject emulations by definition.
Note that I specifically said in the OP that I’m not much concerned about the biological view being right, but about some third possibility nobody’s thought about yet.
Anyone who holds the biological view should answer the questions in this thought experiment.
A new technology is created to extend the life of the human brain. If any brain cell dies, it is immediately replaced with a cybernetic replacement. This cybernetic replacement fully emulates all interactions it can have with any neighboring cells, including any changes in those interactions based on inputs received and time passed, but is not biological. Over time the subject’s whole brain is replaced, cell by cell. Consider the resulting brain. Either it perfectly emulates a human mind or it doesn’t. If it doesn’t, what is there to the human mind besides the interactions of brain cells? Either it is conscious or it isn’t. If it isn’t, how was consciousness lost, and at what point in the process?
This is similar to an argument Chalmers gives. My worry here is that brain damage seems to do weird, non-intuitive things to a person’s state of consciousness, so one-by-one replacement of neurons might do similarly weird things, perhaps slowly causing you to lose consciousness without realizing what was happening.
That is probably the best answer. It has the weird aspect of putting consciousness on a continuum, and one that isn’t easy to quantify. If someone with 50% cyber brain cells is 50% conscious, but their behavior is the same as that of a 100% biological, 100% conscious brain, it’s a little strange.
Also, it means that consciousness isn’t a binary variable: for this answer to make sense, consciousness must be a continuum. That is an important point regardless of the definition we use.
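To see the strangeness numerically, suppose (a loud assumption invented here just for illustration; nothing above establishes this measure) that consciousness is scored as the fraction of cells still biological. Then consciousness falls linearly while behavior never changes:

```python
# Loud assumption for illustration only: "consciousness" is scored as
# the fraction of cells that are still biological. The thought
# experiment itself doesn't justify this measure.
TOTAL_CELLS = 100

for replaced in (0, 25, 50, 75, 100):
    consciousness = (TOTAL_CELLS - replaced) / TOTAL_CELLS
    # Behavior is identical at every stage, by the premise of perfect emulation.
    print(f"{replaced:3d} cells replaced: "
          f"consciousness = {consciousness:.2f}, behavior = identical")
```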
It has the weird aspect of putting consciousness on a continuum,
I find I feel less confused about consciousness when thinking of it as a continuum. I’m reminded of this, from Heinlein:
“Am not going to argue whether a machine can ‘really’ be alive, ‘really’ be self-aware. Is a virus self-aware? Nyet. How about oyster? I doubt it. A cat? Almost certainly. A human? Don’t know about you, tovarishch, but I am.”
This cybernetic replacement fully emulates all interactions it can have with any neighboring cells, including any changes in those interactions based on inputs received and time passed, but is not biological.
Why would that be possible? Neurons have to process biochemicals. A full replacement would have to as well. How could it do that without being at least partly biological?
It might be the case that an adequate replacement (as opposed to a full replacement) could be non-biological. But it might not.
It’s a thought experiment. It’s not meant to be a practical path to artificial consciousness or even brain emulation. It’s a conceptually possible scenario that raises interesting questions.
I find I feel less confused about consciousness when thinking of it as a continuum. I’m reminded of this, from Heinlein:
Absolutely. I do too. I just realized that the continuum raises another interesting question.
Is the following scale of consciousness correct?
Human > Chimp > Dog > Toad > Any possible AI with no biological components
The biological requirement seems to imply this ordering, and that seems wrong to me.
It’s a conceptually possible scenario that raises interesting questions.
I am saying it is not conceptually possible to have something that precisely mimics a biological entity without being biological.
It will need to have a biological interface, but its insides could be non-biological.