Very sure. The biological view just seems to be a tacked-on requirement to reject emulations by definition.
Note that I specifically said in the OP that I’m not much concerned about the biological view being right, but about some third possibility nobody’s thought about yet.
Anyone who holds the biological view should answer the questions in this thought experiment.
A new technology is created to extend the life of the human brain. Whenever a brain cell dies, it is immediately replaced with a cybernetic replacement. This replacement fully emulates every interaction it can have with neighboring cells, including any changes in those interactions based on inputs received and time passed, but it is not biological. Over time the subject’s whole brain is replaced, cell by cell. Consider the resulting brain. Either it perfectly emulates a human mind or it doesn’t; if it doesn’t, then what is there to the human mind besides the interactions of brain cells? Either it is conscious or it isn’t; if it isn’t, then how was consciousness lost, and at what point in the process?
This is similar to an argument Chalmers gives. My worry here is that brain damage can do weird, non-intuitive things to a person’s state of consciousness, so one-by-one replacement of neurons might do similarly weird things, perhaps slowly causing you to lose consciousness without realizing what was happening.
That is probably the best answer. It has the weird aspect of putting consciousness on a continuum, and one that isn’t easy to quantify. If someone with 50% cyber brain cells is 50% conscious, but their behavior is the same as that of a 100% biological, 100% conscious brain, it’s a little strange.
Also, it means that consciousness isn’t a binary variable: for this answer to make sense, consciousness must be a continuum. That is an important point regardless of which definition we use.
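To make the strangeness concrete, here’s a minimal toy sketch (my own illustration, nothing from the thread; equating consciousness with the fraction of cells still biological is purely the continuum view’s assumption):

```python
# Toy model of gradual replacement under the continuum view (assumed,
# not established): "consciousness" is just the biological cell
# fraction, while behavior is substrate-independent because each
# cybernetic cell emulates its predecessor's interactions perfectly.

N = 100  # toy brain of 100 cells

def behavior():
    # Identical at every stage of replacement, by stipulation of the
    # thought experiment.
    return "same output"

for replaced in range(0, N + 1, 25):
    consciousness = (N - replaced) / N  # the continuum view's measure
    print(f"{replaced}% cyber cells: "
          f"consciousness={consciousness:.2f}, behavior={behavior()}")
```

The measure falls smoothly to zero while the behavior column never changes, which is exactly the oddity described above.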
It has the weird aspect of putting consciousness on a continuum,
I find I feel less confused about consciousness when thinking of it as a continuum. I’m reminded of this, from Heinlein:
“Am not going to argue whether a machine can ‘really’ be alive, ‘really’ be self-aware. Is a virus self-aware? Nyet. How about oyster? I doubt it. A cat? Almost certainly. A human? Don’t know about you, tovarishch, but I am.”
I find I feel less confused about consciousness when thinking of it as a continuum.
Absolutely, I do too. I just realized that the continuum raises another interesting question.
Is the following scale of consciousness correct?
Human > Chimp > Dog > Toad > Any possible AI with no biological components
The biological requirement seems to imply this. It seems wrong to me.