That’s a difficult question to answer, so I’ll give you the first thing I can think of: it’s still me, just a lower percentage of me. I’m not that confident it can be put on a linear scale, though.
That is one of the reasons why I think binary-consciousness models are likely to be wrong.
There are many differences between brains and computers: different structure, different purpose, different properties. I’m pretty confident (>90%) that my computer isn’t conscious right now, and the phenomenon of consciousness may have specific qualities that are absent from its counterpart in your analogy. My objection to using such analogies is that you can miss important details. Still, they are often useful for illustrating one’s beliefs.
Do you have any of these qualities in mind? It seems strange to reject something because “maybe” it has a quality that distinguishes it from another case. Can you point to any of these details that are relevant?
I don’t think it’s strange. First, the analogy does have distinguishing qualities; the question is whether they are relevant. You choose an analogy that shares the qualities you currently think are relevant, then analyze the analogy and reach certain conclusions. But it is easy to overlook a step in that analysis which happens to depend on a property of the original that you had previously judged irrelevant, and you can fail to see this, because the property is absent from the analogy. So I think that double-checking conclusions reached by reasoning from analogy is a necessary safety measure.
As for specific examples: something like Penrose’s quantum consciousness (though I don’t actually believe in it). Or any other reason why consciousness (not intelligence!) can’t be reproduced in our computing devices (I don’t believe that either).
I’m not saying not to double-check them. My problem was that you seemed to have reached a conclusion that requires a relevant difference to exist, but didn’t identify one.
Even repeating the thought experiment with a quantum computer doesn’t seem to change my intuition.