OK, I understand your position now. You’re saying (correct me if I’m wrong) that when I’m uncertain about what is implementing “me” in the physical world (whether, e.g., I’m a natural human, a WBE whose inputs lie to it, or a completely different kind of simulated human), ruling out certain kinds of processes as possible implementations of me is what you call not believing those processes could be “conscious”.
Could I be a WBE whose inputs are remotely connected to the biological body I see when I look down? (Ignoring the many reasons this would be improbable in the actual observed world, where WBEs are not known to exist.) I haven’t looked inside my head to check, after all. (Actually, I’ve had CT scans, but the doctors may be in on the plot.)
I don’t see any reason why I couldn’t be a WBE. Take the scenario where a human is converted into a WBE by replacing one neuron at a time with a remotely controlled I/O device, connected wirelessly to a computer emulating that neuron. Once the conversion is complete, the connections could be switched to link with a physically different, though similar, body.
I see no reason to suppose that, if I underwent such a process, I would stop being “conscious”, either gradually or suddenly.
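Purely to make the thought experiment concrete, here is a toy Python sketch (all class and function names are hypothetical, and the “neuron model” is a deliberately silly stand-in): the point is only that the biological neuron and its remote emulation expose the same interface, so the rest of the brain cannot tell, at any step of the replacement, which kind it is talking to.

```python
import random


class BiologicalNeuron:
    """Stand-in for the original wetware neuron."""

    def step(self, inputs):
        # Toy threshold rule; the real dynamics don't matter here,
        # only the input/output behaviour does.
        return 1.0 if sum(inputs) > 0.5 else 0.0


class EmulatedNeuron:
    """Remote I/O device wired to a computer emulating the same neuron."""

    def step(self, inputs):
        # Same rule, computed elsewhere; behaviourally indistinguishable.
        return 1.0 if sum(inputs) > 0.5 else 0.0


def replace_one_neuron(brain):
    """Swap a single remaining biological neuron for its emulated counterpart."""
    bio_indices = [i for i, n in enumerate(brain) if isinstance(n, BiologicalNeuron)]
    if bio_indices:
        brain[random.choice(bio_indices)] = EmulatedNeuron()


# Both kinds of neuron answer identically to the same inputs.
inputs = [0.2, 0.4]
assert BiologicalNeuron().step(inputs) == EmulatedNeuron().step(inputs)

# Gradual conversion: at every intermediate step the brain's overall
# input/output behaviour is unchanged, which is why it is hard to name a
# point at which consciousness would stop, gradually or suddenly.
brain = [BiologicalNeuron() for _ in range(100)]
while any(isinstance(n, BiologicalNeuron) for n in brain):
    replace_one_neuron(brain)
```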
What if you were a big database that simply stores an answer to every question I can ask you? Can you seriously consider the possibility that you are merely a database performing this purely mechanical operation? Such a database does not think; it just answers.
That I’m less certain about. The brain’s internal state and implementation details might be relevant. But that is exactly why I have a much higher prior that a WBE is “conscious” than that any other merely black-box-equivalent functional analogue of a brain is conscious.
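Again just to make the contrast concrete (hypothetical names; Fibonacci stands in for “any question you can ask”), here is a toy sketch of two responders that are black-box equivalent on the questions the table covers, but differ completely in what happens internally at answer time. My differing priors track exactly that internal difference, not the external behaviour.

```python
def fib(n):
    """Compute the n-th Fibonacci number step by step."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a


class SimulatingResponder:
    """Actually runs the computation when asked."""

    def answer(self, n):
        return fib(n)


class LookupResponder:
    """Stores a precomputed answer to every question it can be asked."""

    def __init__(self, max_n):
        # The table is built offline, once.
        self.table = {n: fib(n) for n in range(max_n + 1)}

    def answer(self, n):
        # No computation at question time, just retrieval.
        return self.table[n]


# From the outside the two are indistinguishable on covered questions.
assert SimulatingResponder().answer(30) == LookupResponder(30).answer(30)
```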