In other words, so long as we talk simply about computation, there is nothing at all to inherently make an AI insist that its “experience” can’t be made of physical entities. It’s just a matter of ontological presuppositions.
Is there anything to inherently prevent it from insisting that? Should we accept our ontological presuppositions at face value?
See next section.
I would be able to tell the difference between an ontology in which they exist, and an ontology in which they don’t.
No, you wouldn’t. People can’t tell the difference between ontologies, any more than math changes if you print its theorems in a different color. People can tell the difference between different mathematical laws of physics, or between different arrangements of stuff within those laws. What you notice is that you have a specific class of gensyms that can’t have relations of reduction to other symbols, or something else computational. Facts about ontology are totally orthogonal to facts about the things that influence what words you type.
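To make that concrete, here is a minimal Python sketch (the token names, the stimulus labels, and the report function are illustrative assumptions, not a model of actual neural code): a gensym’s only computational property is its identity, so the words the system emits can discriminate stimuli while remaining blind to which token is bound to which role.

```python
# A minimal sketch of the gensym point. RED and GREEN are fresh opaque
# tokens: their only computational property is identity. Every name here
# is an illustrative assumption, not a claim about neural implementation.
RED, GREEN = object(), object()

def report(stimulus, bindings, memory):
    """Say whether this stimulus evokes the same token as the last one."""
    token = bindings[stimulus]
    same = "same color as before" if token is memory.get("last") else "a new color"
    memory["last"] = token
    return same

def run(bindings):
    """Feed a fixed stimulus sequence through the system; collect its words."""
    memory = {}
    return [report(s, bindings, memory)
            for s in ("long_wave", "long_wave", "short_wave")]

normal  = {"long_wave": RED,   "short_wave": GREEN}
swapped = {"long_wave": GREEN, "short_wave": RED}   # the "inverted spectrum"

# The words the system emits depend only on the identity structure of the
# tokens, not on which token is which: the swap is undetectable from inside.
assert run(normal) == run(swapped)
print(run(normal))  # ['a new color', 'same color as before', 'a new color']
```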
We are talking at cross-purposes here. I am talking about an ontology which is presented explicitly to my conscious understanding. You seem to be talking about ontologies at the level of code—whatever that corresponds to, in a human being.
If someone tells me that the universe is made of nothing but love, and I observe that hate exists and that this falsifies their theory, then I’ve made a judgement about an ontology at both a logical and an empirical level. That’s what I was talking about when I said that if you swapped the two colors, I couldn’t detect the swap, but I’d still know empirically that color is real, and I’d still be able to make logical judgements about whether an ontology (like current physical ontology) contains such an entity.
Your sentence about gensyms is interesting as a proposition about the computational side of consciousness, but…
A neuron is a glob of trillions of atoms doing inconceivably many things at once. You’re focusing on a few of the simple differential sub-perceptions which make up the experience of looking at that image, associating them in your mind with certain gross changes of state in that glob of atoms, and proposing that the experience is identical to a set of several such simultaneous changes occurring in a few neurons. In doing so, you’re neglecting both the bulk of the physical events occurring elsewhere in the neurons, and the fundamental dissimilarity between “staring at a few homogeneous patches of color” and “billions of ions cascading through a membrane”.
My consciousness is a computation based mainly or entirely on regularities the size of a single neuron or bigger, much like the browser I’m typing in is based on regularities the size of a transistor. I wouldn’t expect to notice if my images were, really, fundamentally, completely different. I wouldn’t expect to notice if something physical happened, say the number of ions was cut by a factor of a million and their charge reversed, as long as the functions from impulses to impulses computed by neurons stayed the same.
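A toy Python sketch of what that would mean (the threshold model and the specific numbers are assumptions made up to echo the ion example, not neuroscience): two implementations with radically different internals compute the same function from impulses to impulses, so no downstream computation can distinguish them.

```python
# A hedged sketch of substrate independence. The threshold model and the
# numbers are illustrative assumptions only, not a model of real neurons.
from itertools import product

def neuron_many_positive_ions(inputs):
    """Sum large positive charges per input impulse; fire past a threshold."""
    charge = sum(1_000_000 * x for x in inputs)
    return 1 if charge >= 2_000_000 else 0

def neuron_few_negative_ions(inputs):
    """A millionth the ions, opposite charge: same impulse-to-impulse map."""
    charge = sum(-1 * x for x in inputs)
    return 1 if charge <= -2 else 0

# Exhaustively check every binary input pattern of width 3: the two
# implementations agree everywhere despite the physical differences,
# so nothing downstream of them could notice the substitution.
for inputs in product((0, 1), repeat=3):
    assert neuron_many_positive_ions(inputs) == neuron_few_negative_ions(inputs)
```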
… if gensyms only exist on that scale, and if changes like those which you describe make no difference to experience, then you ought to be a dualist, because clearly the experience is not identical to the physical in this scenario. It is instead correlated with certain physical properties at the neuronal scale.
It’s more like the difference between night and day. It is possible to attain a higher perspective which unifies them, but you don’t get there by saying that day is just night by another name.
Uniform color and edgeness are as different as night and day.
They are, but I was actually talking about the difference between colorness/edgeness and neuronness.