Even if you are willing to consider a computer program with many falsifiable groundings to be conscious, you still needn’t worry about how you treat that computer program, because you can’t be nice to it no matter how hard you try. It’s pointless to treat an agent as having rights if it doesn’t have a stable symbol-grounding, because what is desirable to it at one moment might cause it indescribable agony the next. And even if you are nice to the consciousness with the grounding intended by the system’s designer, you will be causing misery to an astronomical number of equally-real, alternately-grounded consciousnesses.
I disagree. If multiple consciousnesses are instantiated in a single physical system, you should figure out what each of them is, and be nice to as many of them as you can. The existence of an astronomical number of alternatively real beings is no excuse to throw up your hands and declare it impossible to figure out what they all want; the number of humans is already pretty big, but I’m not about to give up on pleasing them.
The alternatively real beings you’re talking about are probably pretty similar to each other, so you’re unlikely to cause agony to one while pleasing another.
For example, I have a bowl containing 180 white Go stones, and I don’t know anything about any of them that I don’t know about the others. Thus there are at least 180! possible groundings for my knowledge base. Regardless of which grounding you choose, my preferences are about the same.
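As a quick check on that figure (a sketch of my own, not part of the original exchange): any way of pairing the 180 interchangeable mental tokens one-to-one with the 180 physical stones is an equally valid grounding, so there are 180! of them.

```python
import math

# With 180 indistinguishable stones, any bijection between the agent's 180
# interchangeable tokens and the 180 physical stones is an equally valid
# grounding, so the number of groundings is 180 factorial.
num_stones = 180
num_groundings = math.factorial(num_stones)

print(len(str(num_groundings)))                # 330 -- the count has 330 digits
print(math.floor(math.log10(num_groundings)))  # 329 -- i.e. roughly 10^329
```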
Also, as the above example demonstrates, probably all humans have multiple groundings for their knowledge, and therefore lack consciousness according to your criterion!
What you’re saying is essentially correct, but I didn’t deal with it in the short version. I haven’t worked out how to incorporate this into the math. It may change the results drastically.
In the particular case of the go stones, you have this difficulty only because you’re using an extensional grounding rather than an intensional grounding. I didn’t want to get into the extensional/intensional debate, but my opinion is that extensional grounding is simply wrong. And I don’t think I have enough space to get into the issue of representations of indistinguishable objects.
Can you explain this?
An extensional grounding is one in which things in the representation map one-to-one to things in the world. This doesn’t work for intelligent agents. They often have inaccurate information about the world: they may not know that an object exists, may think one object is really two different things, may think two different objects are really one, or may be unable to distinguish between objects at all (such as Go stones). They can also reason about hypothetical entities, which have no referent in the world to map to.
In an intensional grounding, you can’t have separate representations for different extensional things (like Go stones) that you think about in exactly the same way. Some people claim that such objects are distinguished only contextually, so that you can have a representation of “the stone at G7” or “the leftmost stone”, but these all collapse back into a single mental object when you put the stones back in the bowl.
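To make the distinction concrete, here is a toy sketch of my own (all names hypothetical, not anything from the original discussion): an extensional representation keeps one token per physical stone, while an intensional one keeps a single description that every indistinguishable stone satisfies, plus contextual descriptions that apply only while their context holds.

```python
# Toy illustration (my own, hypothetical names throughout) of the two kinds
# of grounding for a bowl of 180 indistinguishable white Go stones.

# Extensional: one mental token per physical stone, mapped 1-1 to the world.
# The agent must carry 180 distinct tokens it cannot actually tell apart,
# which is exactly what produces the 180! equivalent groundings.
extensional = {f"stone_{i}": f"physical stone #{i}" for i in range(180)}

# Intensional: a single description that every stone in the bowl satisfies,
# plus context-bound descriptions ("the stone at G7") that pick out a stone
# only while that context exists. Put the stone back in the bowl and the
# contextual description no longer applies; only the shared one remains.
intensional = {
    "a white stone in my bowl": {"color": "white", "count": 180},
    "the stone at G7": {"color": "white", "location": "G7"},  # contextual
}

# Under the intensional scheme there is nothing to permute: indistinguishable
# stones share one representation, so the explosion of groundings never arises.
print(len(extensional), "extensional tokens;", len(intensional), "intensional descriptions")
```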