It is impossible to program this in, or in any way assemble such familiarity.
Familiarity with the world is just a certain pattern of bits inside the AI’s hard drive. Any pattern of bits can, in principle, be programmed into the AI. Doing this may be difficult, but if we’re talking about experience of the world, you could simply copy that experience from a human brain (assuming the technology, etc.).
Well, we could only recognize these bits as thoughts pertaining to a shared world by actually sharing that world. So even if we try to program familiarity with the world into the machine, it could only ‘count’, for the purposes of our recognizing its thoughts, once the AI has spent time operating in our world. The upshot is that nothing can come off the assembly line as a thinker. Thinking is something it has to develop. This places no restrictions on how fast it could develop intelligence, only on the extent to which we can assemble intelligence before experience.
I don’t see how this could be right. Suppose I have an AI that’s spent a long time being and thinking in the world. It’s been trained. Next, I copy it seven thousand times. We can copy software precisely, so the copies will be indistinguishable from the original, and therefore qualify as thinking. But they will also be factory fresh.
You might want to say “a new copy of an old AI is an old AI”. But there are lots of tweaks we can make to a program+data that will differentiate it, in small ways that don’t materially affect its behavior. Would that not qualify as a new AI?
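To make the copying-and-tweaking point concrete, here is a minimal sketch in Python. It assumes the trained AI’s full state can be serialized to a single file; the filename trained_ai.bin and the assumption that the loader ignores trailing bytes are both hypothetical, introduced only for illustration.

```python
import hashlib
import shutil

def digest(path: str) -> str:
    """Return the SHA-256 of a file's raw bytes, as a proxy for bit-level identity."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# 1. Bit-exact duplication: the copy is indistinguishable from the original.
shutil.copyfile("trained_ai.bin", "copy_0001.bin")
assert digest("trained_ai.bin") == digest("copy_0001.bin")

# 2. A behaviorally inert tweak: append a metadata note to the copy.
#    Assumption (hypothetical): the loader ignores trailing bytes, so
#    behavior is unchanged, yet the file is no longer bit-identical.
with open("copy_0001.bin", "ab") as f:
    f.write(b"\x00note: copy 1 of 7000")
assert digest("trained_ai.bin") != digest("copy_0001.bin")
```

On these assumptions, the tweaked copy behaves exactly like the trained original while differing from it bit for bit, which is all the question above requires.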
That’s a good argument. While I don’t want to say that ‘a new copy of an old AI is an old AI’, I do think I should say that your only strong evidence for the intelligence of your copies would be their resemblance to your educated original. You’d have to see them operating to see them as intelligent. And I take it that it’s a corollary of the idea of uploading one’s neural activity that the ‘copy’ isn’t a new being with no experiences or education.
We know these thoughts pertain to a shared world before we program them into the AI because of where we got them from: a human brain that already shared that world.