Focus on whether the AGI is intelligent, not whether it is recognizable as intelligent.
I take it that the latter question settles the former, in the sense that it would be empirically meaningless to talk about intelligence over and above our ability to recognize it as such.
but perhaps by seeing it successfully rearrange the universe, assuming we survive the process.
Is this a sign of intelligence? It seems to me we could easily imagine (or discover, and perhaps we have already discovered) something that has rearranged our world without being intelligent. Ice ages, for example.
I take it that the latter question settles the former, in the sense that it would be empirically meaningless to talk about intelligence over and above our ability to recognize it as such.
The point is that your particular example of how to recognize intelligence is not exhaustive, and that you are just opening yourself up to confusion by introducing an unnecessary layer of indirection.
Is this a sign of intelligence?
It is certainly positive evidence for intelligence. It is very strong evidence if the universe is rearranged in a manner that maximizes a simple utility function.
But if we had a proto-AGI ready to activate, one we could predict would rearrange the universe in a particular way, would considerations of whether it is “intelligent” have any bearing on whether we should turn it on, once we know what it would do? (Though we would have used an understanding of its intelligence to predict what it would do. The question is: does it matter whether the processes so analyzed are really “intelligence”?)
(Though we would have used an understanding of its intelligence to predict what it would do. The question is: does it matter whether the processes so analyzed are really “intelligence”?)
It matters for the purposes of my argument, but not for yours. So point taken. I’m exclusively discussing real intelligence, that is, something we can recognize as such. An AI such as you describe would seem to me to be a more powerful version of something that exists presently, however.
It matters for the purposes of my argument, but not for yours.
Given that our arguments are meant to describe the same reality, it should matter the same for both of them. How is your notion of “real intelligence” actually important?
Well, I’m assuming the project of FAI is to produce an artificial person who is ethical. If the project is described in weaker terms, say that of creating a machine that behaves in some predictable way, then my argument may just not be relevant.
That assumption is incorrect.
Ah! This is the article I needed to read; thanks for pointing me to it.