Why does AI design need to have anything to do with the brain? (Third Alternative: ab initio development based on a formal normative theory of general intelligence, not a descriptive theory of human intelligence, comprehensible even to us, to say nothing of itself once it gets smart enough.)
(Edit: Also, it’s a huge leap from “no one is coming up with simple theories of the brain yet” to “we may well never understand intelligence”.)
A specific AI design need be nothing like the design of the brain. However, the brain is the only object we know of in mind space, so our difficulty in understanding it is evidence, albeit very weak evidence, that we may have difficulty understanding minds in general.
We might also expect this to be a special case, since we are trying to understand methods of understanding, which makes the exercise somewhat self-referential.
If you read my comment, you'll see I only raised it as a possibility, something to try to estimate the probability of, rather than necessarily the most likely case.
What probability would you assign to this scenario, and why?
There might be formal proofs, but they would probably depend on how you define things like understanding. I've been trying to think of mathematical formalisms to explore this question, but I haven't come up with a satisfactory one yet.
Have you looked at AIXI?
It is trivial to show that one instance of AIXI can't comprehend another, if by "comprehend" you mean form an accurate model.
AIXI assumes its environment is computable, but is itself incomputable. So if one AIXI comes across another, it won't be able to form a true model of it.
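For reference, here is the standard expectimax formulation of AIXI (following Hutter; U is a universal monotone Turing machine, ℓ(q) is the length of program q, and the notation is a rough paraphrase rather than anything from this thread):

$$
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big(r_k + \cdots + r_m\big) \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
$$

The inner sum ranges only over programs q for U, i.e. over computable environments. Since AIXI itself is incomputable, no program in that sum can exactly reproduce another AIXI, which is the precise sense in which one AIXI cannot form a true model of another.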
However, I am not sure how much weight this argument carries, since we expect any intelligence we could actually build to be computable.