Is there some kind of timescale assumption you are making? Atomic vapor has proven that it can form human-level intelligence, and human intelligence has shown that it can create smarter human intelligence. Creating an intelligence that runs on radically different hardware on a short timeframe is the only possibility that hasn’t already been proven.
Yes, I am making a timescale assumption. The thing is, the required timescale might be huge: for all we know, much bigger than the age of the universe.
Atomic vapor might have cheated. Imagine that evolution had an a priori minuscule probability of creating human-level intelligence. Of course the probability cannot be literally 0: even apes will type Shakespeare with some probability. Now, assuming the universe is infinite (e.g. in an eternal inflation scenario), human-level intelligence still appears in an infinite number of places with probability 1. We happen to be in one of these places courtesy of the anthropic principle.
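For a sense of just how small "some probability" can be while remaining nonzero, here is a back-of-envelope sketch; the keyboard size and text length are illustrative numbers I am assuming, not anything from the thread:

```python
import math

# Assumed, illustrative figures (not from the comment): a ~30-key typewriter
# and roughly 130,000 characters for the full text of Hamlet.
ALPHABET_SIZE = 30
TEXT_LENGTH = 130_000

# The per-attempt probability of typing the exact text uniformly at random
# is ALPHABET_SIZE ** -TEXT_LENGTH; we work with its base-10 logarithm.
log10_p = -TEXT_LENGTH * math.log10(ALPHABET_SIZE)
print(f"log10(per-attempt probability) = {log10_p:,.0f}")
# Prints about -192,000: astronomically small, yet strictly positive, which
# is all the infinite-universe argument needs.
```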
In other words, there might be a complexity-theoretic barrier to creating human-level intelligence. That is, it is theoretically possible, but impossible with a realistic amount of computing resources in a "short" time span, similarly to exactly solving the traveling salesman problem on some random graph with 10^14 vertices.
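To give a feel for the scale of that analogy, a rough sketch (the (n - 1)!/2 brute-force tour count is the standard combinatorial figure; the script and framing below are mine, not the author's):

```python
import math

# Exhaustive TSP on a complete graph with n vertices must consider about
# (n - 1)! / 2 distinct tours.
n = 10**14

# log10(n!) via the log-gamma function, which stays accurate for huge n.
log10_tours = math.lgamma(n + 1) / math.log(10)
print(f"log10(number of tours) is about {log10_tours:.3e}")
# About 1.36e15: the tour count has over a quadrillion digits, which is the
# sense in which something can be "theoretically possible" yet hopeless
# within any realistic computing budget.
```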
Would an AI using its own source code to write a better AI also not qualify?
Qualify for what? I’m saying that we don’t know whether any of the following are within human ability:
Creating a human-level AI without "stealing" the design of H. sapiens
Creating a far-superhuman AI by any method
The process of "creating" is allowed to involve writing an AI which writes a better AI, and so on.