Assuming the worst case on the algorithmic side, a standstill, the computational cost of the problem, even one arising from a combinatorial explosion, remains constant. The gap can only narrow. That makes it a question of how many doubling cycles it would take to close it. And we're not necessarily talking about desktop computers here (disregarding their goal predictions).
Exponential growth with such a short doubling time, aimed at some unknown goal threshold, is enough to make any provably optimal approach work eventually, if it continues.
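For concreteness, here is a minimal sketch of that arithmetic, with entirely made-up numbers for both the gap and the doubling time: a constant gap of a factor G closes after roughly log2(G) doubling cycles.

```python
# Illustrative sketch only: how many hardware doublings close a fixed
# computational gap, assuming the algorithmic cost stands completely still.
# The gap factor and doubling time below are hypothetical placeholders.
import math

gap_factor = 1e30          # hypothetical: required ops / currently feasible ops
doubling_time_years = 1.5  # hypothetical Moore's-law-style doubling period

doublings_needed = math.ceil(math.log2(gap_factor))
years_needed = doublings_needed * doubling_time_years

print(f"gap of 1e30 closes after {doublings_needed} doublings "
      f"(~{years_needed:.0f} years at this rate)")
# prints: gap of 1e30 closes after 100 doublings (~150 years at this rate)
```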
There is probably not enough computational power in the entire visible universe (assuming maximal theoretical efficiency) to power a reasonable AIXI-like algorithm. A few steps of combinatorial growth make mere exponential growth look like standing very, very still.
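To put rough numbers behind "standing very, very still", here is an illustrative comparison using generic stand-in cost functions (not AIXI's actual cost): each extra step of problem size demands a fixed number of hardware doublings under exponential growth, but an ever-growing number under combinatorial growth.

```python
# Illustrative sketch: extra hardware doublings demanded by one more step of
# problem size, under exponential (2^n) versus combinatorial (n!) cost growth.
# These are generic stand-in cost functions, not the actual cost of AIXI.
import math

def doublings_per_step(cost, n):
    # Hardware doublings needed to keep wall-clock time constant from n to n+1.
    return math.log2(cost(n + 1) / cost(n))

exponential = lambda n: 2.0 ** n
combinatorial = lambda n: math.factorial(n)

for n in (10, 20, 40, 80):
    print(f"n={n}: exponential needs {doublings_per_step(exponential, n):.1f} "
          f"doubling(s), combinatorial needs {doublings_per_step(combinatorial, n):.1f}")
# Exponential cost asks for exactly one doubling per step; combinatorial cost
# asks for log2(n+1) doublings per step and keeps getting hungrier as n grows.
```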
Changing the topic slightly: I always interpreted the Gödel argument as saying there weren't good reasons to expect faster algorithms, and thus no super-human AI.
As you implied, the argument that Gödelian issues prevent human-level intelligence is obviously disproved by the existence of actual humans.
Who would you re-interpret as making this argument?
It’s my own position—I’m not aware of anyone in the literature making this argument (I’m not exactly up on the literature).
Then why write "I...interpreted the Gödel argument" when you were not interpreting others, and had in mind an argument that is unrelated to Gödel?