Changing the topic slightly, I always interpreted the Gödel argument as saying there weren’t good reasons to expect faster algorithms, and thus no super-human AI.
As you implied, the argument that Gödelian issues prevent human-level intelligence is obviously disproved by the existence of actual humans.
Who would you re-interpret as making this argument?
It’s my own position—I’m not aware of anyone in the literature making this argument (I’m not exactly up on the literature).
Then why write “I...interpreted the Gödel argument” when you were not interpreting others, and had in mind an argument that is unrelated to Gödel?