No, but the complete lack of results does constitute reasonably strong evidence, even if it's not proof. Given that my prior on that is very low (seriously, why would we believe it's at all likely that an algorithm simple enough for a human to understand could produce an AGI?), my posterior probability is so low as to be utterly negligible.
Humans can understand some pretty complicated things. I'm not saying that the algorithm ought to fit on a napkin. I'm saying that with years of study one could understand every element of the algorithm, with the remaining black boxes being things that are inessential and can be understood by contract (e.g. transistor design, list sorting, floating-point number specifications).
Do you think a human can understand the algorithms used by the human brain to the same level you're assuming they can understand a silicon brain?
Quite likely not, since we're evolved. Humans have taken a distressingly long time to understand even FPGA-evolved addition gates.
Evolution is another one of those impersonal forces I’d consider a superhuman intelligence without much prodding. Again, myopic as hell, but it does good work—such good work, in fact, that considering it superhuman was essentially universal until the modern era.
On that note, I’d put very high odds on the first AGI being designed by an evolutionary algorithm of some sort—I simply don’t think humans can design one directly; we’ll need to conscript Azathoth to do another job like his last one.
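For concreteness, here's a toy sketch of the kind of loop I have in mind. The bit-string genome and the "count the 1-bits" fitness function are placeholders I've picked purely for illustration; nothing about this toy is meant to scale to AGI, it just shows the mutate-select-repeat structure:

```python
import random

GENOME_LEN = 64      # toy genome: a fixed-length bit string
POP_SIZE = 100
MUTATION_RATE = 0.01
GENERATIONS = 200

def fitness(genome):
    # Placeholder objective: number of 1-bits ("one-max").
    # A real search would score behaviour in some environment instead.
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with small probability.
    return [b ^ 1 if random.random() < MUTATION_RATE else b for b in genome]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Keep the fitter half as parents (truncation selection).
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    # Refill the population with mutated offspring of random parent pairs.
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)))
```

The point of handing the job to Azathoth is that nothing in that loop requires the designer to understand what the winning genomes are actually doing, which is exactly why I'd expect the result to be as opaque as the evolved FPGA circuits above.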