It depends on what level of understanding you’re referring to. I mean, in a sense we understand the human brain extremely well. We know when, why, and how neurons fire, but that level of understanding is completely worthless when it comes time to predict how someone is going to behave. That level of understanding we’ll certainly have for AIs. I just don’t consider that sufficient to really say that we understand the AI.
We don’t have that degree of understanding of the human brain, no. Sure, we know the physics, but we don’t even know the initial conditions.
There are several layers of abstraction one could cram between our knowledge and conscious thoughts.
No, what I’m referring to is an algorithm that you completely grok, but whose execution is just too big. A bit like how you could completely specify the solution to the Towers of Hanoi puzzle with 64 disks, but actually carrying it out is simply beyond your powers.
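To make that concrete, here’s a minimal sketch (the function name and the tiny demo are mine; the 2^64 - 1 move count is just the standard figure for 64 disks): the whole procedure fits in a few lines you can grok completely, yet executing it at that size is hopeless.

```python
# A sketch of an algorithm that is fully graspable but too big to execute:
# the complete Towers of Hanoi solution in a few lines. (The hanoi function
# and the demo values below are illustrative, not from the discussion.)

def hanoi(n, source, target, spare):
    """Yield every move needed to shift n disks from source to target."""
    if n == 0:
        return
    yield from hanoi(n - 1, source, spare, target)   # clear the way
    yield (source, target)                           # move the largest disk
    yield from hanoi(n - 1, spare, target, source)   # rebuild on top of it

# Small instances are trivial to enumerate in full...
print(list(hanoi(3, "A", "C", "B")))   # 7 moves
# ...but 64 disks require 2**64 - 1 moves, far beyond anyone's power to perform.
print(2**64 - 1)                       # 18446744073709551615
```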
It’s theoretically possible that an AI could result from that, but it seems vanishingly unlikely to me. I don’t think an AI is going to come from someone hacking together an intelligence in their basement—if it were simple enough for a single human to grok, 50 years of AI research probably would have come up with it already. Simple algorithms can produce complex results, yes, but they very rarely solve complex problems.
We have hardly saturated the likely parts of the space of human-comprehensible algorithms, even with our search power turned way up.
No, but the complete lack of results does constitute reasonably strong evidence, even if it’s not proof. Given that my prior on that is very low (seriously, why would we believe that it’s at all likely an algorithm so simple a human can understand it could produce an AGI?), my posterior probability is so low as to be utterly negligible.
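For what it’s worth, here is roughly the shape of that update as a sketch, with purely illustrative numbers (none of them are claimed anywhere in this thread):

```python
# Illustrative Bayesian update: a low prior on "a human-grokkable AGI algorithm
# exists," combined with decades of null results that are somewhat more likely
# if no such algorithm exists, pushes the posterior even lower. All numbers are
# made up for the sake of the arithmetic.

prior = 1e-3                   # hypothetical prior on the simple-algorithm hypothesis
p_null_if_true = 0.2           # chance the search finds nothing even if it exists
p_null_if_false = 0.99         # chance the search finds nothing if it doesn't exist

posterior = (p_null_if_true * prior) / (
    p_null_if_true * prior + p_null_if_false * (1 - prior)
)
print(posterior)               # ~2e-4, i.e. noticeably below the already-low prior
```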
Humans can understand some pretty complicated things. I’m not saying that the algorithm ought to fit on a napkin. I’m saying that with years of study one can understand every element of the algorithm, with the remaining black boxes being things that are inessential and can be understood by contract (e.g. transistor design, list sorting, floating-point number specifications).
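“Understood by contract” in roughly the sense of this toy sketch (the median example is something I’m inventing here): you rely on the guarantee the black box makes, not on how it’s built.

```python
# A minimal illustration of understanding a black box "by contract": you don't
# need to know how the library's sort is implemented, only what it guarantees.
# (sorted is a real Python built-in; the median example itself is hypothetical.)

from typing import List

def median(xs: List[float]) -> float:
    # Contract relied on: sorted() returns the elements in non-decreasing order.
    # Whether it is Timsort, mergesort, or something else entirely is an
    # inessential black box for understanding this function.
    ordered = sorted(xs)
    n = len(ordered)
    mid = n // 2
    return ordered[mid] if n % 2 else (ordered[mid - 1] + ordered[mid]) / 2

print(median([3.0, 1.0, 2.0]))        # 2.0
print(median([4.0, 1.0, 3.0, 2.0]))   # 2.5
```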
Do you think a human can understand the algorithms used by the human brain to the same level you’re assuming they could understand a silicon brain?
Quite likely not, since we’re evolved. Humans have taken a distressingly long time to understand even FPGA-evolved addition gates.
Evolution is another one of those impersonal forces I’d consider a superhuman intelligence without much prodding. Again, myopic as hell, but it does good work—such good work, in fact, that considering it superhuman was essentially universal until the modern era.
On that note, I’d put very high odds on the first AGI being designed by an evolutionary algorithm of some sort—I simply don’t think humans can design one directly, we need to conscript Azathoth to do another job like his last one.
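To be clear about the kind of process I mean, here’s a toy sketch of an evolutionary search (the bitstring genome and the one-max fitness function are placeholders, nothing like an actual AGI objective):

```python
# Toy evolutionary algorithm: random variation plus selection on a fitness
# function. Everything here (genome encoding, fitness target, parameters) is
# invented for illustration, not a recipe for AGI.

import random

GENOME_LEN = 20
POP_SIZE = 50
GENERATIONS = 100

def fitness(genome):
    # Stand-in objective: count of 1-bits. A real application would score
    # candidate circuits, programs, or network weights instead.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit independently with a small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]                                      # selection
    population = parents + [mutate(random.choice(parents)) for _ in parents]   # variation

print(max(fitness(g) for g in population))  # climbs toward GENOME_LEN
```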