But more generally, AI should use whatever works. If those happen to be “scruffy” methods, then so be it.
This seems like a bizarre statement if we care about knowable AI safety. Near as I can tell, you just called for the rapid creation of AGI that we can’t prove non-genocidal.
I don’t believe Houshalter was referring to proving Friendliness (or something along those lines); my impression is that he was talking about implementing an AI, in which case neural networks, while “scruffy”, should be considered a legitimate approach. (Of course, the “scruffiness” of NNs could very well affect certain aspects of Friendliness research; my relatively uninformed impression is that it’s very difficult to prove results about NNs.)
If you can prove anything interesting about a system, that system is too simple to be interesting. Logic can’t handle uncertainty, and doesn’t scale at all to describing/modelling systems as complex as societies, brains, AIs, etc.
AIXI is simple, and if our universe happened to allow Turing machines to calculate endlessly behind Cartesian barriers, it could be interesting in the sense of actually working.
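For reference, “simple” here is fairly literal: Hutter’s AIXI agent fits in one line. A sketch in standard notation (where $m$ is the horizon, $U$ a universal monotone Turing machine, $\ell(q)$ the length of program $q$, and $o_i r_i$ the observation–reward pairs):

```latex
a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
\big( r_t + \cdots + r_m \big)
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The catch, of course, is the final sum: it ranges over all programs consistent with the history, which is what makes AIXI incomputable and confines it to the cartesian setting mentioned above.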
We have wildly different definitions of interesting, at least in the context of my original statement. :)