I don’t think general intelligence will look anything like CAIS on the current path. For example, Demis Hassabis was recently on Lex Fridman’s podcast, where he said that ML researchers have consistently found end-to-end training to work better, because the machine is better than humans at figuring out the constraints of any given problem. I think the dangerous kind of general intelligence looks like a single large sparse model trained with reinforcement learning across many domains, not a bunch of separate models stitched together by human engineers in an ad-hoc way. The latter doesn’t even sound plausibly buildable. (Maybe I’m misunderstanding CAIS, though?)
Drexler wrote his QNR paper in part to address this issue. I’m working on a blog post about QNR.