Wait, you think your prosaic story doesn’t involve blind search over a super-broad space of models??
No, not prosaic, that particular comment was referring to the “brain-like AGI” story in my head...
Like, I tend to emphasize the overlap between my brain-like AGI story and prosaic AI, and there is plenty of overlap: they both involve “neural nets”, (something like) gradient descent, RL, etc.
By contrast, I haven’t written quite as much about the ways that my (current) brain-like AGI story is non-prosaic. And a big one is that I’m thinking that there would be a hardcoded (by humans) inference algorithm that looks like (some more complicated cousin of) PGM belief propagation.
In that case, yes there’s a search over a model space, because we need to find the (more complicated cousin of a) PGM world-model. But I don’t think that model space affords the same opportunities for mischief that you would get in, say, a 100-layer DNN. Not having thought about it too hard… :-P
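[For concreteness, here is a minimal sketch of what ordinary “PGM belief propagation” looks like — sum-product message passing on a chain-structured pairwise model. This is just textbook BP to illustrate the kind of hardcoded inference algorithm being discussed, not the “more complicated cousin” gestured at above; the potentials are made-up numbers.]

```python
# Sum-product belief propagation on a 3-variable chain:
#   p(x1, x2, x3) ∝ phi1(x1) * psi12(x1, x2) * psi23(x2, x3)
# The inference algorithm (message passing) is fixed by hand;
# only the potentials (the "world-model") would be learned.
import numpy as np

phi1 = np.array([0.9, 0.1])             # unary potential on x1
psi12 = np.array([[0.8, 0.2],
                  [0.2, 0.8]])          # pairwise potential (x1, x2)
psi23 = np.array([[0.7, 0.3],
                  [0.3, 0.7]])          # pairwise potential (x2, x3)

# Forward messages along the chain
m12 = phi1 @ psi12                      # message from x1 to x2
m23 = m12 @ psi23                       # message from x2 to x3

# Backward messages
m32 = psi23 @ np.ones(2)                # message from x3 to x2
m21 = psi12 @ m32                       # message from x2 to x1

# Beliefs (unnormalized marginals), then normalize
b1 = phi1 * m21
b2 = m12 * m32
b3 = m23
p1, p2, p3 = (b / b.sum() for b in (b1, b2, b3))

# Sanity check: BP on a tree is exact, so the beliefs must match
# brute-force enumeration of the joint distribution.
joint = phi1[:, None, None] * psi12[:, :, None] * psi23[None, :, :]
joint /= joint.sum()
assert np.allclose(p1, joint.sum(axis=(1, 2)))
assert np.allclose(p2, joint.sum(axis=(0, 2)))
assert np.allclose(p3, joint.sum(axis=(0, 1)))
```

The point of the sketch: the message-passing schedule is a transparent, human-written loop over known equations, so the “search” happens only in the space of potentials/graph structure, not over an opaque learned inference procedure.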
Ah, ok. It sounds like I have been systematically misperceiving you in this respect.
I would have been much more interested in your posts in the past if you had emphasized this aspect more ;p But perhaps you held back on that to avoid contributing to capabilities research.
Yeah, this is a very important question!