The following is copy-pasta’d from an email sent to the MIRIx workshops mailing list upon receiving this link:
By the way, I think we should be making a much more concerted attack on Naturalized Induction. I’ve given it some thought from the epistemology angle (and I’ve been trying to investigate the kind of algorithmic structures that can do it, but never have the time), and the thing about naturalized induction is that it contains the problem of building an agent that can model the same phenomenon at multiple levels of abstraction, and soundly learn about the phenomenon at all levels of abstraction based on what it observes at any one of them.
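To make the multi-level idea concrete, here is a toy sketch (entirely my own illustration, not any existing proposal; all names are hypothetical): one phenomenon represented at a fine level and a coarse level linked by a coarse-graining map, where a Bayesian update on a coarse-level observation soundly constrains beliefs at the fine level.

```python
# Toy sketch of cross-level learning (hypothetical example): a hidden
# fine-grained state, a coarse-graining map, and a Bayesian update on a
# coarse observation that propagates down to fine-level beliefs.
from itertools import product

# Fine level: a hidden bit-vector of length 3 (8 hypotheses, uniform prior).
fine_hypotheses = list(product([0, 1], repeat=3))
belief = {h: 1.0 / len(fine_hypotheses) for h in fine_hypotheses}

def coarse(h):
    """Coarse-graining map: only the number of 1-bits is observable."""
    return sum(h)

def update_on_coarse_observation(belief, observed_count):
    """Bayesian update on a coarse-level observation: zero out the fine
    hypotheses inconsistent with it, then renormalize."""
    posterior = {h: (p if coarse(h) == observed_count else 0.0)
                 for h, p in belief.items()}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# Observing "two bits set" at the coarse level leaves exactly the three
# fine-level hypotheses with two 1-bits, each at probability 1/3.
belief = update_on_coarse_observation(belief, 2)
survivors = [h for h, p in belief.items() if p > 0]
```

The point of the toy is only that the abstraction map is what licenses learning at one level from evidence at another; the hard open problem is doing this when the levels and the map themselves have to be learned.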
If we had an agent structure that could do this, it would get a whole lot easier to specify things like a Friendly Utility Function (aka: the Hard Problem of FAI) in terms of abstractions we do know how to write down, such as, “Do What I Mean.” And by “a whole lot easier”, I mean it would become possible to even begin describing such things at all, rather than treating the agent’s reasoning processes as a black box subject to reinforcement learning or probabilistic value learning.
I should note it would also be a nice “prestige accomplishment”, since it actually consists in AI/Cog-Sci research rather than pure mathematics.
“Naturalized induction” is actually a mathematical way of saying “AGI”. And requesting that there be MIRI(x) workshops on naturalized induction is requesting that MIRI include actual AGI design in its scope.
Which, for the record, I agree with. I have never donated to SIAI/MIRI, although I almost did back in the day when they were funding OpenCog (I merely lacked money at the time). I will not contribute to MIRI—money or time—until they do bring into scope public discussion of workable AGI designs.
“Naturalized induction” is actually a mathematical way of saying “AGI”.
Well, firstly, it’s not a complete agent in the slightest: it lacks a decision theory and a utility function ;-).
And requesting that there be MIRI(x) workshops on naturalized induction is requesting that MIRI include actual AGI design in its scope.
MIRI already includes naturalized induction in its scope, quite explicitly. There’s just not a lot of discussion because nobody seems to have come up with a very good attack on the problem yet.