“Naturalized induction” is actually a mathematical way of saying “AGI”. And requesting that there be MIRI(x) workshops on naturalized induction is requesting that MIRI include actual AGI design in its scope.
Which, for the record, I agree with. I have never donated to SIAI/MIRI, although I almost did back in the day when they were funding OpenCog (I merely lacked money at the time). I will not contribute to MIRI—money or time—until they do bring into scope public discussion of workable AGI designs.
“Naturalized induction” is actually a mathematical way of saying “AGI”.
Well, firstly, naturalized induction is not a complete agent in the slightest: it lacks a decision theory and a utility function ;-).
And requesting that there be MIRI(x) workshops on naturalized induction is requesting that MIRI include actual AGI design in its scope.
MIRI already includes naturalized induction in its scope, quite explicitly. There’s just not a lot of discussion because nobody seems to have come up with a very good attack on the problem yet.