I wouldn’t say that there’s someone out there directly solving FAI problems without having explicitly intended to do so. I would say there’s a lot we can build on.
Keep in mind, I’ve seen enough of a sample of Eld Science being stupid to understand how you can have a very low prior on Eld Science figuring out anything relevant. But lacking more problem guides from you on the delta between plain AI problems and FAI problems, we go on what we can.
So far, two finds: one paper on utility learning that relies on a supervised-learning methodology (pairwise comparison data) rather than a de facto reinforcement-learning methodology (which can and will go wrong in well-known ways when put into an AGI), and one paper on progress toward induction algorithms that operate at multiple levels of abstraction, which could be useful for naturalized induction if someone put more thought and expertise into it.
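To make the first distinction concrete, here's a minimal sketch of what "utility learning from pairwise comparisons" looks like as a supervised problem: fit a utility function so that the item a human preferred in each pair scores higher, via a Bradley-Terry-style logistic loss. This is my own illustrative toy, not the paper's actual method; the function names, the linear utility form, and the synthetic data are all assumptions for the sake of the example.

```python
import numpy as np

def fit_utility(X_preferred, X_rejected, n_iters=1000, lr=0.1):
    """Learn a linear utility u(x) = w @ x from supervised pairwise
    comparisons: row i of X_preferred was judged better than row i
    of X_rejected. (Illustrative sketch, not any published algorithm.)"""
    n_features = X_preferred.shape[1]
    w = np.zeros(n_features)
    for _ in range(n_iters):
        # Model: P(preferred beats rejected) = sigmoid(u(preferred) - u(rejected))
        diff = X_preferred @ w - X_rejected @ w
        p = 1.0 / (1.0 + np.exp(-diff))
        # Gradient of the negative log-likelihood with respect to w
        grad = -((1.0 - p)[:, None] * (X_preferred - X_rejected)).mean(axis=0)
        w -= lr * grad
    return w

# Toy usage: generate preferences from a hidden utility and recover it.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
X_a = rng.normal(size=(200, 3))
X_b = rng.normal(size=(200, 3))
better = (X_a @ true_w) > (X_b @ true_w)
X_pref = np.where(better[:, None], X_a, X_b)
X_rej = np.where(better[:, None], X_b, X_a)
w_hat = fit_utility(X_pref, X_rej)
print(w_hat)  # should roughly recover the direction of true_w
```

The point of the contrast is that the training signal here is a fixed dataset of human judgments, not a reward channel the system itself can influence, which is where the well-known reinforcement-learning failure modes come in.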
That’s only two, but I’m a comparative beginner at this stuff and Eld Science isn’t very good at focusing on our problems, so I expect that there’s actually more to discover and I’m just limited by the time and background knowledge needed to do the literature searches.
By the way, I’m already trying to follow the semi-official MIRI curriculum, but if you could actually write out some material on the specific deltas where FAI work departs from the preexisting knowledge base of academic science, that would be really helpful.