Well, some time ago I moved past the “considering” stage and started writing, but then gave up. After what Luke said just now, I think I’ll give it another try. I gave a lot of weight to Wei’s opinion that publishing UDT might be dangerous, but now he seems to think that the really dangerous topic is logical uncertainty, and my mathy ideas serve as a nice distraction from that :-)
Wei? Your response?

I’m not sure that my thoughts on this topic should be taken that seriously, since I’m quite confused, uncertain, and conflicted (a part of me just wants to see these intellectual puzzles solved ASAP), but since you ask… My last thoughts on this topic were:
It’s the social consequences that I’m most unsure about. It seems like if SIAI can keep “ownership” of the decision theory ideas and use them to preach AI risk, then that would be beneficial, but it could also be the case that the ideas take on a life of their own and we just end up having more people go into decision theory because they see it as a fruitful place to get interesting technical results.
It seems to me at this point that the most likely results of publishing UDT are 1) it gets ignored by the academic mainstream, or 2) it becomes part of the academic mainstream but detached from the AI risk idea. “Sure, those SIAI guys once came up with the (incredibly obvious) idea that decisions are logical facts, but what have they done lately? Hey, let’s try this trick to see if we can make our (UDT-inspired) AI do something interesting.” But I don’t claim to have particularly good intuitions about how academia works, so if others think that SIAI can get a lot of traction from publishing UDT, they might well be right.
Also, to clarify, my private comment to cousin_it was meant to be a joke. I don’t think the fact that publishing papers about ADT (i.e., proof-based versions of UDT) will distract some people from UDT (and its emphasis on logical uncertainty) is a very important consideration.
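A minimal sketch of what “proof-based” means here, i.e. the “decisions are logical facts” idea behind ADT: the agent treats “I output action a” as a logical statement and picks the action with the best provable utility guarantee. This is only a toy illustration under assumed simplifications; the proof search over a formal theory is replaced by a hand-written table, and the function names and Newcomb-style payoffs below are hypothetical, not taken from any ADT/UDT write-up.

```python
# Toy sketch of a proof-based ("decisions are logical facts") decision rule.
# A real ADT-style agent would search for proofs, in some formal theory,
# of statements like "agent() = a  ->  utility() >= u". Here that proof
# search is replaced by a hand-written table of "provable" implications.

# Hypothetical guarantees a theory might prove for a Newcomb-like toy world.
PROVABLE_IMPLICATIONS = {
    "one-box": [1_000_000],  # provable lower bounds on utility given this action
    "two-box": [1_000],
}

def best_provable_guarantee(action):
    """Largest u such that 'agent() = action -> utility() >= u' is (stipulated) provable."""
    bounds = PROVABLE_IMPLICATIONS.get(action, [])
    return max(bounds, default=float("-inf"))

def agent(actions=("one-box", "two-box")):
    # The decision rule: take the action whose provable utility guarantee is highest.
    return max(actions, key=best_provable_guarantee)

if __name__ == "__main__":
    print(agent())  # -> "one-box" in this toy setup
```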
Wei’s well-known outside LW, so if he publicly confirmed that logical uncertainty is dangerous, that might be dangerous. I’m not sure what the dangers could be of knowing that I don’t know everything I know, although thinking about that for too long would probably make me more Will Newsome-ish.