When I tried to write up some decision theory results as a full-length article, it felt really pointless and unpleasant. I couldn’t get through even a single paragraph without thinking how much I hate it.
One problem is that even though I enjoy coming up with short and sweet proofs, I don’t know in advance which parts will trip up readers and require clarification, and feel very averse to guessing. Here’s a recent example. Maybe the right way is to write discussion posts first, then debug the presentation based on reader comments?
But the bigger problem is that academic articles seem to require a lot of fluff that doesn’t add value. Moldbug called it “grant-related propaganda”, but I’m not sure grants are the main reason why people add fluff. Contrast a typical paper today with John Nash’s 1950 paper which doesn’t waste even a single word on explaining why the subject matter is relevant to anything. I’d be happy to write things up in the same style, but then journals will just reject my writings, won’t they?
No, because Yvain and Kaj and I will polish it and add the stuff you call “fluff” but I call “explanatory, clarifying, context-setting stuff.”
The only thing is that you and I would have to have enough conversations that I understand what you’re talking about, so I can fill in the inferential gaps and hold the reader’s hand through the explanation.
(This might be premature optimization, but:) I suspect this process would go a lot smoother if you could find someone in the Bay Area to act as a sort of on-site translator, ’cuz long distance back-and-forth is sometimes a hassle. Are there any active decision theory hotshots in the Bay Area?
Are you still considering it, since creating the ‘writeup’ thread on the list, or are you describing what preceded that?
Well, some time ago I moved past the “considering” stage and started writing, but then gave up. After what Luke said just now, I think I’ll give it another try. I gave a lot of weight to Wei’s opinion that publishing UDT might be dangerous, but now he seems to think that the really dangerous topic is logical uncertainty, and my mathy ideas serve as a nice distraction from that :-)

Wei? Your response?
I’m not sure that my thoughts on this topic should be taken that seriously, since I’m quite confused, uncertain, and conflicted (a part of me just wants to see these intellectual puzzles solved ASAP), but since you ask… My last thoughts on this topic were:
It’s the social consequences that I’m most unsure about. It seems like if SIAI can keep “ownership” over the decision theory ideas and use them to preach AI risk, then that would be beneficial, but it could also be the case that the ideas take on a life of their own and we just end up having more people go into decision theory because they see it as a fruitful place to get interesting technical results.
It seems to me at this point that the most likely results of publishing UDT are 1) it gets ignored by the academic mainstream, or 2) it becomes part of the academic mainstream but detached from the AI risk idea. “Sure, those SIAI guys once came up with the (incredibly obvious) idea that decisions are logical facts, but what have they done lately? Hey, let’s try this trick to see if we can make our (UDT-inspired) AI do something interesting.” But I don’t claim to have particularly good intuitions about how academia works, so if others think that SIAI can get a lot of traction from publishing UDT, they might well be right.
Also, to clarify, my private comment to cousin_it was meant to be a joke. I don’t think the fact that publishing papers about ADT (i.e., proof-based versions of UDT) will distract some people away from UDT (and its emphasis on logical uncertainty) is a very important consideration.
Wei’s well-known outside LW, so if he publicly confirmed that logical uncertainty is dangerous, that might be dangerous. I’m not sure what the dangers could be of knowing that I don’t know everything I know, although thinking about that for too long would probably make me more Will Newsome-ish.