It seems like 1, 3, and 4 could apply to lots of different kinds of research.
It looks like you’re interpreting this post as arguing for doing more decision theory research relative to other kinds of research. That’s not really my intention; as you note, it would require comparing decision theory research to other kinds of research, which I didn’t do. (I would be interested to know how I might have given this impression, so I can recalibrate my writing in the future to avoid such misunderstandings.) My aim in writing this post was more to explain why, given that I’m not optimistic that we can solve decision theory in a definitive way, I’m still interested in decision theory research.
Did you consider other ways to achieve them before you chose to work on decision theory?
No, but I have considered it afterwards, and have added to my research interests (such as directly attacking 1) as a result. (If you’re curious about how I got interested in decision theory originally, the linked post List of Problems That Motivated UDT should give a pretty good idea.)
Like, it feels to me like there are recognized open problems in AI safety, which plausibly have direct useful applications to AI safety if solved, and additionally are philosophical in nature, relate to important intellectual puzzles, and allow one to firm up the foundations of human rationality.
If we do compare decision theory to other philosophical problems relevant to AI safety (say, “how can we tell whether a physical system is having a positive or negative experience?”, which I’m also interested in, BTW), decision theory feels relatively more tractable to me, and less prone to the back-and-forth arguments between camps preferring different solutions that are common elsewhere in philosophy. Because decision theory is constrained by having to simultaneously solve so many problems, it’s easier to detect when clear progress has been made. (However, the lack of clear evidence of progress in decision theory in recent years could be considered an argument against this.)
If other people have different intuitions (and there’s no reason to think that they have especially bad intuitions), I definitely think they should pursue whatever problems/approaches seem most promising to them.
And I would guess there is also a fairly broad set of problems which don’t have direct relevance to AI safety that satisfy 1/3/4. For example, I find a lot of discussion of the replication crisis philosophically unsatisfying and have some thoughts on this—should I write these thoughts up on the grounds that they are useful for AI safety?
I’m not sure I understand this part. Are you saying there are problems that don’t have direct relevance to AI safety, but have indirect relevance via 1/3/4? If so, sure, you should write them up, depending on the amount of indirect relevance...
Decision theory could still be a great focus for you personally if it’s something you’re interested in, but if that’s why you chose it, that would be useful for others to know, I think.
As explained above, it’s not as simple as this, and I wasn’t prepared to give a full discussion of “should you choose to work on decision theory or something else” in this post.
Fair enough.