I found that revisiting formal logic/set theory forced more careful intuitions about decision-making, and learning category theory made it less scary to work with more complicated ideas. Learning topology helped with studying set theory and gave some insight into the process of coming up with new mathematical concepts. You’ve probably seen my reading list (all the stuff on it can be downloaded from Kad).
I can’t make a proper explicit argument that studying math is on a direct track to contributing to FAI research (particularly since UDT/ADT now look potentially less relevant than I thought before), but it looks like the best available option: it builds reasoning skills general enough that they could conceivably help, and I’m not aware of other kinds of knowledge that look potentially useful to a similar extent.
(On the other hand, I probably don’t pay enough attention to the skills I already had two years ago, which include a good background in programming and a basic background in machine learning.)
Expand?
I see no easy or convincing way of doing so right now. I’ll write up my ideas when/if they sufficiently mature, or, as is often the case, I’ll move on to a different line of investigation. Basically, morality is perceived through a collection of many diverse heuristics, and while a few well-understood heuristics can form the backbone of a tool for boosting an agent’s power, they won’t have foundational significance. The selection of which heuristics need to be explicitly understood should therefore be based on the leverage they provide, even if those heuristics are allowed to have some blind spots.