(8)
In light of the “Fixed Points” critique: a set of exercises that seem more useful, and more reflective of MIRI’s research, than the fixed-point exercises themselves. What I have in mind is taking some of the classic success stories of formalized philosophy (e.g. Turing machines, Kolmogorov complexity, Shannon information, Pearlian causality, etc., though this could also be done for reflective oracles and logical induction), introducing the problems they were meant to solve, and giving some stepping stones that guide one toward the intuitions and thoughts that (presumably) had to be developed to make the finished product. I get that this will be hard, but I think it can feasibly be done for some of the (mostly easier) concepts, and if done really well, it could even be a better way for people to learn those concepts than actually reading about them.
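For concreteness, here is a toy illustration of the kind of end product such an exercise might build toward, using Shannon information as the worked example (the Python sketch below is purely illustrative and not part of the proposal): the fuzzy question “how surprised should we be by a message from this source?” becomes a single well-defined quantity.

```python
import math

def shannon_entropy(probs):
    """Entropy H(p) = -sum_i p_i * log2(p_i), measured in bits.

    A toy endpoint of a "formalize surprise" exercise: the vague notion
    of how much information a source conveys becomes one number.
    """
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries 1 bit per flip; a heavily biased coin carries less.
print(shannon_entropy([0.5, 0.5]))   # 1.0
print(shannon_entropy([0.9, 0.1]))   # ~0.469
```

The exercises would aim to walk someone through the intuitions that make a definition like this feel forced rather than handed down.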
I think this would be an extremely useful exercise for multiple independent reasons:
- it’s directly attempting to teach skills which I do not currently know any reproducible way to teach/learn
- it involves looking at how breakthroughs happened historically, which is an independently useful meta-strategy
- it directly involves investigating the intuitions behind foundational ideas relevant to the theory of agency, and could easily expose alternative views/interpretations which are more useful (in some contexts) than the usual presentations
*begins drafting longer proposal*
Yeah, this is definitely more high-risk, high-reward than the others, and the fact that there are potentially some very substantial spillover effects if it succeeds makes me both excited and nervous about the concept. I’m thinking of Arbital as an example of “trying to solve way too many problems at once”, so I want to manage expectations and just try to make some exercises that inspire people to think about the art of mathematizing certain fuzzy philosophical concepts. (The running title is “Formalization Exercises”, but I’m not sure whether there’s a better pithy name that captures it.)
In any case, I appreciate the feedback, Mr. Entworth.
Oh no, not you too. It was bad enough with just Bena.
I think we can change your username to have capital letters if you want. ;)