The most common case is when you encounter a proposed ontology for a space and try to reconstruct the unparameterized space that the ontology is trying to parameterize, e.g. the ITN framework from EA. With normal mad libs you just need to identify the verbs and nouns. That can be a good place to start, just to warm up for a philosophical argument (you can also use a thesaurus to permute the argument and see what happens, like tabooing words but non-specific), but ultimately you're looking at any proposed structure and trying to guess the type that would satisfy that space if it were a blank to be filled in. So for each of Importance, Tractability, and Neglectedness, you might ask: what kinds of things in reality is that trying to carve out? How else might you capture some of the same things? Other examples where this comes up are the many ontologies proposed in Superintelligence (e.g. the speed, collective, and quality forms of superintelligence) or the slide from Christiano's talk at EAG: https://imgur.com/a/GI9g3FI
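One way to make the "blank to be filled in" concrete is to write it as a type. This is a loose sketch: the numbers are invented, and the multiplicative combination is just one common reading of ITN, not something the framework mandates. The point is that ITN is one element of a much larger space (any function from causes to priorities), and you can ask what other elements of that space would have done a similar job:

```python
from typing import Callable

# The unparameterized space: ANY scoring function over causes.
Cause = str
Priority = Callable[[Cause], float]

# Illustrative-only estimates; real ITN uses elicited judgments.
importance = {"cause_a": 0.9, "cause_b": 0.4}
tractability = {"cause_a": 0.2, "cause_b": 0.7}
neglectedness = {"cause_a": 0.8, "cause_b": 0.3}

def itn_score(cause: Cause) -> float:
    """ITN as one parameterization: three factors, combined multiplicatively."""
    return importance[cause] * tractability[cause] * neglectedness[cause]

def bottleneck_score(cause: Cause) -> float:
    """A different parameterization of the same blank: the weakest factor dominates."""
    return min(importance[cause], tractability[cause], neglectedness[cause])
```

Both functions inhabit the `Priority` type; asking "what else could fill this blank?" is asking what other inhabitants of that type exist and what each one's choice of parameters implicitly claims about the world.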
You can also ask: if this ontology turns out to be correct, what has to be true about the world for that to be so? Or conversely: which distinctions/choices are overdetermined and seem to 'fall out of' the choice of structure, and which seem arbitrary? Or do they seem arbitrary, yet you have a hard time thinking of what else could go there? (Promising to ponder!)
Authors are usually trying to be evocative with their naming, to point at the distinction they want to make, but if you've written things yourself you know this is hard and you're often not happy with your best effort at naming, just like variable naming in programming. (Actually, this suggests another interesting frame: think of words as variables instead of fixed references. Human languages aren't type-safe.)
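The analogy can be shown in two lines. A hypothetical sketch: the same token gets silently rebound to a different kind of thing mid-conversation, and nothing in the language flags it, so a reader may still be carrying the earlier binding:

```python
# A word behaves like a dynamically typed variable: the same token can be
# rebound to a different kind of thing without warning.
word = 0.8                      # "neglected" as a scalar score in a model
word = "few people work on it"  # "neglected" as an informal gloss

# A statically typed language would reject the rebinding at compile time;
# English never complains.
```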
A higher-order example, to show that this doesn't need to be applied only at the level of words: consider the entire LessWrong website and rationalsphere as a blank. What sort of constraint is being satisfied there? There are probably a few dimensions; one interesting one (to me) is that there is surprisingly little scholarly work documenting *methods* in philosophy and phenomenology, so there's lots of low-hanging fruit to talk about.
I think something like this is what led Rawls to be able to characterize reflective equilibrium. (It might also have been related to thinking about fixed points in math.)
> So for each of Importance, Tractability, Neglectedness, you might ask what kinds of things in reality are those trying to carve out? How else might you capture some of the same things?
Ok, can we take this example further? What specific things in reality ARE those trying to carve out? What does your thought process look like to find those things? Then, when you do find those things, what do you do with them and what specific insights does that help with?
I was thinking about why it wouldn't be easy to answer this without writing a long response, and I realized it's because the concept hinges a lot on something I haven't written up yet about types of uncertainty.
So, a simpler example for now, until I post that. Consider Bostrom's ontology of types of superintelligence: speed, collective, and quality. If we want more flexibility in thinking about this area, we can return to the question this ontology is an answer to: what different kinds of superintelligence might exist? Or: how might you differentiate between two superintelligences? Then treat these as brainstorming cues. With brainstorming you want to optimize for quantity of answers rather than quality, then do categorization afterwards. You might also try to come up with more forms of the question that the ontology might be an answer to.
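The quantity-then-categorize move can be sketched in a few lines. The answers and the "other" label here are invented for illustration; only speed, collective, and quality come from Bostrom. The key structural point is that grouping happens strictly after generation:

```python
from collections import defaultdict

# Hypothetical low-filter brainstorm output for "how might two
# superintelligences differ?", with labels assigned only afterwards.
tagged_answers = [
    ("thinks faster than humans", "speed"),
    ("many coordinated agents", "collective"),
    ("deeper insight per thought", "quality"),
    ("far larger working memory", "other"),
    ("wider sensory bandwidth", "other"),
]

# Categorization is a separate, later pass over the raw answers.
by_category = defaultdict(list)
for answer, category in tagged_answers:
    by_category[category].append(answer)
```

Answers that land in "other" are the interesting ones: each is a candidate dimension the original three-way ontology didn't carve out.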
The relation back to types of uncertainty is that you can ask, of both the questions and the answers: what kind of uncertainty do we want to reduce by answering this?