Does anyone who knew SquirrelInHell know the subskills in the skill tree they never got around to writing?
EDIT: To clarify, are there any known skills which are equivalent to the Red subskills in BWT’s skill tree? I am very impressed with the exposition on BWT, and would guess the remaining skills were just as high-value, perhaps even more so if there’s some synergy between them. If you think you know them, please speak up so we can get the complete BWT skillset.
I didn’t know them and can only speak to how I did the tuning-ontology thing. For about two weeks, I noted any time I was chunking reasoning using concepts. Many of them were familiar LW concepts, and lots of others came from philosophy, econ, law, and common-sense sayings, plus some of my own that I did or didn’t have names for. This took a bit of practice but wasn’t that hard to train a little ‘noticer’ for. After a while, the pace of new concepts being added to the list slowed down a lot; that was when I had around 250 concepts.

I then played around with the ontology of this list, chunking it different ways (temporal, provenance, natural-seeming clusters of related concepts, domain of usefulness, etc.). After doing this for a bit, it felt like I got some compressions I didn’t have before, and overall my thinking felt cleaner. Separately, I also spent some time explicitly trying to compress concepts into handles as pithy as possible, using visual metaphors and other creativity techniques to help. This also felt like it cleaned things up.

Compression helps with memory because chunking is how we use working memory for anything more complicated than atomic bits of info. Augmenting memory also relied on tracking very closely whether a given representation (notes, drawings, etc.) was actually making it easier to think, or was just hitting some other easily Goodharted metric, like making me feel more organized.
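If it helps to see the regrouping step concretely, here is a minimal sketch of what I mean by re-chunking one list along different facets (the facet names and example concepts here are made up for illustration; a plain text file works just as well):

```python
from collections import defaultdict

# Each noticed concept is recorded with a few facets. The facet names
# (provenance, domain, cluster) mirror the groupings mentioned above;
# the entries themselves are made-up examples.
concepts = [
    {"name": "goodharting", "provenance": "LW", "domain": "measurement", "cluster": "incentives"},
    {"name": "comparative advantage", "provenance": "econ", "domain": "planning", "cluster": "trade-offs"},
    {"name": "burden of proof", "provenance": "law", "domain": "argument", "cluster": "epistemics"},
    {"name": "measure twice, cut once", "provenance": "common sense", "domain": "planning", "cluster": "error-avoidance"},
]

def group_by(facet):
    """Re-chunk the same concept list along a different facet."""
    groups = defaultdict(list)
    for concept in concepts:
        groups[concept[facet]].append(concept["name"])
    return dict(groups)

# The exercise is just slicing the one list several different ways:
for facet in ("provenance", "domain", "cluster"):
    print(facet, "->", group_by(facet))
```

The point isn’t the tooling; it’s that the same ~250 concepts get viewed under several different groupings, and the alternate views are where the new compressions showed up.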
With regard to ‘tracking reality with beliefs’, the most important thing I ever noticed, afaict, is whether or not my beliefs (1) have fewer degrees of freedom than reality, and thus have any explanatory power at all and avoid overfitting, and (2) vary with reality in a way that is oriented towards causal models/intervention points that can easily be tested (vs. abstraction towers).
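To make point 1 concrete, here’s a toy numerical sketch (purely illustrative, with made-up data): a model with as many free parameters as there are observations can ‘fit’ anything, so its perfect fit is evidence of nothing, whereas a constrained model can fail, which is exactly what makes its successes count.

```python
import numpy as np

# Five hypothetical "observations" that are pure noise.
rng = np.random.default_rng(0)
x = np.arange(5.0)
y = rng.normal(size=5)

# A degree-4 polynomial has 5 free parameters, one per data point,
# so it reproduces the noise exactly. The perfect fit therefore
# carries zero information about whether the model explains anything.
flexible = np.polyfit(x, y, deg=4)
print(np.allclose(np.polyval(flexible, x), y))  # True: fits pure noise perfectly

# A one-parameter model (a constant) can actually miss, so its
# residuals tell you something real about how well it tracks the data.
constrained = np.polyfit(x, y, deg=0)
print(np.polyval(constrained, x) - y)  # nonzero residuals: falsifiable
```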
This seems like a potentially quite helpful concept to me.
I’d be interested in more details of how you go about checking for degrees of freedom.
I think when I do this sort of sanity-checking for myself, things I sometimes do include “wait, why do I believe this in the first place?” and “consider the world where the opposite is true, how would I know?” but those seem like different mental motions.
The easiest way is a fictional dialogue between a pro-position and an anti-position person. The anti person brings counterevidence and then gets to see how the pro position responds. If the response remaps the moving parts of the model in a different way, that indicates extra degrees of freedom. Then you can have an easier time noticing when you are making this same move yourself, i.e. backpedaling and trying to ‘save’ a position when someone pushes back on it.
I think that list would be very helpful for me.
Can you share a representative sample of your list? Or send the whole thing, if you have it written down.
It partially exists here, but with very little explanation: https://conceptspace.fandom.com/wiki/List_of_Lists_of_Concepts
This is neat.
Did you write all that, or who did?
EDIT: This taxonomy seems especially nice. Basically, each point there would need examples and exercises, and then that would be a pretty cool problem-solving-toolkit training program.
Thanks! I wrote it, and found the process of recording and organizing my thoughts helpful.
How did you figure these things out if they were never published on Be Well Tuned?
I didn’t; I’m naming some similar things based on the writing of theirs that I went through.
So you came up with it yourself?
Yes.