What are the LessWrong posts that you wish you had the time to write?
Here’s a very partial list of blog post ideas from my drafts/brainstorms folder. Outside view, though, if I took the time to try to turn these into blog posts, I’d end up changing my mind about more than half of the content in the process of writing it up (and would eventually end up with blog posts with somewhat different theses).
I’m including brief descriptions with the awareness that my descriptions may not parse at this level of brevity, in the hopes that they’re at least interesting teasers.
Contra-Hodgel
The Litany of Hodgell says “That which can be destroyed by the truth should be”. Its contrapositive therefore says: “That which can destroy [that which should not be destroyed] must not be the full truth.” It is interesting and sometimes-useful to attempt to use Contra-Hodgel as a practical heuristic: “if adopting belief X will meaningfully impair my ability to achieve good things, there must be some extra false belief or assumption somewhere in the system, since true beliefs and accurate maps should just help” (e.g., if “there is no Judeo-Christian God” in practice impairs my ability to have good and compassionate friendships, perhaps there is some false belief somewhere in the system that is messing with that).
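Spelled out as a worked bit of logic (this formalization is mine, a sketch to make the contrapositive move explicit; the predicate names are invented):

```latex
% Litany of Hodgell, schematized: "That which can be destroyed by the
% truth should be."
\[
\forall x\;\bigl(\mathrm{TruthDestroys}(x)\rightarrow\mathrm{ShouldBeDestroyed}(x)\bigr)
\]
% Contrapositive (logically equivalent, no extra assumptions needed):
\[
\forall x\;\bigl(\neg\,\mathrm{ShouldBeDestroyed}(x)\rightarrow\neg\,\mathrm{TruthDestroys}(x)\bigr)
\]
% Reading: if adopting belief X in fact destroys something that should
% not be destroyed, then X (or some assumption bundled with it) is not
% the full truth.
```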
The 50⁄50 rule
The 50⁄50 rule is a proposed heuristic claiming that about half of all progress on difficult projects will come from already-known-to-be-project-relevant subtasks—for example, if Archimedes wishes to determine whether the king’s crown is unmixed gold, he will get about half his progress from diligently thinking about this question (plus subtopics that seem obviously and explicitly relevant to this question). The other half of progress on difficult projects (according to this heuristic) will come from taking an interest in the rest of the world, including parts not yet known to be related to the problem at hand—in the Archimedes example, from Archimedes taking an interest in what happens to his bathwater.
Relatedly, the 50⁄50 rule suggests that if you would like to move difficult projects forward over long periods of time, it is often useful to spend about half of your high-energy hours on “diligently working on subtasks known-to-be-related to your project”, and the other half taking an interest in the world.
Make New Models, but Keep The Old
“… one is silver and the other’s gold.”
A retelling of: it all adds up to normality.
On Courage and Believing In
Beliefs are for predicting what’s true. “Believing in”, OTOH, is for creating a local normal that others can accurately predict. For example: “In America, we believe in driving on the right-hand side of the road”—thus, when you go outside and look to predict which way people will be driving, you can simply predict (believe) that they’ll be driving on the right-hand side.
Analogously, if I decide I “believe in” [honesty, or standing up for my friends, or other such things], I create an internal context in which various models within me can predict that my future actions will involve [honesty, or standing up for my friends, or similar].
It’s important and good to do this sometimes, rather than having one’s life be an accidental mess with nobody home choosing. It’s also closely related to courage.
Ethics for code colonies
If you want to keep caring about people, it makes a lot of sense to e.g. take the time to put your shopping cart back where it goes, or at minimum not to make up excuses about how your future impact on the world makes you too important to do that.
In general, when you take an action, you summon up a black-box code-modification that takes that action (and changes an unknown number of other things). Life as a “code colony” is tricky that way.
Ethics is the branch of practical engineering devoted to how to accomplish things with large sets of people over long periods of time—or even with one person over a long period of time in a confusing or unknown environment. It’s the art of interpersonal and intrapersonal coordination. (I mean, sometimes people say “ethics” means “following this set of rules here”. But people also say “math” means “following this algorithm whenever you have to divide fractions” or whatever. And the underneath-thing with ethics is (among other things, maybe) interpersonal and intra-personal coordination, kinda like how there’s an underneath-thing with math that is where those rules come from.)
The need to coordinate in this way holds just as much for consequentialists or anyone else.
It’s kinda terrifying to be trying to do this without a culture. Or to be not trying to do this (still without a culture).
The explicit and the tacit (elaborated a bit in a comment in this AMA; but there’s room for more).
Cloaks, Questing, and Cover Stories
It’s way easier to do novel hypothesis-generation if you can do it within a “cloak”, without making any sort of claim yet about what other people ought to believe. (Teaching this has been quite useful on a practical level for many at AIRCS, MSFP, and instructor trainings—seems worth seeing if it can be useful via text, though that’s harder.)
Me-liefs, We-liefs, and Units of Exchange
Related to “cloaks and cover stories”—we have different pools of resources that are subject to different implicit contracts and commitments. Not all Bayesian evidence is judicial or scientific evidence, etc. A lot of social coordination works by agreeing to only use certain pools of resources in accordance with certain standards of evidence / procedure / deference (e.g., when a person does shopping for their workplace they follow their workplace’s “which items to buy” procedures; when a physicist speaks to laypeople in their official capacity as a physicist, they follow certain procedures so as to avoid misrepresenting the community of physicists).
People often manage this coordination by changing their beliefs (“yes, I agree that drunk driving is dangerous—therefore you can trust me not to drink and drive”). However, personally I like the rule “beliefs are for true things—social transactions can make requests of my behavior but not of my beliefs.” And I’ve got a bunch of gimmicks for navigating “be robustly and accurately seen as prosocial” without modifying one’s beliefs (“In my driving, I value cooperating with the laws and customs so as to be predictable and trusted and trustworthy in that way; and drunk driving is very strongly against our customs—so you can trust me not to drink and drive.”)
How the Tao unravels
A book review of part of C. S. Lewis’s book “The Abolition of Man.” Elaborates Lewis’s argument that in postmodern times, people grab hold of one part of humane values and assert it in contradiction with other parts of humane values; the holders of those other parts then assert back the thing that they hold and the first party is missing, and things fragment further and further. Compares Lewis’s proposed mechanism with how cultural divides have actually been going in the rationality and EA communities over the last ten years.
I have a strong heuristic that I should slow down and throw a major warning flag if I am doing (or recommending that someone else do) something I believe would be unethical if done by someone not aiming to contribute to a super high impact project. I (weakly) believe more people should use this heuristic.
Some off the top of my head.
A bunch of Double Crux posts that I keep promising but am very bad at actually finishing.
The Last Term Problem (or why saving the world is so much harder than it seems) - An abstract decision-theoretic problem that has confused me about taking actions at all for the past year.
A post on how the commonly cited “introspection is impossible” paper (Nisbett and Wilson) is misleading.
Two takes on confabulation—About how the Elephant in the Brain thesis doesn’t imply that we can’t tell what our motivations actually are, just that we aren’t usually motivated to.
A lit review on mental energy and fatigue.
A lit review on how attention works.
Most of my writing is either private strategy documents, or spur of the moment thoughts / development-nuggets that I post here.
Can you too-tersely summarize your Nisbett and Wilson argument?
Or, like… write a teaser / movie trailer for it, if you’re worried your summary would be incomplete or inoculating?
This doesn’t capture everything, but one key piece is “People often confuse a lack of motivation to introspect with a lack of ability to introspect. The fact of confabulation does not demonstrate that people are unable to articulate what’s actually happening in principle.” Very related to the other post on confabulation I note above.
Also, if I remember correctly, some of the papers in that meta-analysis just have silly setups: testing whether people can introspect on information that they couldn’t have had access to. (Possible that I misunderstood or am misremembering.)
To give a short positive account:
All introspection depends on comparison between mental states at different points in time. You can’t introspect on some causal factor that doesn’t vary.
Also, the information has to be available at the time of introspection, i.e., still in short-term memory.
But that leaves a lot more degrees of freedom than people seem to predict, and in practice I am able to notice many subtle intentions (such as when my behavior is motivated by signalling) that others want to throw out as unknowable.
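As a minimal toy sketch of that positive account (my illustration only: the mood model, numbers, and names below are invented, and nothing here is from the Nisbett and Wilson literature), comparison-based introspection can only attribute change to factors that vary between remembered states:

```python
# Toy model of comparison-based introspection (illustration only; the
# "mood model" and all names here are invented for the sketch).
from dataclasses import dataclass

def felt_mood(sleep: float, temperament: float) -> float:
    # Hypothetical generative model: mood depends on both factors.
    return 0.7 * sleep + 0.3 * temperament

@dataclass
class Snapshot:
    sleep: float        # varies day to day
    temperament: float  # constant across snapshots -- never varies
    mood: float         # the observable feeling

# Two states still available in "short-term memory":
yesterday = Snapshot(sleep=4.0, temperament=5.0, mood=felt_mood(4.0, 5.0))
today = Snapshot(sleep=8.0, temperament=5.0, mood=felt_mood(8.0, 5.0))

def introspect(a: Snapshot, b: Snapshot) -> dict:
    """Attribute the mood change to whichever factors differ between
    the two remembered states -- the only move this model allows."""
    return {
        "mood_delta": b.mood - a.mood,
        "sleep_detectable": a.sleep != b.sleep,                    # True
        "temperament_detectable": a.temperament != b.temperament,  # False
    }

print(introspect(yesterday, today))
# The constant 'temperament' factor produces zero contrast between the
# snapshots, so this style of introspection is blind to it: you can't
# introspect on a causal factor that doesn't vary, and only factors
# still in memory at introspection time can enter the comparison at all.
```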
This isn’t a direct answer to, “What are the LessWrong posts that you wish you had the time to write?” It is a response to a nearby question, though, which is probably something along the lines of, “What problems are you particularly interested in right now?” which is the question that always drives my blogging. Here’s a sampling, in no particular order.
[edit: cross-posted to Ray’s Open Problems post.]
There are things you’re subject to, and things you can take as object. For example, I used to do things like cry when an ambulance went by with its siren on, or say “ouch!” when I put a plate away and it went “clink”, yet I wasn’t aware that I was sensitive to sounds. If asked, “Are you sensitive to sounds?” I’d have said “No.” I did avoid certain sounds in local hill-climby ways, like making music playlists with lots of low strings but no trumpets, or not hanging out with people who speak loudly. But I didn’t “know” I was doing these things; I was *subject* to my sound sensitivity. I could not take it as *object*, so I couldn’t deliberately design my daily life to account for it. Now that I can take my sound sensitivity (and many related things) as object, I’m in a much more powerful position. And it *terrifies* me that I went a quarter of a century without recognizing these basic facts of my experience. It terrifies me even more when I imagine an AI researcher being subject to some similarly crucial thing about how agents work. I would very much like to know what other basic facts of my experience I remain unaware of. I would like to know how to find out what I am currently unable to take as object.
On a related note, you know how an awful lot of people in our community are autistic? It seems to me that our community is subject to this fact. (It also seems to me that many individual people in our community remain subject to most of their autistic patterns, and that this is more like the rule than the exception.) I would like to know what’s going on here, and whether some other state of affairs would be preferable, and how to instantiate that state of affairs.
Why do so many people seem to wait around for other people to teach them things, even when they seem to be trying very hard to learn? Do they think they need permission? Do they think they need authority? What are they protecting? Am I inadvertently destroying it when I try to figure things out for myself? What stops people from interrogating the world on their own terms?
I get an awful lot of use out of asking myself questions. I think I’m unusually good at doing this, and that I know a few other people with this property. I suspect that the really useful thing isn’t so much the questions, as whatever I’m doing with my mind most of the time that allows me to ask good questions. I’d like to know what other people are doing with their minds that prevents this, and whether there’s a different thing to do that’s better.
What is “quality”?
Suppose religion is symbiotic, and not just parasitic. What exactly is it doing for people? How is it doing those things? Are there specific problems it’s solving? What are the problems? How can we solve those problems without tolerating the damage religion causes?
[Some spoilers for bits of the premise of A Fire Upon The Deep and other stories in that sequence.] There’s this alien race in Vernor Vinge books called the Tines. A “person” of the Tines species looks at first like a pack of several animals. The singleton members that make up a pack use high-frequency sound, rather than chemical neurotransmitters, to think as one mind. The singleton members of a pack age, so when one of your singletons dies, you adopt a new singleton. Since singletons are all slightly different and sort of have their own personalities, part of personal health and hygiene for Tines involves managing these transitions wisely. If you do a good job — never letting several members die in quick succession, never adopting a singleton that can’t harmonize with the rest of you, taking on new singletons before the oldest ones lose the ability to communicate — then you’re effectively immortal. You just keep amassing new skills and perspectives and thought styles, without drifting too far from your original intentions. If you manage the transitions poorly, though — choosing recklessly, not understanding the patterns an old member has been contributing, participating in a war where several of your singletons may die at once — then your mind could easily become suddenly very different, or disorganized and chaotic, or outright insane, in a way you’ve lost the ability to recover from. I think about the Tines a lot when I experiment with new ways of thinking and feeling. I think much of rationality poses a similar danger to the one faced by the Tines. So I’d like to know what practices constitute personal health and hygiene for cognitive growth and development in humans.
What is original seeing? How does it work? When is it most important? When is it the wrong move? How can I become better at it? How can people who are worse at it than I am become better at it?
In another thread, Adam made a comment that I thought was fantastic. I typed to him, “That comment is fantastic!” As I did so, I noticed that I had an option about how to relate to the comment, and to Adam, when I felt a bid from somewhere in my mind to re-phrase as, “I really like that comment,” or, “I enjoyed reading your comment,” or “I’m excited and impressed by your comment.” That bid came from a place that shares a lot of values with LessWrong-style rationalists, and 20th century science, and really with liberalism in general. It values objectivity, respect, independence, autonomy, and consent, among other things. It holds map-territory distinctions and keeps its distance from the world, in an attempt to see all things clearly. But I decided to stand behind my claim that “the comment is fantastic”. I did not “own my experience”, in this case, or highlight that my values are part of me rather than part of the world. I have a feeling that something really important is lost in the careful distance we keep all the time from the world and from each other. Something about the power to act, to affect each other in ways that create small-to-mid-sized superorganisms like teams and communities, something about tending our relationship to the world so that we don’t float off in bubbles of abstraction. Whatever that important thing is, I want to understand it. And I want to protect it, and to incorporate it into my patterns of thought, without losing all I gain from cold clarity and distance.
I would like to think more clearly, especially when it seems important to do so. There are a lot of things that might affect how clearly you think, some of which are discussed in the Sequences. For example, one common pattern of muddy thought is rationalization, so one way to increase your cognitive clarity is to stop completely ignoring the existence of rationalization. I’ve lately been interested in a category of clarity-increasing thingies that might be sensibly described as “the relationship between a cognitive process and its environment”. By “environment”, I mean to include several things:
The internal mental environment: the cognitive and emotional situation in which a thought pattern finds itself. Example: When part of my mind is trying to tally up how much money I spent in the past month, and local mental processes desperately want the answer to be “very little” for some reason, my clarity of thought while tallying might not be so great. I expect that well maintained internal mental environments — ones that promote clear thinking — tend to have properties like abundance, spaciousness, and groundedness.
The internal physical environment: the physiological state of a body. For example, hydration seems to play a shockingly important role in how well I maintain my internal mental environment while I think. If I’m trying to solve a math problem and have had nothing to drink for two hours, it’s likely I’m trying to work in a state of frustration and impatience. Similar things are true of sleep and exercise.
The external physical environment: the sensory info coming in from the outside world, and the feedback patterns created by external objects and perceptual processes. When I’ve been having a conversation in one room, and then I move to another room, it often feels as though I’ve left half my thoughts behind. I think this is because I’m making extensive use of the walls and couches and such in my computations. I claim that one’s relationship to the external environment can make more or less use of the environment’s supportive potential, and that environments can be arranged in ways that promote clarity of thought (see Adam’s notes on the design of the CFAR venue, for instance).
The social environment: people, especially frequently encountered ones. The social environment is basically just part of the external physical environment, but it’s such an unusual part that I think it ought to be singled out. First of all, it has powerful effects on the internal mental environment. The phrase “politics is the mind killer” means something like “if you want to design the social environment to maximize muddiness of thought, have I got a deal for you”. Secondly, other minds have the remarkable property of containing complex cognitive processes, which are themselves situated in every level of environment. If you’ve ever confided in a close, reasonable friend who had some distance from your own internal turmoil, you know what I’m getting at here. I’ve thought a lot lately about how to build a “healthy community” in which to situate my thoughts. A good way to think about what I’m trying to do is that I want to cultivate the properties of interpersonal interaction that lead to the highest quality, best maintained internal mental environments for all involved.
I built a loft bed recently. Not from scratch, just Ikea-style. When I was about halfway through the process, I realized that I’d put one of the panels on backward. I’d made the mistake toward the beginning, so there were already many pieces screwed into that panel, and no way to flip it around without taking the whole bed apart again. At that point, I had a few thoughts in quick succession:
I really don’t want to take the whole bed apart and put it back together again.
Maybe I could unscrew the pieces connected to that panel, then carefully balance all of them while I flip the panel around? (Something would probably break if I did that.)
You know what, maybe I don’t want a dumb loft bed anyway.
It so happens that in this particular case, I sighed, took the bed apart, carefully noted where each bit was supposed to go, flipped the panel around, and put it all back together again perfectly. But I’ve certainly been in similar situations where for some reason, I let one mistake lead to more mistakes. I rushed, broke things, lost pieces, hurt other people, or gave up. I’d like to know what circumstances obtain when I get this right, and what circumstances obtain when I don’t. Where can I get patience, groundedness, clarity, gumption, and care?
What is “groundedness”?
I’ve developed a taste for reading books that I hate. I like to try on the perspective of one author after another, authors with whom I think I have really fundamental disagreements about how the world works, how one ought to think, and whether yellow is really such a bad color after all. There’s a generalized version of “reading books you hate” that I might call “perceptual dexterity”, or I might call “the ground of creativity”, which is something like having a thousand prehensile eye-stalks in your mind, and I think prehensile eye-stalks are pretty cool. But I also think it’s generally a good idea to avoid reading books you hate, because your hatred of them is often trying to protect you from “your self and worldview falling apart”, or something. I’d like to know whether my self and worldview are falling apart, or whatever. And if not, I’d like to know whether I’m doing something to prevent it that other people could learn to do, and whether they’d thereby gain access to a whole lot more perspective from which they could triangulate reality.
I have a Google Doc full of ideas. Probably I’ll never write most of these, and if I do probably much of the content will change. But here are some titles, as they currently appear in my personal notes:
Mesa-Optimization in Humans
Primitivist Priors v. Pinker Priors
Local Deontology, Global Consequentialism
Fault-Tolerant Note-Scanning
Goal Convergence as Metaethical Crucial Consideration
Embodied Error Tracking
Abnormally Pleasurable Insights
Burnout Recovery
Against Goal “Legitimacy”
Computational Properties of Slime Mold
Steelmanning the Verificationist Criterion of Meaning
Manual Tribe Switching
Manual TAP Installation
Keep Your Hobbies
I don’t think that time is my main constraint, but here are some of my blog-post-shaped ideas:
Taste propagates through a medium
Morality: do-gooding and coordination
What to make of ego depletion research
Taboo “status”
What it means to become calibrated
The NFL Combine as a case study in optimizing for a proxy
The ability to paraphrase
5 approaches to epistemics