Here’s a very partial list of blog post ideas from my drafts/brainstorms folder. Outside view, though, if I took the time to try to turn these into blog posts, I’d end up changing my mind about more than half of the content in the process of writing it up (and then would eventually end up with blog posts with somewhat different theses).
I’m including brief descriptions with the awareness that my descriptions may not parse at this level of brevity, in the hopes that they’re at least interesting teasers.
Contra-Hodgell
The Litany of Hodgell says “That which can be destroyed by the truth should be.” Its contrapositive therefore says: “That which can destroy [that which should not be destroyed] must not be the full truth.” It is interesting and sometimes useful to attempt to use Contra-Hodgell as a practical heuristic: “if adopting belief X will meaningfully impair my ability to achieve good things, there must be some extra false belief or assumption somewhere in the system, since true beliefs and accurate maps should just help.” (E.g., if “there is no Judeo-Christian God” in practice impairs my ability to have good and compassionate friendships, perhaps there is some false belief somewhere in the system that is messing with that.)
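For concreteness, the contrapositive move here is just standard propositional logic. As a sketch (D and S are my own shorthand, and flatten some of the nuance in “full truth”), write D(x) for “x can be destroyed by the truth” and S(x) for “x should be destroyed”:

\[
\big( D(x) \rightarrow S(x) \big) \;\Longleftrightarrow\; \big( \lnot S(x) \rightarrow \lnot D(x) \big)
\]

The left-hand side is Hodgell’s litany; the right-hand side is Contra-Hodgell: whatever should not be destroyed cannot be destroyed by the (full) truth.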
The 50/50 rule
The 50/50 rule is a proposed heuristic claiming that about half of all progress on difficult projects will come from already-known-to-be-project-relevant subtasks—for example, if Archimedes wishes to determine whether the king’s crown is unmixed gold, he will get about half his progress from diligently thinking about this question (plus subtopics that seem obviously and explicitly relevant to it). The other half of progress on difficult projects (according to this heuristic) will come from taking an interest in the rest of the world, including parts not yet known to be related to the problem at hand—in the Archimedes example, from Archimedes taking an interest in what happens to his bathwater.
Relatedly, the 50/50 rule suggests that if you would like to move difficult projects forward over long periods of time, it is often useful to spend about half of your high-energy hours on “diligently working on subtasks known-to-be-related to your project” and the other half on taking an interest in the world.
Make New Models, but Keep The Old
“… one is silver and the other’s gold.”
A retelling of: it all adds up to normality.
On Courage and Believing In
Beliefs are for predicting what’s true. “Believing in”, OTOH, is for creating a local normal that others can accurately predict. For example: “In America, we believe in driving on the right hand side of the road”—thus, when you go outside and look to predict which way people will be driving, you can simply predict (believe) that they’ll be driving on the right hand side.
Analogously, if I decide I “believe in” [honesty, or standing up for my friends, or other such things], I create an internal context in which various models within me can predict that my future actions will involve [honesty, or standing up for my friends, or similar].
It’s important and good to do this sometimes, rather than having one’s life be an accidental mess with nobody home choosing. It’s also closely related to courage.
Ethics for code colonies
If you want to keep caring about people, it makes a lot of sense to e.g. take the time to put your shopping cart back where it goes, or at minimum not to make up excuses about how your future impact on the world makes you too important to do that.
In general, when you take an action, you summon up a black-box code modification that takes that action (and changes an unknown number of other things). Life as a “code colony” is tricky that way.
Ethics is the branch of practical engineering devoted to how to accomplish things with large sets of people over long periods of time—or even with one person over a long period of time in a confusing or unknown environment. It’s the art of interpersonal and intrapersonal coordination. (I mean, sometimes people say “ethics” means “following this set of rules here”. But people also say “math” means “following this algorithm whenever you have to divide fractions” or whatever. And the underneath-thing with ethics is (among other things, maybe) interpersonal and intrapersonal coordination, kinda like how there’s an underneath-thing with math that is where those rules come from.)
The need to coordinate in this way holds just as much for consequentialists or anyone else.
It’s kinda terrifying to be trying to do this without a culture. Or to be not trying to do this (still without a culture).
The explicit and the tacit (elaborated a bit in a comment in this AMA; but there’s room for more).
Cloaks, Questing, and Cover Stories
It’s way easier to do novel hypothesis-generation if you can do it within a “cloak”, without making any sort of claim yet about what other people ought to believe. (Teaching this has been quite useful on a practical level for many at AIRCS, MSFP, and instructor trainings—seems worth seeing if it can be useful via text, though that’s harder.)
Me-liefs, We-liefs, and Units of Exchange
Related to “cloaks and cover stories”—we have different pools of resources that are subject to different implicit contracts and commitments. Not all Bayesian evidence is judicial or scientific evidence, etc. A lot of social coordination works by agreeing to only use certain pools of resources in agreement with certain standards of evidence / procedure / deference (e.g., when a person does shopping for their workplace they follow their workplace’s “which items to buy” procedures; when a physicist speaks to laypeople in their official capacity as a physicist, they follow certain procedures so as to avoid misrepresenting the community of physicists).
People often manage this coordination by changing their beliefs (“yes, I agree that drunk driving is dangerous—therefore you can trust me not to drink and drive”). However, personally I like the rule “beliefs are for true things—social transactions can make requests of my behaviors but not of my beliefs.” And I’ve got a bunch of gimmicks for navigating the “be robustly and accurately seen as prosocial” goal without modifying one’s beliefs (“In my driving, I value cooperating with the laws and customs so as to be predictable and trusted and trustworthy in that way; and drunk driving is very strongly against our customs—so you can trust me not to drink and drive.”)
How the Tao unravels
A book review of part of C.S. Lewis’s book “The Abolition of Man.” Elaborates Lewis’s argument that in postmodern times, people grab hold of part of humane values and assert it in contradiction with other parts of humane values; others then assert back the parts they are holding that the first party is missing; and things fragment further and further. Compares Lewis’s proposed mechanism with how cultural divides have actually been going in the rationality and EA communities over the last ten years.
I have a strong heuristic that I should slow down and throw a major warning flag if I am doing (or recommending that someone else do) something I believe would be unethical if done by someone not aiming to contribute to a super high impact project. I (weakly) believe more people should use this heuristic.