Regarding the “fun to argue about” point—maybe a positive recommendation would be “focus on hitting the target”? Or “focus on technical issues”? I don’t think there is a nice, concise phrase that captures what to do.
From Yudkowsky’s Twelve Virtues of Rationality:

“Before these eleven virtues is a virtue which is nameless.
Miyamoto Musashi wrote, in The Book of Five Rings:
“The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means. Whenever you parry, hit, spring, strike or touch the enemy’s cutting sword, you must cut the enemy in the same movement. It is essential to attain this. If you think only of hitting, springing, striking or touching the enemy, you will not be able actually to cut him. More than anything, you must be thinking of carrying your movement through to cutting him.”
Every step of your reasoning must cut through to the correct answer in the same movement. More than anything, you must think of carrying your map through to reflecting the territory.”
Musashi calls this virtue “the way of the void”; but I think that name is sufficiently counterintuitive that we should not try to adopt it.
I’m also not sure this is something informed by your models of AI per se; getting nerd-sniped is a common issue for intellectual communities generally, and being able to actually do things that contribute to your goals is an extremely important skill.
“Focus on technical issues” is also a good commandment. See my reply to Elizabeth elsewhere on this page (link) for other thoughts I have on positive recommendations.
Regarding avoiding being nerd-sniped: I didn’t actually say to avoid being nerd-sniped. I think learning to enjoy technical problems for their own sake can be a great thing for doing valuable research. What I specifically meant was to avoid getting pulled into, e.g., gossip and other things that steal your attention because they’re social. In the same way that forums about podcasts can become forums that discuss who gets to be president of the podcast forum, it’s important to fight against the forces that would cause an AI x-risk community to be largely about the people in charge of that community, rather than about understanding AI, alignment, and making intellectual progress.