I’m not really familiar with DA beyond the two posts and knowing Vaniver. That said, here are the thoughts that came to mind as I read:
0. This just seems like a standard coordination problem that I would expect most rationalists to be familiar with. Is there some reason you needed an entirely new analogy? I would have predicted that, out of ~12 reasonably dedicated rationalists, at most one would be missing this idea as a Thing That Exists In One’s Mind.
1. Seems like a good suggestion that I would not have thought of ahead of time, but one that is obvious in retrospect.
2. I guess I didn’t realize how much you were supposed to still be doing at CFAR while this was going on. If I had, I probably would have been significantly more pessimistic. It certainly sounds like the most “obvious” thing out of this list, a priori, even more than the things in 5.
6. Have you read Superforecasting? Something the book goes into, that SSC’s review does not, is a bit of military history and strategy, pioneered by a German named Moltke. The basic idea is that generals cannot anticipate everything that can happen on the ground, and so they should direct the objectives of lower officers rather than the means of attaining them. Perhaps, if you are going to form an “army,” that is just as valid a model to consider as infantry grunts? My guess would be that rationalists are naturally going to resist the sort of hyper-hive-assimilation of a low-level unit (having many Red Knights, in your terms). Under this model, setting goals, agreeing on serious consequences for not meeting them, encouraging working together on them, and providing communal resources for working on them might be more effective.
(Allegedly these ideas largely explained the German Army’s disproportionate effectiveness in WW2, and Moltke’s ideas have been adopted by pretty much every modern military.)
I believe this is also the model described in Ender’s Game, which leads me to my next thought—is 12 too many? Or, worded differently: is it likely that the same group size is optimal for all the things you want to accomplish? Ender broke his army into many more pieces than usual, all the way down into groups of 4 or 5, to get each group to be very tight. Maybe you’ve already got this part covered; I only mention it because I didn’t see any mention of doing this systematically, only the EE thing (which you said was very successful).
16. I really like your phrasing at the end of this, and will probably look for an opportunity to use it at some point.
Edit: I totally forgot; I think 7) is the one that is basically in line with what I actually did predict at the start.
0. This just seems like a standard coordination problem that I would expect most rationalists to be familiar with. Is there some reason you needed an entirely new analogy? I would have predicted that, out of ~12 reasonably dedicated rationalists, at most one would be missing this idea as a Thing That Exists In One’s Mind.
My impression is that while I might expect a median economist to have heard of a stag hunt, I don’t expect that of a median rationalist (where I do expect a median rationalist to have heard of the prisoner’s dilemma). Talking about choosing cooperate vs. defect triggers a different sort of social reasoning / mental movements than talking about choosing stag vs. rabbit.
(In particular, I remember several discussions about house culture early on, where a norm of “always cooperate” was proposed and got some pushback, when “choose stag” would have likely made more sense / gone over better.)
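To make the contrast concrete, here is a minimal sketch; the payoff numbers are illustrative placeholders of mine, not anything from the post. In the prisoner’s dilemma, Defect is a dominant strategy and the only equilibrium, whereas the stag hunt has two equilibria, so the live question is which equilibrium the group expects, not whether to exploit a cooperator.

```python
# Pure-strategy Nash equilibria of a 2x2 game; each cell is (row payoff, col payoff).
# Payoff numbers below are illustrative placeholders, not from the post.

def nash_equilibria(game, actions):
    eqs = []
    n = len(actions)
    for i in range(n):
        for j in range(n):
            row_pay, col_pay = game[i][j]
            # Row player can't do better by switching rows, holding the column fixed;
            # column player can't do better by switching columns, holding the row fixed.
            row_best = all(game[k][j][0] <= row_pay for k in range(n))
            col_best = all(game[i][k][1] <= col_pay for k in range(n))
            if row_best and col_best:
                eqs.append((actions[i], actions[j]))
    return eqs

# Prisoner's dilemma: Defect strictly dominates Cooperate.
pd = [[(3, 3), (0, 5)],
      [(5, 0), (1, 1)]]
print(nash_equilibria(pd, ["Cooperate", "Defect"]))
# -> [('Defect', 'Defect')]

# Stag hunt: two equilibria; the question is which one everyone expects.
stag_hunt = [[(4, 4), (0, 2)],
             [(2, 0), (2, 2)]]
print(nash_equilibria(stag_hunt, ["Stag", "Rabbit"]))
# -> [('Stag', 'Stag'), ('Rabbit', 'Rabbit')]
```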
Something the book goes into, that SSC’s review does not, is a bit of military history and strategy, pioneered by a German named Moltke.
I don’t think Auftragstaktik would’ve helped with the specific problems underlying 6. I think that was something more like—a major benefit of having a commander is to specialize cognitive labor and lower coordination costs (since everyone just needs to know the plan, not feel like they have justified the plan to their satisfaction), and instead we had more standard discussion and consensus forming that led to us not specializing cognitive labor or lowering coordination costs, and not having practiced following orders for when it was important to follow orders. (Mission-type tactics still relies on people following orders!)
As I pointed out in my recent post, in econ the prisoner’s dilemma is not a coordination problem, and it seems useful to conceptually separate those two decision matrices.
Sure; did you read me as claiming that it was?
I also note that the Stag Hunt is importantly different from the Coordination Game in that only one of the good outcomes requires coordination—if you choose Rabbit, you get the lower payoff regardless of what I do.
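A minimal sketch of that difference, with illustrative payoffs of my own choosing rather than anything from the thread: call an action “coordination-requiring” if its payoff depends on what the other player picks. In the stag hunt only Stag qualifies; in the pure coordination game both actions do.

```python
# Illustrative payoffs, (my action, other's action) -> my payoff; not from the thread.

# Stag hunt: Rabbit pays 2 no matter what the other player does,
# so only the (Stag, Stag) outcome actually requires coordination.
stag_hunt = {("Stag", "Stag"): 4, ("Stag", "Rabbit"): 0,
             ("Rabbit", "Stag"): 2, ("Rabbit", "Rabbit"): 2}

# Pure coordination game: any mismatch pays 0, so *both* good outcomes
# require the players to match.
coordination = {("A", "A"): 2, ("A", "B"): 0,
                ("B", "A"): 0, ("B", "B"): 2}

def requires_coordination(payoffs, my_action, other_actions):
    # An action requires coordination if its payoff varies with the other's choice.
    return len({payoffs[(my_action, o)] for o in other_actions}) > 1

print(requires_coordination(stag_hunt, "Stag", ["Stag", "Rabbit"]))    # True
print(requires_coordination(stag_hunt, "Rabbit", ["Stag", "Rabbit"]))  # False
print(requires_coordination(coordination, "A", ["A", "B"]))            # True
print(requires_coordination(coordination, "B", ["A", "B"]))            # True
```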