I really like this. I enjoy your aesthetic and ambition.
[…]But something magical does accrue when you make the jump from 99% to 100%[…]
There’s something about this whole section that nags me. I really, really like the aesthetic… and yet… there’s something about how it’s phrased here that inspires a wish in me to argue with you about what you said.
I think what you’re trying to get at here is how, when you convert a “shades of grey” perspective into a “No, this either hits these standards or it doesn’t” kind of discrete clarity, it’s possible to switch from approximation to precision. And when you chain together steps that each have to work, you can tell what the output is much more clearly if you’re getting each step to give a binary “Yes, I’m working properly” or “Nope, not quite meeting the defined standard.”
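(To make the arithmetic I’m gesturing at concrete, with illustrative numbers of my own rather than anything you said: a chain of ten steps that each work 99% of the time only works about 90% of the time end-to-end, whereas ten steps at a genuine 100% give you 100%.)

$$0.99^{10} \approx 0.904 \qquad \text{versus} \qquad 1.00^{10} = 1$$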
And I think you’re using this to suggest that Dragon Army should be a system with discretely clear standards and with each component of the system (i.e., each person) either (a) definitely meeting those standards or (b) recognizing where they don’t and then building up to them. This makes the whole system dependable in a way you just can’t get if there are no clear, discrete standards, or if the system is lax about some component not meeting them (e.g., giving someone a pass for merely “trying”).
I think this is what you mean when you say, “[…]the ‘absolute’ part is important.” The word “absolute” is standing in for having these standards and endeavoring for literally 100% of the (finitely many, discrete) components of the system to 100% meet those standards.
Confirm/deny/improve?
All of the above was meant to point at reasons why I suspect trusting individuals responding to incentives moment-by-moment to be a weaker and less effective strategy than building an intentional community that Actually Asks Things Of Its Members.
Yep, I agree. Free markets are a terrible strategy for opposing Moloch.
It’s worth noting that the people most closely involved with this project (i.e. my closest advisors and those most likely to actually sign on as housemates) have been encouraged to spend a significant amount of time explicitly vetting me with regards to questions like “does this guy actually think things through,” “is this guy likely to be stupid or meta-stupid,” “will this guy listen/react/update/pivot in response to evidence or consensus opposition,” and “when this guy has intuitions that he can’t explain, do they tend to be validated in the end?”
I just want to add public corroboration on this point. Yes, Duncan encouraged vetting along these lines. My own answers round to “is good” in each case. I’m really just flat-out not worried about him leading Dragon Army.
And it doesn’t quite solve things to say, “well, this is an optional, consent-based process, and if you don’t like it, don’t join,” because good and moral people have to stop and wonder whether their friends and colleagues with slightly weaker epistemics and slightly less-honed allergies to evil are getting hoodwinked. In short, if someone’s building a coercive trap, it’s everyone’s problem.
I really like that you point out things like this.
Should the experiment prove successful past its first six months, and worth continuing for a full year or longer, by the end of the first year every Dragon shall have a skill set including, but not limited to[…]
I like the list, overall. I can give you a more detailed commentary in person rather than digging in here. Let me know if you’d prefer it done in public here. (Just trying not to overly tax public attention with personal impressions.)
[…]we are trying not to fall prey to GOODHART’S DEMON.
Heh. That reference made me laugh. :-) I like that as a focus, as will surprise you not at all.
Confirm. More particularly, I’m pointing at something like “being able to rely on a plot that requires ten things to go right” (e.g. a CFAR workshop).
Feel free to add any number of additional thoughts and personal impressions—I like the idea of being able to say “But I did due diligence! We argued everything right out in the open fora!”