“You can not stop me, I spend thirty thousand men a month.” -Napoleon
romeostevensit
Good timing.
Jesus: “I just got done trying to fix this!”
Less jokingly: scapegoating, accountability sinks, liability laundering, declining trust, and kakonomics form an interesting constellation that I feel is underexplored for understanding human behavior when people are part of large systems.
Anglo armies have been extremely unusual, historically speaking, for their low rates of atrocity.
(I don’t think this is super relevant for AI, but I think this is where intuitions about the superiority of the West bottom out.)
Training wheels have been replaced with balance bikes for this reason.
I think the major impacts that matter are on war, pandemic risk, and x-risk. I rarely see anyone try to figure those out; perhaps the sign is too uncertain due to complexity.
Type errors:
Map-territory confusion (labels ↔ facts)
Is-ought confusion (fact ↔ value)
Means-ends confusion (value ↔ strategy)
Implementation-classification confusion (strategy ↔ label) eg “if you classify this as an emergency that must mean you support taking immediate action”
Semantic-normative confusion (label ↔ value) eg “if you classify this as art you must think it is valuable”
Empirical-procedural confusion (fact ↔ strategy) eg “recidivism rates are highest among those without stable employment, therefore job training programs are the most important intervention”
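One way to see the structure of this list (a sketch of my own, not something the original claims): with four underlying types (fact, value, strategy, label), the six confusions above are exactly the six possible unordered pairings.

```python
# A minimal sketch (my own illustration): the six confusions above are exactly
# the six unordered pairs of the four underlying types.
from itertools import combinations

TYPES = ["fact", "value", "strategy", "label"]

CONFUSIONS = {
    frozenset({"label", "fact"}): "map-territory",
    frozenset({"fact", "value"}): "is-ought",
    frozenset({"value", "strategy"}): "means-ends",
    frozenset({"strategy", "label"}): "implementation-classification",
    frozenset({"label", "value"}): "semantic-normative",
    frozenset({"fact", "strategy"}): "empirical-procedural",
}

for a, b in combinations(TYPES, 2):
    print(f"{CONFUSIONS[frozenset({a, b})]:>30}: {a} <-> {b}")
```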
It’s about training the same muscle groups with lower joint injury risk, eg people do deadlifts with 2x+ bodyweight but RDLs are effective at bodyweight even for strong people.
Lately I’ve been doing one-legged leg press for similar reasons, though it’s less time-effective.
Prior: physical health and social success
Dating studies causing updates away from that prior: none found
It used to be weird to me how much ink was spilled on twisting the prior into knots, but I eventually realized it was coming from people who don’t like it, for the obvious reason.
What is a useful prediction that eliminativism makes?
The school I found that seemed most serious (and whose stuff also worked for me) held the position that these things basically don’t work for some people unless or until they have certain spontaneous experiences. No one knows what causes them. Some people report that they had the experiences on psychedelics, but no one knows if that’s really causal or whether their propensity to take psychedelics was also caused by this upstream thing. I don’t think there’s much point in trying to force it; I don’t think that works.
Found this interesting and useful. The big update for me is that ‘I cut, you choose’ is basically the property that most (all?) good self-therapy modalities use, afaict, in that the part or part-coalition running the therapy procedure can offer but not force things, since its frames are subtly biasing the process.
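For reference, a minimal sketch (my own illustration, not from the comment) of the fair-division protocol the name comes from: the cutter can only offer a split, and the chooser decides, so the cutter’s valuation can’t be forced on the other party.

```python
# A minimal sketch (my own illustration) of 'I cut, you choose'. The cutter can
# only *offer* a split; the chooser picks whichever half they prefer, so the
# cutter's own framing/valuation can't be forced on the other party.
from itertools import combinations

def cut_and_choose(cutter_values, chooser_values, items):
    """Cutter proposes a two-way split; chooser takes the half they prefer."""
    def total(values, half):
        return sum(values[i] for i in half)

    # The cutter's best move is the most balanced split under *their own*
    # values, since they will receive whichever half the chooser leaves behind.
    a, b = min(
        ((set(half), set(items) - set(half))
         for r in range(len(items) + 1)
         for half in combinations(items, r)),
        key=lambda ab: abs(total(cutter_values, ab[0]) - total(cutter_values, ab[1])),
    )
    # The chooser simply takes whichever half they value more.
    chooser_half = a if total(chooser_values, a) >= total(chooser_values, b) else b
    cutter_half = b if chooser_half is a else a
    return cutter_half, chooser_half

cutter_half, chooser_half = cut_and_choose(
    cutter_values={"garden": 5, "kitchen": 3, "study": 2},
    chooser_values={"garden": 1, "kitchen": 4, "study": 4},
    items=["garden", "kitchen", "study"],
)
print("cutter gets:", cutter_half, "| chooser gets:", chooser_half)
```

The analog here is that the part running the procedure sits in the cutter’s seat: it can frame offers, but the rest of the system does the choosing.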
Thanks for the link. I mean that predictions are outputs of a process that includes a representation, so part of what’s getting passed back and forth in the diagram is better- and worse-fit representations. The degrees-of-freedom point is that we choose very flexible representations, whittle them down with the actual data available, then get surprised that that representation yields other good predictions. But we should expect this if Nature shares any modular structure with our perception at all, which it would if there were both structural reasons (literally the same substrate) and evolutionary pressure for representations with good computational properties, i.e. simple isomorphisms and compressions.
The two concepts that I thought were missing from Eliezer’s Technical Explanation of Technical Explanation, and that would have simplified some of the explanation, were compression and degrees of freedom. Degrees of freedom seems very relevant here in terms of how we map between different representations. Why are representations so important for humans? Because they have different computational properties/traversal costs, while humans are very computationally limited.
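As a concrete illustration of the degrees-of-freedom point (my own sketch, using the standard curve-fitting sense of the term, not anything from the original exchange): a very flexible representation fits the available data almost perfectly, but only a representation whittled down to fewer free parameters keeps predicting well on points it never saw.

```python
# A minimal sketch (my own illustration): "degrees of freedom" as free
# parameters in a fitted representation. Both models match the training points;
# the question is which one still predicts held-out points, i.e. yields
# "other good predictions".
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=x.size)

idx = rng.permutation(x.size)
train, test = idx[:20], idx[20:]

for degree in (3, 12):  # few vs. many degrees of freedom
    coeffs = np.polyfit(x[train], y[train], degree)
    train_mse = np.mean((np.polyval(coeffs, x[train]) - y[train]) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x[test]) - y[test]) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, held-out MSE {test_mse:.3f}")
```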
I saw memetic disenfranchisement as a central theme of both.
Two tacit points that stood out to me:
Have someone who is ambiently aware and proactively getting info to the right people, or noticing when team members will need info and setting up the scaffolding so that they can consistently get it cheaply and up to date.
The authority goes all the way up. The locally ambiently-aware person has power vested in them by higher-ups, meaning that when people drag their feet because they don’t like some of the harsher OODA loops, you have backup.
Surprisingly small amounts of money can do useful things IMO. There’s lots of talk about billions of dollars flying around, but almost all of it can’t structurally be spent on weird things and comes with strings attached that cause the researchers involved to spend significant fractions of their time optimizing to keep those purse strings open. So you have more leverage here than is perhaps obvious.
My second-order advice is to please be careful about getting eaten (memetically) and spend some time on cognitive security. The fact that ~all wealthy people don’t do that much interesting stuff with their money implies that the attractors preventing interesting action are very, very strong, and you shouldn’t just assume you’re too smart for that. Magic tricks work by violating our intuitions about how much time a person would devote to training a very weird edge-case skill or particular trick. Likewise, I think people dramatically underestimate how much their social environment will warp into one that encourages them to be sublimated into the existing wealth hierarchy (the one that seemingly doesn’t do much). Specifically, it’s easy to attribute-substitute yourself from high-impact choices to choices where the grantees make you feel high impact. But high-impact people don’t have the time, talent, or inclination to optimize how you feel.
Since almost all of a wealthy person’s impact comes mediated through the actions of others, I believe the top skill to cultivate besides cogsec is expert judgement. I’d encourage you to talk through with an LLM some of the top results from research into expert judgement. It’s a tricky problem to figure out who to defer to when you are giving out money, since everyone has an incentive to represent themselves as an expert.
I don’t know the details of Tallinn’s grant process, but as Tallinn seems to have avoided some of these problems, it might be worth taking inspiration from (SFF, S-Process mentioned elsewhere here).
Not entirely wrong
They’re entirely correct. Learning new communication techniques is about what you choose to say, not what other people do.
Red herring. Quibbling over difficult-to-detect effects is a waste of time while we’re failing to kill those who commit ten+ violent crimes and account for a substantial fraction of all such crime. I don’t buy mistake theory on this.
Waistcoat and rolled-up sleeves work in many more settings and still look amazing.
Definitely for preference cascades. For common knowledge I’d say it’s about the undermining of common-knowledge formation (eg the meme of not sharing salaries, strong pressure not to name that the emperor is naked, etc.).