It seems to me that if attending to the ordinary business of your life, including career and hobbies, amounts to doing nothing, there’s something very deeply wrong happening, and people would do well to attend to that problem first. On the other hand, doing nothing is preferable to doing harm, and it’s entirely possible that many people are actually causing harm, e.g. by generating misinformation, and it would be better if they just stopped, even if they can’t figure out how to do whatever they were pretending to do.
I certainly don’t think that someone donating their surplus to GiveDirectly, or living more modestly in order to share more with others, is doing a wrong thing. It’s admirable to want to share one’s wealth with those who have less.
It seems to me that if attending to the ordinary business of your life, including career and hobbies, amounts to doing nothing, there’s something very deeply wrong happening, and people would do well to attend to that problem first.
I’m tempted to answer this statement by saying that something very deeply wrong is clearly happening, e.g., there’s not nearly enough effort in the world to prevent coordination failures that could destroy most of the potential value of the universe, and attending to that problem would involve doing something besides or in addition to attending to the ordinary business of life. I feel like this is probably missing your point though. Do you want to spell out what you mean more, e.g., is there some other “something very deeply wrong happening” you have in mind, and if so what do you think people should do about it?
If people who can pay their own rent are actually doing nothing by default, that implies that our society’s credit-allocation system is deeply broken. If so, then we can’t reasonably hope to get right answers by applying simplified economic models that assume credit-allocation is approximately right, the way I see EAs doing, until we have a solid theoretical understanding of what kind of world we actually live in.
Here’s a simple example: Robin Hanson’s written a lot about how it’s not clear that health care is beneficial on the margin. This is basically unsurprising if you think there are a lot of bullshit jobs. But 80,000 Hours’s medical career advice assumes that the system basically knows what it’s doing and that health care delivers health on the margin—the only question is how much.
It seems to me that if an intellectual community isn’t resolving these kinds of fundamental confusions (and at least one side has to be deeply confused here, or at least badly misinformed), then it should expect to be very deeply confused about philanthropy. Not just in the sense of “what is the optimal strategy,” but in the sense of “what does giving away money even do.”
[I wrote the 80k medical careers page]
I don’t see there as being a ‘fundamental confusion’ here, nor even that much of a fundamental disagreement.
When I crunched the numbers on ‘how much good do doctors do’ it was meant to provide a rough handle on a plausible upper bound: even if we beg the question against critics of medicine (of which there are many), and even if we presume any observational marginal response is purely causal (and purely mediated by doctors), the numbers aren’t (in EA terms) that exciting in terms of direct impact.
In talks, I generally use the upper 95% confidence bound or central estimate of the doctor coefficient as a rough steer (it isn’t a significant predictor, and there’s reasonable probability mass on the impact being negative): although I suspect there will be generally unaccounted confounders attenuating ‘true’ effect rather than colliders masking it, these sorts of ecological studies are sufficiently insensitive to either to be no more than indications—alongside the qualitative factors—that the ‘best (naive) case’ for direct impact as a doctor isn’t promising.
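(For concreteness, a minimal sketch of the kind of calculation this involves; synthetic data and invented numbers throughout, not the actual analysis:)

```python
# Sketch only: an "ecological" regression of health outcomes on doctor
# density across countries, with synthetic data standing in for the real
# dataset. The point is where the upper 95% bound comes from, nothing more.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 60                                   # hypothetical countries
doctors = rng.uniform(0.3, 4.0, n)       # doctors per 1,000 people (invented)
wealth = rng.uniform(0.0, 1.0, n)        # crude stand-in for confounders
health = 70 + 8 * wealth + rng.normal(0, 3, n)   # true doctor effect set to 0

X = sm.add_constant(np.column_stack([doctors, wealth]))
fit = sm.OLS(health, X).fit()

coef = fit.params[1]                     # the "doctor coefficient"
lo, hi = fit.conf_int(alpha=0.05)[1]     # its 95% confidence interval
print(f"doctor coefficient: {coef:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# Quoting `hi` rather than `coef` begs the question in medicine's favour:
# it is the most optimistic direct-impact figure the data won't rule out.
```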
There’s little that turns on which side of zero our best guess falls, so long as we can be confident it is a long way down from the best candidates: on the scale of intervention effectiveness, there’s not that much absolute distance between the estimates (I suspect) Hanson or I would offer. There might not be much disagreement even in coarse qualitative terms: Hanson’s work here—I think—focuses on the US, and US health outcomes are a sufficiently pathological outlier in the world that I’m also unsure whether marginal US medical effort is beneficial; I’m not sure Hanson has staked out a view on whether he’s similarly uncertain about positive marginal impact in non-US countries, so he might agree with my view that it is (modestly) net-positive, despite its dysfunction (neither I nor what I wrote assumes the system ‘basically knows what it’s doing’ in the common-sense meaning).
If Hanson has staked out this broader view, then I do disagree with it, but I don’t think this disagreement would indicate at least one of us has to be ‘deeply confused’ (this looks like a pretty crisp disagreement to me) nor ‘badly misinformed’ (I don’t think there are key considerations one or other of us is ignorant of which explain why one of us errs towards scepticism and the other towards cautious optimism). My impressions are also less sympathetic to ‘signalling accounts’ of healthcare than his (cf.) - but again, my view isn’t ‘This is total garbage’, and I doubt he’s monomaniacally hedgehog-y about the signalling account. (Both of us have also argued for attenuating our individual impressions in deference to a wider consensus/outside view for all-things-considered judgements).
Although I think the balance of expertise leans against archly sceptical takes on medicine, I don’t foresee convincing adjudication on this point coming any time soon, nor that EA can reasonably expect to be the ones to provide this breakthrough—still less for all the potential sign-inverting crucial considerations out there. Stumbling on as best we can with our best guess seems a better approach than being paralyzed until we’re sure we’ve figured it all out.
Something that nets out to a small or no effect because large benefits and harms cancel out is very different (with different potential for impact) from something like, say, faith healing, where you can’t outperform just by killing fewer patients. A marginalist analysis that assumes that the person making the decision doesn’t know their own intentions & is just another random draw of a ball from an urn totally misses this factor.
A marginalist analysis that assumes that the person making the decision doesn’t know their own intentions & is just another random draw of a ball from an urn totally misses this factor.
Happily, this factor has not been missed by either my profile or 80k’s work here more generally. Among other things, we looked at:
Variance in impact between specialties and (intranational) location (1) (as well as variance in earnings for E2G reasons) (2, also, cf.)
Areas within medicine which look particularly promising (3)
Why ‘direct’ clinical impact (either between or within clinical specialties) probably has limited variance versus (e.g.) research (4), also
I also cover this in talks I have given on medical careers, as well as when offering advice to people contemplating a medical career or how to have a greater impact staying within medicine.
I still think trying to get a handle on the average case is a useful benchmark.
I just want to register disagreement.
that implies that our society’s credit-allocation system is deeply broken
I want to double click on “credit-allocation system.” It sounds like an important part of your model, but I don’t really know what you mean. Something like “answering the question of ‘who is responsible for the good in our world?’” Like I’m misallocating credit to the health sector, which is (maybe) not actually responsible for much good?
What does this have to do with whether people who can pay their rent are doing something or nothing by default? Is your claim that by participating in the economy, they should be helping by default (they pay their landlord, who buys goods, which pays manufacturers, etc.)? And if that isn’t having a positive impact, that must mean that society is collectively able to identify the places where value comes from?
I think I don’t get it.
helping by default (they pay their landlord, who buys goods, which pays manufacturers, etc.)
The exact opposite—getting paid should imply something. The naive Econ 101 view is that it implies producing something of value. “Production” is generally measured in terms of what people are willing to pay for.
If getting paid has little to do with helping others on net, then our society’s official unit of account isn’t tracking production (Talents), GDP is a measurement of the level of coercion in a society (There Is a War), the bullshit jobs hypothesis is true, we can’t take job descriptions at face value, and CEA’s advice to build career capital just means join a powerful gang.
This undermines enough of the core operating assumptions EAs seem to be using that the right thing to do in that case is try to build better models of what’s going on, not act based on what your own models imply is disinformation.
I’m trying to make sense of what you’re saying here, but bear with me, we have a large inferential distance.
Let’s see.
The Talents piece was interesting. I bet I’m still missing something, but I left a paraphrase as a comment over there.
I read all of “There Is a War”, but I still don’t get the claim, “GDP is a measurement of the level of coercion in a society.” I’m going to keep working at it.
I basically already thought that lots of jobs are bullshit, but I might skim or listen to David Graeber’s book to get more data.
Oh. He’s the guy that wrote Debt: the First 5000 Years! (Which makes a very similar point about money as the middle parts of this post.)
Given my current understanding, I don’t get either the claim that “CEA’s advice to build career capital just means join a powerful gang” or that “This undermines enough of the core operating assumptions EAs seem to be using.”
I do agree that the main work to be done is figuring out what is actually going on in the world and how the world actually works.
I’m going to keep reading and thinking and try to get what you’re saying.
. . .
My initial response before I followed your links, so this is at least partially obviated:
1.
The exact opposite—getting paid should imply something. The naive Econ 101 view is that it implies producing something of value. “Production” is generally measured in terms of what people are willing to pay for.
First of all... Yep, it does seem pretty weird that we maybe live in a world where most people are paid but produce no wealth. As a case in point, my understanding is that a large fraction of programmers actually add negative value, by adding bugs to code.
It certainly seems correct to me to stop and be like “There are millions of people up there in those skyscrapers, working in offices, and it seems like (maybe) a lot of them are producing literally no value. WTF?! How did we end up in a world like this! What is going on?”
My current best guess is the following: Some people are creating value, huge amounts of value in total (we live in a very rich society, by historical standards), but many (most?) people are doing useless work. But for employers, the overhead of identifying which work is creating value and which work isn’t is (apparently) more costly than the resources that would be saved by cutting the people that aren’t producing value.
It’s like Paul Graham says: in a company your work is averaged together with a bunch of other people’s and it is hard or impossible to assess each person’s contribution. This gives rise to a funny dynamic where a lot of the people are basically occupying a free-rider / parasitic niche: they produce ~no wealth, but they average their work with some people who do.
(To be clear: the issue is rarely people being hired to do something that in principle could be productive, but then slacking off. I would guess that it is much more frequently the case that an institution, because of its own [very hard to resolve] inadequacies, hires people specifically to do useless things.)
But an important point here is that on average, people are creating value, even if most of the human working hours are useless. The basic formula of dollars = value created still holds; it’s just really noisy.
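A toy version of that picture (all numbers invented), just to show the claim is coherent:

```python
# Toy model: most workers produce nothing, a few produce a lot, and since
# employers can't tell who is who, everyone is paid roughly the average
# product. Dollars track value on average; per person it's mostly noise.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
productive = rng.random(n) < 0.2                      # say 20% create value
value = np.where(productive, rng.lognormal(3.5, 1.0, n), 0.0)
wage = np.full(n, value.mean())                       # flat wage = avg product

print(f"workers producing nothing: {(~productive).mean():.0%}")
print(f"average value per worker:  {value.mean():.1f}")
print(f"average wage per worker:   {wage.mean():.1f}")  # equal by construction
# Total pay equals total value created, yet most paid hours are useless.
```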
2.
It seems to me that if attending to the ordinary business of your life, including career and hobbies, amounts to doing nothing, there’s something very deeply wrong happening, and people would do well to attend to that problem first.
Well, importantly, as an EA, I don’t mean “doing nothing” in the sense of providing value to no one at all. I mean “doing nothing” as “doing nothing to make a dent in the big problems.”
And this state of affairs isn’t that surprising. My default model has it that I can engage in positive-sum trades, which do provide value to others in my economic sphere, but that by default none of that surplus gets directed at people outside of that economic sphere. The most salient example might be animals, who have no ability to advocate for themselves, or trade with us. They are outside the virtuous circle of our economy, and don’t benefit from it unless people like me take specific action to save them.
The same basic argument goes for people in third world countries and far-future entities.
So, yeah, this is a problem. And your average EA thinks we should attend to it. But according to the narrative, EA is already on it.
I read all of “There Is a War”, but I still don’t get the claim, “GDP is a measurement of the level of coercion in a society.” I’m going to keep working at it.
I think it’s analytically pretty simple. GDP involves adding up all the “output” into a single metric. Output is measured based on others’ willingness to pay. The more payments are motivated by violence rather than the production of something everyone is glad to have more of, the more GDP measures expropriation rather than production. There Is A War is mostly about working out the details & how this relates to macroeconomic ideas of “stimulus,” “aggregate demand,” etc, but if that analytic argument doesn’t make sense to you, then that’s the point we should be working out.
Ok. This makes sense to me. GDP measures a mix of trades that occur due to simple mutual benefit and “trades” that occur because of extortion or manipulation.
If you look at the combined metric, and interpret it to be a measure of only the first kind of trade, you’re likely overstating how much value is being created, perhaps by a huge margin, depending on what percentage of trades are based on violence.
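Restating that with toy numbers (mine, purely illustrative):

```python
# Measured GDP adds voluntary trades and coerced "trades" into one figure.
voluntary = 60   # payments for things people are simply glad to have more of
coerced = 40     # payments extracted via threat, extortion, or manipulation

gdp = voluntary + coerced
print(f"measured GDP:                  {gdp}")
print(f"actual production (this toy):  {voluntary}")
print(f"overstatement if read naively: {gdp / voluntary:.1f}x")
# The larger the coerced share, the more GDP measures expropriation rather
# than production, and the more a naive reading overstates value creation.
```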
But I’m not really clear on why you’re talking about GDP at all. It seems like you’re taking the claim that “GDP is a bad metric for value creation”, and concluding that “interventions like GiveDirectly are misguided.”
Rereading this thread, I come to
If people who can pay their own rent are actually doing nothing by default, that implies that our society’s credit-allocation system is deeply broken. If so, then we can’t reasonably hope to get right answers by applying simplified economic models that assume credit-allocation is approximately right, the way I see EAs doing, until we have a solid theoretical understanding of what kind of world we actually live in.
Is the argument something like...
1. GDP is irreparably corrupt as a useful measure. Folks often take it as a measure of how much value is created, but it is actually just as much a measure of how much violence is being done.
2. This is an example of a more general problem: All of our metrics for tracking value are similarly broken. Our methods of allocating credit don’t work at all.
3. Given that we don’t have robust methods for allocating credit, we can’t trust that anything good happens when we give money to the actual organization “GiveDirectly”. For all we know that money gets squandered on activities that superficially look like helping, but are actually useless or harmful. (This is a reasonable supposition, because this is what most organizations do on priors.)
4. Given that we can’t trust that giving money to GiveDirectly does any good, our only hope for doing good is to actually make sense of what is happening in the world so that we can construct credit allocation systems on which we can actually rely.
On a scale of 0 to 10, how close was that?
This is something like a 9 - it gets the overall structure of the argument right, with some important caveats:
I’d make a slightly weaker claim for 2 - that credit-allocation methods have to be presumed broken until established otherwise, and no adequate audit has entered common knowledge.
An important part of the reason for 3 is that, the larger the share of “knowledge work” that we think is mostly about creating disinformation, the more one should distrust any official representations one hasn’t personally checked, when there’s any profit or social incentive to make up such stories. Based on my sense of the character of the people I met while working at GiveWell, and the kind of scrutiny they said they applied to charities, I’d personally be surprised if GiveDirectly didn’t actually exist, or simply pocketed the money. But it’s not at all obvious to me that people without my privileged knowledge should be sure of that.
Ok. Great.
credit-allocation methods have to be presumed broken until established otherwise, and no adequate audit has entered common knowledge.
That does not seem obvious to me. It certainly does not seem to follow from merely the fact that GDP is not a good measure of national welfare. (In large part, because my impression is that economists say all the time that GDP is not a good measure of national welfare.)
Presumably you believe that point 2 holds, not just because of the GDP example, but because you’ve seen many, many examples (like health care, which you mention above). Or maybe because you have an analytical argument that the sort of thing that happens with GDP has to generalize to other credit allocation systems?
Is that right? Can you say more about why you expect this to be a general problem?
. . .
I have a much higher credence than you do that GiveDirectly exists and is doing basically what it says it is doing.
If I do a stack trace on why I think that...
I have a background expectation that the most blatant kinds of fraudulence will be caught. I live in a society that has laws, including laws about what sorts of things non-profits are allowed to do, and not do, with money. If they were lying about ever having given any money to anyone in Africa, I’m confident that someone would notice that and blow the whistle, and the perpetrators would be in jail. (A better hidden, but consequently less extreme, incidence of embezzlement is much more plausible, though I would still expect it to be caught eventually.)
They’re sending some somewhat costly-to-fake signals of actually trying to help. For instance, I heard on a blog once that they were doing an RCT to see if cash transfers actually improve people’s lives. (I think. I may just be wrong about the simple facts, here.) Most charities don’t do anything like that, and most of the world doesn’t fault them for it. Plus it sounds like a hassle. The only reasons why you would organize an RCT are 1) you are actually trying to figure out if your intervention works, 2) you have a very niche marketing strategy that involves sending costly signals of epistemic virtue, to hoodwink people like me into thinking “Yay GiveDirectly”, or 3) some combination of 1 and 2, whereby you’re actually interested in the answer, and also part of your motivation is knowing how much it will impress the EAs.
I find it implausible that they are doing strictly 2, because I don’t think the idea would occur to anyone who wasn’t genuinely curious. 3 seems likely.
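To put rough numbers on that update (every probability here is invented, and I’m collapsing options 1 and 3 into “genuine”):

```python
# Toy Bayes update on the costly signal "they ran an RCT on themselves".
prior_genuine = 0.5          # prior that the org is genuinely trying to help
p_rct_if_genuine = 0.30      # genuinely curious orgs sometimes run RCTs
p_rct_if_scam = 0.02         # pure hoodwinking rarely thinks to run one

posterior = (p_rct_if_genuine * prior_genuine) / (
    p_rct_if_genuine * prior_genuine + p_rct_if_scam * (1 - prior_genuine)
)
print(f"P(genuine | ran an RCT) = {posterior:.2f}")  # ~0.94 on these numbers
# The update is big precisely because option 2 (marketing-only) is rare:
# the signal is costly, and mostly occurs to the genuinely curious.
```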
Trust-chains: They are endorsed by people who are respected by people whose epistemics I trust. GiveWell endorsed them. I personally have not read GiveWell’s evaluations in much depth, but I know that many people around me, including, for instance, Carl Shulman, have engaged with them extensively. Not only does everyone around me have oodles of respect for Carl, but I can personally verify (with a small sample size of interactions) that his thinking is extremely careful and rigorous. If Carl thought that GiveWell’s research was generally low quality, I would expect this to be a known, oft-mentioned thing (and I would expect his picture not to be on the OpenPhil website). Carl is, of course, only an example. There are other people around whose epistemics I trust, who find GiveWell’s research to be good enough to be worth talking about. (Or at least old-school GiveWell. I do have a sense that the magic has faded in recent years, as usually happens to institutions.)
I happen to know some of these people personally, but I don’t think that’s a crux. Several years ago, I was a smart but inexperienced college student. I came across LessWrong, and correctly identified that the people of that community had better epistemology than me (plus I was impressed with this Eliezer guy who was apparently making progress on these philosophical problems, in sort of the mode that I had tried to make progress in, but he was way ahead of me, and way more skilled.) On LessWrong, they’re talking a lot about GiveWell, and GiveWell-recommended charities. I think it’s pretty reasonable to assume that the analysis going into choosing those charities is high quality. Maybe not perfect, but much better than I could expect to do myself (as a college student).
It seems to me that I’m pretty correct in thinking that GiveDirectly does what it says it does.
You disagree though? Can you point at what I’m getting wrong?
My current understanding of your view: You think that institutional dysfunction and optimized misinformation is so common that the evidence I note above is not sufficient to overwhelm the prior, and I should assume that GiveDirectly is doing approximately nothing of value (and maybe causing harm), until I get much stronger evidence otherwise. (And that evidence should be of the form that I can check with my own eyes and my own models?)
I have a background expectation that the most blatant kinds of fraudulence will be caught.
Consider how long Theranos operated, its prestigious board of directors, and the fact that it managed to make a major sale to Walgreens before blowing up. Consider how prominent Three Cups of Tea was (promoted by a New York Times columnist), for how long, before it was exposed. Consider that official US government nutrition advice still reflects obviously distorted, politically motivated research from the early 20th Century. Consider that the MLM company Amway managed to bribe Harvard to get the right introductions to Chinese regulators. Scams can and do capture the official narrative and prosecute whistleblowers.
Consider that pretty much by definition we’re not aware of the most successful scams.
Related: The Scams Are Winning
[Note that I’m shifting the conversation some. The grandparent was about things like GiveDirectly, and this is mostly talking about large, rich companies like Theranos.]
One could look at this evidence and think:
Wow. These fraudulent endeavors ran for a really long time. And the fact that they got caught means that they are probabilistically not the best-executed scams. This stuff must be happening all around us!
Or a person might look at this evidence and think:
So it seems that scams are really quite rare: there are only a dozen or so scandals like this every decade. And they collapsed in the end. This doesn’t seem like a big part of the world.
Because this is a situation involving hidden evidence, I’m not really sure how to distinguish between those worlds, except for something like a randomized audit: 0.001% of companies in the economy are randomly chosen for a detailed investigation, regardless of any allegations.
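A sketch of what that audit buys you (toy numbers): because the sample is uniform rather than allegation-driven, hidden scams can’t select themselves out of it.

```python
# Toy version of the randomized audit. Scams are invisible to ordinary
# discovery here, but a uniform random sample still estimates their rate.
import random

random.seed(0)
n_firms = 100_000
secretly_scams = set(random.sample(range(n_firms), 2_000))  # true rate 2%, hidden

k = 500                                      # audit budget
audited = random.sample(range(n_firms), k)   # chosen at random, not by allegation
rate = sum(f in secretly_scams for f in audited) / k
print(f"estimated scam rate: {rate:.1%} (true rate: 2.0%)")
# Rule of three: had such an audit found zero scams, a rough 95% upper
# bound on the true rate would be 3/k = 0.6%.
```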
I would expect that we live in something closer to the second world, if for no other reason than that this world looks really rich, and that wealth has to be created by something other than outright scams (which is not to say that everyone isn’t also dabbling in misinformation).
I would be shocked if more than one of the S&P 500 companies was a scam on the level of Theranos. Does your world model predict that some of them are?
Coca-Cola produces something about as worthless as Theranos machines, substituting the experience of a thing for the thing itself, & is pretty blatant about it. The scams that “win” gerrymander our concept-boundaries to make it hard to see. Likewise Pepsi. JPMorgan Chase & Bank of America, in different ways, are scams structurally similar to Bernie Madoff but with a legitimate state subsidy to bail them out when they blow up. This is not an exhaustive list, just the first 4 that jumped out at me. Pharma is also mostly a scam these days; nearly all of the extant drugs that matter are already off-patent.
Also Facebook, but “scam” is less obviously the right category.
Somewhat confused by the Coca-Cola example. I don’t buy Coke very often, but it seems usually worth it to me when I do buy it (in small amounts, since I do think it tastes pretty good). Is the claim that they are not providing any value some kind of assumption about my coherent extrapolated volition?
It was originally marketed as a health tonic, but its apparent curative properties were due to the powerful stimulant and analgesic cocaine, not any health-enhancing ingredients. Later the cocaine was taken out (but the “Coca” in the name retained), so now it fools the subconscious into thinking it’s healthful with—on different timescales—mass media advertising, caffeine, and refined sugar.
It’s less overtly a scam now, in large part because it has the endowment necessary to manipulate impressions more subtly at scale.
I mean, I agree that Coca-Cola engages in marketing practices that try to fabricate associations that are not particularly truth-oriented, but that’s very different from the thing with Theranos.
I model Coca-Cola mostly as damaging for my health, and model its short-term positive performance effects to be basically fully mediated via caffeine, but I still think it’s providing me value above and beyond those benefits, and outweighing the costs in certain situations.
Theranos seems highly disanalogous, since I think almost no one who knew the actual extent of Theranos’ capabilities, and had accurate beliefs about its technologies, would give money to them. I have pretty confident bounds on the effects of Coca-Cola, and still decide to sometimes give them my money, and I would be really highly surprised if there turned out to be a fact about Coke that its internal executives are aware of (even subconsciously) that would drastically change that assessment for me; it doesn’t seem like that’s what you are arguing for.
Presumably you believe that point 2 holds, not just because of the GDP example, but because you’ve seen many, many examples (like health care, which you mention above). Or maybe because you have an analytical argument that the sort of thing that happens with GDP has to generalize to other credit allocation systems?
Both—it would be worrying to have an analytic argument but not notice lots of examples, and it would require much more investigation (and skepticism) if it were happening all the time for no apparent reason.
I tried to gesture at the gestalt of the argument in The Humility Argument for Honesty. Basically, all conflict between intelligent agents contains a large information component, so if we’re fractally at war with each other, we should expect most info channels that aren’t immediately life-support-critical to turn into disinformation, and we should expect this process to accelerate over time.
For examples, important search terms are “preference falsification” and “Gell-Mann amnesia”.
I don’t think I disagree with you on GiveDirectly, except that I suspect you aren’t tracking some important ways your trust chain is likely to make correlated errors along the lines of assuming official statistics are correct. Quick check: what’s your 90% confidence interval for global population, after Googling the official number, which is around 7.7 billion?
except that I suspect you aren’t tracking some important ways your trust chain is likely to make correlated errors along the lines of assuming official statistics are correct.
Interesting.
Quick check: what’s your 90% confidence interval for global population, after Googling the official number, which is around 7.7 billion?
I don’t know, certainly not off by more than a half billion in either direction? I don’t know how hard it is to estimate the number of people on earth. It doesn’t seem like there’s much incentive to mess with the numbers here.
It doesn’t seem like there’s much incentive to mess with the numbers here.
Guessing at potential confounders—there may be incentives for individual countries (or cities) to inflate their numbers (to seem more important) – or deflate their numbers, to avoid taxes.
I basically already thought that lots of jobs are bullshit, but I might skim or listen to David Graeber’s book to get more data.
It’s not really about how many jobs are bullshit, so much as what it means to do a bullshit job. On Graeber’s model, bullshit jobs are mostly about propping up the story that bullshit jobs are necessary for production. Moral Mazes might help clarify the mechanism, and what I mean about gangs—a lot of white-collar work involves a kind of participatory business theater, to prop up the ego claims of one’s patron.
The more we think the white-collar world works this way, the more skeptical we should be of the literal truth of claims to be “working on” some problem or other using conventional structures.
My intuitive answer to the question “What is a gang?”:
A gang is an organization of thugs that claims resources, like territory or protection money, via force or the threat of force.
Is that close to how you are using the term? What’s the important/relevant feature of a “gang”, when you say “CEA’s advice to build career capital just means join a powerful gang”?
Do you mean something like the following? (This is a probably incorrect paraphrase, not a quote)
A gang extracts resources from their victims by requiring they pay tribute or “protection money”. Ostensibly, this involves the victim (perhaps a small business owner) paying the gang for a service, protection from other gangs. But in actuality, this tribute represents extortion: all parties involved understand that the gang is making a threat, “pay up, or we’ll attack you.”
Most white collar workers are executing a similar maneuver, except that instead of using force, they are corrupting the victim’s ability to make sense of the situation. The management consulting firm is implicitly making the claim, “You need us. You can’t make good decisions without us” to some client. While in actuality, the consultancy creates some very official looking documents that have almost no content.
Or, in the same vein, there is an ecosystem of people around a philanthropist, all of whom are following their incentives to validate the philanthropist’s ego, and convince him that / appear as if they’re succeeding at his charitable goals.
So-called “career capital” amounts to having more prestige, or otherwise being better at convincing people, and therefore being able to extort larger amounts.
Am I on the right track at all?
Or is it more direct than that?
Most so called “value creation” is actually adversarial extraction of value from others, things like programmers optimizing a social media feed to keep people on their platform for longer, or ad agencies developing advertisements that cause people to buy products against their best interests.
Since most of the economy is a 0-sum game like this, any “career capital” must cash out in terms of being better at this exploitative process, or providing value to people / entities that do the exploiting (which is the same thing, but a degree or a few degrees removed).
Most white collar workers are executing a similar maneuver, except that instead of using force, they are corrupting the victim’s ability to make sense of the situation.
I think it’s actually a combination of this, and actual coordination to freeze out marginal gangs or things that aren’t gangs, from access to the system. Venture capitalists, for example, will tend to fund people who feel like members of the right gang, use the right signifiers in the right ways, went to the right schools, etc. Everyone I’ve talked with about their experience pitching startups has reported that making judgments on the merits is at best highly noncentral behavior.
If enough of the economy is cartelized, and the cartels are taxing noncartels indirectly via the state, then it doesn’t much matter whether the cartels apply force directly, though sometimes they still do.
So-called “career capital” amounts to having more prestige, or otherwise being better at convincing people, and therefore being able to extort larger amounts.
It basically involves sending or learning how to send a costly signal of membership in a prestigious gang, including some mixture of job history, acculturation, and integrating socially into a network.
If I replaced the word “gang” here with the word “ingroup” or “club” or “class”, would that seem just as good?
In these sentences in particular...
Venture capitalists, for example, will tend to fund people who feel like members of the right gang, use the right signifiers in the right ways, went to the right schools, etc.
and
It basically involves sending or learning how to send a costly signal of membership in a prestigious gang, including some mixture of job history, acculturation, and integrating socially into a network.
...I’m tempted to replace the word “gang” with the word “ingroup”.
My guess is that you would say, “An ingroup that coordinates to exclude / freeze out non-ingroup-members from a market is a gang. Let’s not mince words.”
Maybe more specifically an ingroup that takes over a potentially real, profitable social niche, squeezes out everyone else, and uses the niche’s leverage to maximize rent extraction, is a gang.
While I’m not sure I get it either, I think Benquo’s frame has a high-level disagreement with the sort of question that utilitarianism asks in the first place (as well as the sort of questions that many non-utilitarian variants of EA are asking). Or rather, objects to the frame in which the question is often asked.
My attempt to summarize the objection (curious how close this lands for Benquo) is:
“Much of the time, people have internalized moral systems not as something they get to reason about and have agency over, but as something imposed from outside, that they need to submit to. This is a fundamentally unhealthy way to relate to morality.
A person in a bad relationship is further away from a healthy relationship, than a single person, because first the person has to break up with their spouse, which is traumatic and exhausting. A person with a flawed moral foundation trying to figure out how to do good is further away from figuring out how to do good than a person who is just trying to make a generally good life for themselves.
This is important:
a) because if you try to impose your morality on people who are “just making a good life for themselves”, you are continuing to build societal momentum in a direction that alienates people from their own agency and wellbeing.
b) “just making a good life for themselves” is, in fact, one of the core goods one can do, and in a just world it’d be what most people were doing.”
I think There Is a War is one of the earlier Benquo pieces exploring this (or: probably there are earlier-still ones, but it’s the one I happened to re-read recently). A more recent comment is his objection to Habryka’s take on Integrity (link to comment deep in the conversation that gets to the point, but might require reading the thread for context).
My previous attempt to pass his ITT may also provide some context.
It seems to me that if attending to the ordinary business of your life, including career and hobbies, amounts to doing nothing, there’s something very deeply wrong happening, and people would do well to attend to that problem first. On the other hand, doing nothing is preferable to doing harm, and it’s entirely possible that many people are actually causing harm, e.g. by generating misinformation, and it would be better if they just stopped, even if they can’t figure out how to do whatever they were pretending to do.
I certainly don’t think that someone donating their surplus to GiveDirectly, or living more modestly in order to share more with others, is doing a wrong thing. It’s admirable to want to share one’s wealth with those who have less.
I’m tempted to answer this statement by saying that something very deeply wrong is clearly happening, e.g., there’s not nearly enough effort in the world to prevent coordination failures that could destroy most of the potential value of the universe, and attending to that problem would involve doing something besides or in addition to attending to the ordinary business of life. I feel like this is probably missing your point though. Do you want to spell out what you mean more, e.g., is there some other “something very deeply wrong happening” you have in mind, and if so what do you think people should do about it?
If people who can pay their own rent are actually doing nothing by default, that implies that our society’s credit-allocation system is deeply broken. If so, then we can’t reasonably hope to get right answers by applying simplified economic models that assume credit-allocation is approximately right, the way I see EAs doing, until we have a solid theoretical understanding of what kind of world we actually live in.
Here’s a simple example: Robin Hanson’s written a lot about how it’s not clear that health care is beneficial on the margin. This is basically unsurprising if you think there are a lot of bullshit jobs. But 80,000 Hours’s medical career advice assumes that the system basically knows what it’s doing and that health care delivers health on the margin—the only question is how much.
It seems to me that if an intellectual community isn’t resolving these kind of fundamental confusions (and at least one side has to be deeply confused here, or at least badly misinformed), then it should expect to be very deeply confused about philanthropy. Not just in the sense of “what is the optimal strategy,” but in the sense of “what does giving away money even do.”
[I wrote the 80k medical careers page]
I don’t see there as being a ‘fundamental confusion’ here, and not even that much of a fundamental disagreement.
When I crunched the numbers on ‘how much good do doctors do’ it was meant to provide a rough handle on a plausible upper bound: even if we beg the question against critics of medicine (of which there are many), and even if we presume any observational marginal response is purely causal (and purely mediated by doctors), the numbers aren’t (in EA terms) that exciting in terms of direct impact.
In talks, I generally use the upper 95% confidence bound or central estimate of the doctor coefficient as a rough steer (it isn’t a significant predictor, and there’s reasonable probability mass on the impact being negative): although I suspect there will be generally unaccounted confounders attenuating ‘true’ effect rather than colliders masking it, these sort of ecological studies are sufficiently insensitive to either to be no more than indications—alongside the qualitative factors—that the ‘best (naive) case’ for direct impact as a doctor isn’t promising.
There’s little that turns on which side of zero our best guess falls, so long as we be confident it is a long way down from the best candidates: on the scale of intervention effectiveness, there’s not that much absolute distance between estimates (I suspect) Hanson or I would offer. There might not be much disagreement even in coarse qualitative terms: Hanson’s work here—I think—focuses on the US, and US health outcomes are a sufficiently pathological outlier in the world I’m also unsure whether marginal US medical effort is beneficial; I’m not sure Hanson has staked out a view on whether he’s similarly uncertain about positive marginal impact in non-US countries, so he might agree with my view it is (modestly) net-positive, despite its dysfunction (neither I nor what I wrote assumes the system ‘basically knows what it’s doing’ in the common-sense meaning).
If Hanson has staked out this broader view, then I do disagree with it, but I don’t think this disagreement would indicate at least one of us has to be ‘deeply confused’ (this looks like a pretty crisp disagreement to me) nor ‘badly misinformed’ (I don’t think there are key considerations one-or-other of us is ignorant of which explains why one of us errs to sceptical or cautiously optimistic). My impressions are also less sympathetic to ‘signalling accounts’ of healthcare than his (cf.) - but again, my view isn’t ‘This is total garbage’, and I doubt he’s monomaniacally hedgehog-y about the signalling account. (Both of us have also argued for attenuating our individual impressions in deference to a wider consensus/outside view for all things considered judgements).
Although I think the balance of expertise leans against archly sceptical takes on medicine, I don’t foresee convincing adjudication on this point coming any time soon, nor that EA can reasonably expect to be the ones to provide this breakthrough—still less for all the potential sign-inverting crucial considerations out there. Stumbling on as best we can with our best guess seems a better approach than being paralyzed until we’re sure we’ve figured it all out.
Something that nets out to a small or no effect because large benefits and harms cancel out is very different (with different potential for impact) than something like, say, faith healing, where you can’t outperform just by killing fewer patients. A marginalist analysis that assumes that the person making the decision doesn’t know their own intentions & is just another random draw of a ball from an urn totally misses this factor.
Happily, this factor has not been missed by either my profile or 80k’s work here more generally. Among other things, we looked at:
Variance in impact between specialties and (intranational) location (1) (as well as variance in earnings for E2G reasons) (2, also, cf.)
Areas within medicine which look particularly promising (3)
Why ‘direct’ clinical impact (either between or within clinical specialties) probably has limited variance versus (e.g.) research (4), also
I also cover this in talks I have given on medical careers, as well as when offering advice to people contemplating a medical career or how to have a greater impact staying within medicine.
I still think trying to get a handle on the average case is a useful benchmark.
I just want to register disagreement.
I want to double click on “credit-allocation system.” it sounds like an important part of your model, but I don’t really know what you mean. Something like “answering the question of ‘who is responsible for the good in our world?’” Like I’m miss-allocating credit to the health sector, which is (maybe) not actually responsible for much good?
What does this have to do with if people who can pay their rent are doing something or nothing by default? Is your claim that by participating in the economy, they should be helping by default (they pay their landlord, who buys goods, which pays manufacturers, etc.) And if that isn’t having a positive impact, that must mean that society is collectively able to identify the places where value come from?
I think I don’t get it.
The exact opposite—getting paid should imply something. The naive Econ 101 view is that it implies producing something of value. “Production” is generally measured in terms of what people are willing to pay for.
If getting paid has little to do with helping others on net , then our society’s official unit of account isn’t tracking production (Talents), GDP is a measurement of the level of coercion in a society (There Is a War), the bullshit jobs hypothesis is true, we can’t take job descriptions at face value, and CEA’s advice to build career capital just means join a powerful gang.
This undermines enough of the core operating assumptions EAs seem to be using that the right thing to do in that case is try to build better models of what’s going on, not act based on what your own models imply is disinformation.
I’m trying to make sense of what you’re saying here, but bear with me, we have a large inferential distance.
Let’s see.
The Talents piece was interesting. I bet I’m still missing something, but I left a paraphrase as a comment over there.
I read the all of “There Is a War”, but I still don’t get the claim, “GDP is a measurement of the level of coercion in a society.” I’m going to keep working at it.
I basically already thought that lots of jobs are bullshit, but I might skim or listen to David Graeber’s book to get more data.
Oh. He’s the guy that wrote Debt: the First 5000 Years! (Which makes a very similar point about money as the middle parts of this post.)
Given my current understanding, I don’t get either the claim that “CEA’s advice to build career capital just means join a powerful gang” or that “This undermines enough of the core operating assumptions EAs seem to be using.”
I do agree that the main work to be done is figuring out what is actually going on in the world and how the world actually works.
I’m going to keep reading and thinking and try to get what you’re saying.
. . .
My initial response before I followed your links, so this is at least partially obviated:
1.
First of all...Yep it does seem pretty weird that we maybe live in a world where most people are paid but produce no wealth. As a case in point, my understanding is that a large fraction of programmers actually add negative value, by adding bugs to code.
It certainly seems correct to me, to stop and be like “There are millions of people up there in those skyscrapers, working in offices, and it seems like (maybe) a lot of them are producing literally no value. WTF?! How did we end up in a world like this! What is going on?”
My current best guess, the following: Some people are creating value, huge amounts of value in total (we live in a very rich society, by historical standards), but many (most?) people are doing useless work. But for employers, the overhead of identifying which work is creating value and which work isn’t is (apparently) more costly than the resources that would be saved by cutting the people that aren’t producing value.
It’s like Paul Graham says: in a company your work is averaged together with a bunch of other people’s and it is hard or impossible to assess each person’s contribution. This gives rise to a funny dynamic where a lot of the people are basically occupying a free-rider / parasitic niche: they produce ~no wealth, but they average their work with some people who do.
(To be clear: the issue is rarely people being hired to do something that in principle could be productive, but then slacking off. I would guess that is much more frequently the case that an institution, because of its own [very hard to resolve] inadequacies, hire people specifically do do useless things.)
But an important point here is that on average, people are creating value, even if most of the human-working hours are useless. The basic formula of dollars = value created, still holds, it’s just really noisy.
2.
Well, importantly, as an EA, I don’t mean “doing nothing” in the sense of providing value to no one at all. I mean “doing nothing” as “doing nothing to make a dent in the big problems.”
And this state of affairs isn’t that surprising. My default models has it that I can engage in positive-sum trades, which do provide value to others in my economic sphere, but that by default none of that surplus gets directed at people outside of that economic sphere. The most salient example might be animals, who have no ability to advocate for themselves, or trade with us. They are outside the virtuous circle of our economy, and don’t benefit from it unless people like me take specific action to save them.
The same basic argument goes for people in third world countries and far-future entities.
So, yeah, this is a problem. And your average EA thinks we should attend to it. But according to the narrative, EA is already on it.
I think it’s analytically pretty simple. GDP involves adding up all the “output” into a single metric. Output is measured based on others’ willingness to pay. The more payments are motivated by violence rather than the production of something everyone is glad to have more of, the more GDP measures expropriation rather than production. There Is A War is mostly about working out the details & how this relates to macroeconomic ideas of “stimulus,” “aggregate demand,” etc, but if that analytic argument doesn’t make sense to you, then that’s the point we should be working out.
Ok. This makes sense to me. GDP measures a mix of trades that occur due to simple mutual benefit and “trades” that occur because of extortion or manipulation.
If you look at the combined metric, and interpret it to be a measure of only the first kind of trade, you’re likely overstating how much value is being created, perhaps by a huge margin, depending on what percentage of trades are based on violence.
But I’m not really clear on why you’re talking about GDP at all. It seems like you’re taking the claim that “GDP is a bad metric for value creation”, and concluding that “interventions like give directly are a misguided.”
Rereading this thread, I come to
Is the argument something like...
1. GDP is is irreparably corrupt, as a useful measure. Folks often take it as a measure of how much value is created, but it is actually just as much a measure of how much violence is being done.
2. This is an example of a more general problem: All of our metrics for tracking value are similarly broken. Our methods of allocating credit don’t work at all.
3. Given that we don’t have robust methods for allocating credit, we can’t trust that anything good happens when give money to the actual organization “Give Directly”. For all we know that money gets squandered on activities that superficially look like helping, but are actually useless or harmful. (This is a reasonable supposition, because this is what most organizations do on priors.)
4. Given that we can’t trust giving money to Give Directly does any good, our only hope for doing good is to actually make sense of what is happening in the world so that we can construct credit allocation systems on which we can actually rely.
On a scale of 0 to 10, how close was that?
This is something like a 9 - gets the overall structure of the argument right with some important caveats:
I’d make a slightly weaker a claim for 2 - that credit-allocation methods have to be presumed broken until established otherwise, and no adequate audit has entered common knowledge.
An important part of the reason for 3 is that, the larger the share of “knowledge work” that we think is mostly about creating disinformation, the more one should distrust any official representations one hasn’t personally checked, when there’s any profit or social incentive to make up such stories. Based on my sense of the character of the people I met while working at GiveWell, and the kind of scrutiny they said they applied to charities, I’d personally be surprised if GiveDirectly didn’t actually exist, or simply pocketed the money. But it’s not at all obvious to me that people without my privileged knowledge should be sure of that.
Ok. Great.
That does not seem obvious to me. It certainly does not seem to follow from merely the fact the GDP is not a good measure of national welfare. (In large part, because my impression is that economists say all the time that GDP is not a good measure of national welfare.)
Presumably you believe that point 2 holds, not just because of the GDP example, but because you’ve seen many, many examples (like health care, which you mention above). Or maybe because you have an analytical argument that the sort of thing that happens with GDP has to generalize to other credit allocation systems?
Is that right? Can you say more about why you expect this to be a general problem?
. . .
I have a much higher credence that give Directly Exists and is doing basically what it says it is doing than you do.
If I do a stack trace on why I think that...
I have a background expectation that the most blatant kinds of fraudulence will be caught. I live in a society that has laws, including laws about what sorts of things non-profits are allowed to do, and not do, with money. If they were lying about every having given any money to anyone in Africa, I’m confident that someone would notice that, and blow the whistle, and the perpetrators would be in jail. (A better hidden, but consequently less extreme incidence of embezzlement is much more plausible, though I would still expect it to be caught eventually.)
They’re sending some somewhat costly-to-fake signals of actually trying to help. For instance, I heard on a blog once, that they were doing an RCT, to see if cash transfers actually improve people’s lives. (I think. I may just be wrong about the simple facts, here.) Most charities don’t do anything like that, and most of the world doesn’t fault them for it. Plus it sounds like a hassle. The only reasons why you would organize an RCT, are 1) you are actually trying to figure out if your intervention works 2) you have a very niche marketing strategy that involves sending costly signals of epistemic virtue, to hoodwink people like me into thinking “Yay Give Directly”, or 3) some combination of 1 and 2, whereby you’re actually interested in the answer, and also part of your motivation is knowing how much it will impress the EAs.
I find it implausible that they are doing strictly 2, because I don’t think the idea would occur to anyone who wasn’t genuinely curious. 3 seems likely.
Trust-chains: They are endorsed by people who are respected by people who’s epistemics I trust. GiveWell endorsed them. I personally have not read GiveWell’s evaluations in much depth, but I know that many people around me including, for instance Carl Shulman, have engaged with them extensively. Not only does everyone around me have oodles of respect for Carl, but I can personally verify (with a small sample size of interactions), that his thinking is extremely careful and rigorous. If Carl thought that GiveWell’s research was generally low quality, I would expect this to be a known, oft-mentioned thing (and I would expect his picture to not be on the OpenPhil website). Carl, is of course, only an example. There are other people around who’s epistemics I trust, who find GiveWell’s research to be good enough to be worth talking about. (Or at least old school GiveWell. I do have a sense that the magic has faded in recent years, as usually happens to institutions.)
I happen to know some of these people personally, but I don’t think that’s a Crux. Several years ago, I was a smart, but inexperienced college student. I came across LessWrong, and correctly identified that the people of that community had better epistemology than me (plus I was impressed with this Eliezer-guy who was apparently making progress on these philosophical problems, in sort of the mode that I had tried to make progress, but he was way ahead of me, and way more skilled.) On LessWrong, they’re talking a lot about GiveWell, and GiveWell recommended charities. I think it’s pretty reasonable to assume that the analysis going into choosing those charities is high quality. Maybe not perfect, but much better than I should be able to expect to do myself (as a college students).
It seems to me that I’m pretty correct in thinking that Give Directly does what it says it does.
You disagree though? Can you point at what I’m getting wrong?
My current understanding of your view: You think that institutional dysfunction and optimized misinformation is so common, that the evidence I note above is not sufficient to overwhelm a the prior, and I should assume that Give Directly is doing approximately nothing of value (and maybe causing harm), until I get much stronger evidence otherwise. (And that evidence should be of the form that I can check with my own eyes and my own models?)
Consider how long Theranos operated, its prestigious board of directors, and the fact that it managed to make a major sale to Walgreens before blowing up. Consider how prominent Three Cups of Tea was (promoted by a New York Times columnist), for how long, before it was exposed. Consider that official US government nutrition advice still reflects obviously distorted, politically motivated research from the early 20th Century. Consider that the MLM company Amway managed to bribe Harvard to get the right introductions to Chinese regulators. Scams can and do capture the official narrative and prosecute whistleblowers.
Consider that pretty much by definition we’re not aware of the most successful scams.
Related: The Scams Are Winning
[Note that I’m shifting the conversation somewhat. The grandparent was about things like GiveDirectly, and this is mostly about large, rich companies like Theranos.]
One could look at this evidence and think: “Successful scams routinely capture the official narrative, and the ones that blow up are just the visible tip; undetected scams must be everywhere.”
Or a person might look at this evidence and think: “These are rare, memorable exceptions that eventually got caught; most prominent institutions basically do what they claim.”
Because this is a situation involving hidden evidence, I’m not really sure how to distinguish between those worlds, except for something like a randomized audit: 0.001% of companies in the economy are randomly chosen for a detailed investigation, regardless of any allegations.
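To make that concrete, here’s a minimal sketch, in Python, of what uniform random selection for such an audit might look like. The firm count, the function name select_for_audit, and the seed are illustrative assumptions of mine, not part of anyone’s actual proposal:

```python
import random

# Minimal sketch of the randomized-audit idea: pick a tiny uniform random
# sample of companies for detailed investigation, independent of any
# allegations, so the audit results form an unbiased picture of the economy.
def select_for_audit(companies, rate=0.00001, seed=None):
    """Return a uniform random subset of `companies` at `rate` (0.001% here)."""
    rng = random.Random(seed)
    return [c for c in companies if rng.random() < rate]

# Illustrative numbers: ~6 million firms at a 0.001% rate yields ~60 audits.
firms = [f"firm-{i}" for i in range(6_000_000)]
print(len(select_for_audit(firms, seed=42)))  # roughly 60 in expectation
```

The value of the uniform draw is that, unlike allegation-driven investigations, its hit rate gives an unbiased estimate of the base rate of scams.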
I would expect that we live in something closer to the second world, if for no other reason than that this world looks really rich, and that wealth has to be created by something other than outright scams (which is not to say that everyone isn’t also dabbling in misinformation).
I would be shocked if more than one of the S&P 500 companies were a scam on the level of Theranos. Does your world model predict that some of them are?
Coca-Cola produces something about as worthless as Theranos machines, substituting the experience of a thing for the thing itself, and is pretty blatant about it. The scams that “win” gerrymander our concept-boundaries to make it hard to see. Likewise Pepsi. JPMorgan Chase and Bank of America, in different ways, are scams structurally similar to Bernie Madoff’s, but with a legitimate state subsidy to bail them out when they blow up. This is not an exhaustive list, just the first four that jumped out at me. Pharma is also mostly a scam these days; nearly all of the extant drugs that matter are already off-patent.
Also Facebook, but “scam” is less obviously the right category.
I’m somewhat confused by the Coca-Cola example. I don’t buy Coke very often, but it usually seems worth it to me when I do buy it (in small amounts, since I do think it tastes pretty good). Is the claim that they are not providing any value some kind of assumption about my coherent extrapolated volition?
It was originally marketed as a health tonic, but its apparent curative properties were due to the powerful stimulant and analgesic cocaine, not any health-enhancing ingredients. Later the cocaine was taken out (but the “Coca” in the name retained), so now it fools the subconscious into thinking it’s healthful with—on different timescales—mass media advertising, caffeine, and refined sugar.
It’s less overtly a scam now, in large part because it has the endowment necessary to manipulate impressions more subtly at scale.
I mean, I agree that Coca-Cola engages in marketing practices that try to fabricate associations that are not particularly truth-oriented, but that’s very different from the thing with Theranos.
I model Coca-Cola mostly as damaging to my health, and model its short-term positive performance effects as basically fully mediated via caffeine, but I still think it’s providing me value above and beyond those benefits, and outweighing the costs in certain situations.
Theranos seems highly disanalogous, since I think almost no one who knew the actual extent of Theranos’s capabilities, and had accurate beliefs about its technology, would have given money to them. I have pretty confident bounds on the effects of Coca-Cola, and still decide to sometimes give them my money, and would be highly surprised if there turned out to be a fact about Coke that its internal executives are aware of (even subconsciously) that would drastically change that assessment for me; it doesn’t seem like that’s what you are arguing for.
Both—it would be worrying to have an analytic argument but not notice lots of examples, and it would require much more investigation (and skepticism) if it were happening all the time for no apparent reason.
I tried to gesture at the gestalt of the argument in The Humility Argument for Honesty. Basically, all conflict between intelligent agents contains a large information component, so if we’re fractally at war with each other, we should expect most info channels that aren’t immediately life-support-critical to turn into disinformation, and we should expect this process to accelerate over time.
For examples, important search terms are “preference falsification” and “Gell-Mann amnesia”.
I don’t think I disagree with you on GiveDirectly, except that I suspect you aren’t tracking some important ways your trust chain is likely to make correlated errors along the lines of assuming official statistics are correct. Quick check: what’s your 90% confidence interval for global population, after Googling the official number, which is around 7.7 billion?
Interesting.
I don’t know; certainly not off by more than half a billion in either direction? I don’t know how hard it is to estimate the number of people on Earth. It doesn’t seem like there’s much incentive to mess with the numbers here.
Guessing at potential confounders: there may be incentives for individual countries (or cities) to inflate their numbers (to seem more important), or to deflate their numbers (to avoid taxes).
It’s not really about how many jobs are bullshit, so much as what it means to do a bullshit job. On Graeber’s model, bullshit jobs are mostly about propping up the story that bullshit jobs are necessary for production. Moral Mazes might help clarify the mechanism, and what I mean about gangs—a lot of white-collar work involves a kind of participatory business theater, to prop up the ego claims of one’s patron.
The more we think the white-collar world works this way, the more skeptical we should be of the literal truth of claims to be “working on” some problem or other using conventional structures.
My intuitive answer to the question “What is a gang?”:
A gang is an organization of thugs that claims resources, like territory or protection money, via force or the threat of force.
Is that close to how you are using the term? What’s the important/relevant feature of a “gang”, when you say “CEA’s advice to build career capital just means join a powerful gang”?
Do you mean something like the following? (This is a probably incorrect paraphrase, not a quote)
Am I on the right track at all?
Or is it more direct than that?
Is any of that right?
Overall your wording seems pretty close.
I think it’s a combination of this and actual coordination to freeze marginal gangs, or things that aren’t gangs, out of access to the system. Venture capitalists, for example, will tend to fund people who feel like members of the right gang, use the right signifiers in the right ways, went to the right schools, etc. Everyone I’ve talked with about their experience pitching startups has reported that making judgments on the merits is at best highly noncentral behavior.
If enough of the economy is cartelized, and the cartels are taxing noncartels indirectly via the state, then it doesn’t much matter whether the cartels apply force directly, though sometimes they still do.
It basically involves sending, or learning how to send, a costly signal of membership in a prestigious gang, including some mixture of job history, acculturation, and social integration into a network.
If I replaced the word “gang” here with the word “ingroup” or “club” or “class”, does that seem just as good?
In these sentences in particular...
and
...I’m tempted to replace the word “gang” with the word “ingroup”.
My guess is that you would say, “An ingroup that coordinates to exclude / freeze out non-ingroup-members from a market is a gang. Let’s not mince words.”
Maybe more specifically an ingroup that takes over a potentially real, profitable social niche, squeezes out everyone else, and uses the niche’s leverage to maximize rent extraction, is a gang.
While I’m not sure I get it either, I think Benquo’s frame involves a high-level disagreement with the sort of question that utilitarianism asks in the first place (as well as the sort of questions that many non-utilitarian variants of EA are asking). Or rather, it objects to the frame in which the question is often asked.
My attempt to summarize the objection (curious how close this lands for Benquo) is:
“Much of the time, people have internalized moral systems not as something they get to reason about and have agency over, but as something imposed from outside, that they need to submit to. This is a fundamentally unhealthy way to relate to morality.
A person in a bad relationship is further from a healthy relationship than a single person is, because first they have to break up with their spouse, which is traumatic and exhausting. A person with a flawed moral foundation trying to figure out how to do good is further from figuring out how to do good than a person who is just trying to make a generally good life for themselves.
This is important:
a) because if you try to impose your morality on people who are “just making a good life for themselves”, you are continuing to build societal momentum in a direction that alienates people from their own agency and wellbeing, and
b) “just making a good life for themselves” is, in fact, one of the core goods one can do, and in a just world it’d be what most people were doing.”
I think There Is A War is one of the earlier Benquo pieces exploring this (probably there are earlier ones still, but it’s the one I happened to re-read recently). A more recent comment is his objection to Habryka’s take on Integrity (the link goes to a comment deep in the conversation that gets to the point, but it might require reading the thread for context).
My previous attempt to pass his ITT (Ideological Turing Test) may also provide some context.