Two (very different) kinds of donors
This post describes a very simple and very important distinction between two kinds of donors/two kinds of donations.
I apologize if the content of this (short) post is obvious to you. Repeated experience has led me to believe that it is not obvious to many people, and can sometimes be something of an epiphany for them, so it seems worth sharing in linkable form. A disagreement that is tightly analogous to this one is currently wrecking my parents’ marriage, for instance, and they each independently found this to be a concretely useful metaphor.
There are (at least) two very different kinds of donors, and they give very different kinds of donations, and they do not always tag themselves or their donations as such. In part, this is because many people are unaware that [the other kind of donor] exists at all, and so they don’t know that they need to identify themselves as being of a particular type. Both types, in my experience, assume themselves to be the default.
The first kind of donor is donating to the mission. They attend a CFAR workshop, for instance, and enjoy themselves immensely, and believe that the experience will be valuable for others. They want [more of that], so they donate to CFAR.
Whether they say so explicitly or not, they are donating to cause [more of that]. They believe and expect that their money will be used in ways which are legibly about causing [more of that]. Thus, while they may not actually earmark their donation in any particular way, if CFAR’s books were to become public, they would expect to see expenditures like:
Venue costs
Food and catering costs
Subsidies for promising workshop attendees
Salaries for instructors and other staff
Continuing education for instructors and researchers (e.g. conference fees, program tuition, travel expenses directly related to such)
Staff retreats (for curriculum development)
I call these donors “legibles.”
The second kind of donor is donating to an agent. They attend a CFAR workshop, for instance, and enjoy themselves immensely, and believe that the experience will be valuable for others. They think to themselves “the people who caused this to happen are doing something good in the world. I want to see more stuff like this. I want to unlock these people, so that they are free to pursue their ambitions.”
Whether they say so explicitly or not, they are delegating. They believe and expect that the people they’ve given money to will be better at using that money than they would themselves (at least for those particular marginal dollars). They have no particular expectations about how their money will be spent.
I call these donors “patrons.”
Imagine that for some reason, an organization like CFAR uses donor dollars to purchase fifty rubber dildos.
(This example is deliberately chosen to be a specific kind of “outrageous.” To be clear, nothing like this ever happened with CFAR money to the best of my knowledge; this is a hypothetical and I’m using CFAR as my example org simply because I worked there (and learned about the distinction between different kinds of donors while working there).)
Legibles, if they discover this fact, will be shocked. They may very well feel personally betrayed. They may demand an investigation, and in their culture they are entirely correct to do so. If they discover multiple other purchases that are similarly wildly divorced from “the sorts of things one would reasonably expect are involved in developing and running applied rationality workshops,” they may demand their money back, or call for the firing of various executives, or launch a campaign to have CFAR dissolved.
Patrons will not be shocked. They may be curious. They will very likely be confused. But their overall response will be “I have no idea what CFAR needed fifty rubber dildos for … some kind of social experiment? … but anyway, I trust that there’s a reason, and I don’t feel the need to go chasing after it.”
They might lose this faith if they later hear the reason and it doesn’t make sense even after the fact, especially if this happens three or four times. But even then, they will tend to simply stop donating money, as opposed to feeling actively betrayed and wanting some form of restitution. They’re starting from a prior of “this money will be well-spent by virtue of the fact that the person spending it is wise and reasonable in the ways I care about.” They expect every bit as much responsibility as the first kind of donor does, but they do not expect legibility.
Legibles believe they are, in essence, making a purchase. Turning dollars into CFAR workshops, or rationality essays written, or birds-saved-from-oil-spills, or doctors-paid-to-cure-diabetes.
Patrons believe they are making an investment. They are gambling, in the same way that one gambles when one puts money into a small tech startup. They are hoping for something great, and will be only normal amounts of disappointed if what they get in return is nothing.
(Note that a given donor can be either kind on different days, or when giving dollars to different people or different projects.)
If you are seeking donations, or grants, or funding of any kind, it behooves you to know which kind you are seeking.
If you are giving donations, or grants, or funding of any kind, it’s helpful to know, and to be able to clearly state, which kind you are offering, so that e.g. the person you gave a support grant to knows whether they owe you some kind of tangible product at the end of it, or not.
A lot of misunderstandings and disappointments can be solved with clear(er) and (more) accurate expectations.
I think the fundamental point here is an interesting one but I think that this post is unfortunately marred by using a lewd example that limits the potential audience—there are people who I would not be willing to share this post with as a result.
This is not a problem unique to you in particular—I think many of Eliezer’s posts have similar issues, Three Worlds Collide being perhaps the most well-known but there are also some that suffer from being too “sneery” towards religion or similar—but I thought it might be worth pointing out regardless.
I feel like the nature of the example was important for reasons which are intuitive and not fully transparent to me.
While it is a smidge extra work, feel free to copy-paste the text into an email to someone, replacing the (exactly two) instances of [phrase] with “outdoor Jacuzzis” or similar, if that’s useful to you.
I don’t want to reduce the audience of anything important unnecessarily, but this seems like a squashing of authorial style that is too strong, to me.
If I showed your comment to someone and asked them to guess what it was about, “the word ‘dildo’ appears 2 times in the text” is not what I think they’d guess.
These do seem like clusters that exist, but the example given for why “legible” donors might feel betrayed isn’t quite landing for me.
I consider myself a mission-oriented donor, but I wouldn’t pay attention to the kinds of “inputs” an organization purchases; I’d pay attention to the “outputs” (papers, programs, giant dildo art installations, etc.). I could be typical-minding or an edge case (or fall into some other cluster you’ve thought of but haven’t listed), but I think most “legible” donors wouldn’t care about intermediate purchases/inputs either?
In my experience, most legible donors don’t care about intermediate purchases and inputs until they start to be upset, and at that point, intermediate purchases and inputs are often either the cause of, or the pretext for, becoming much more upset/confirming one’s inklings of suspicion.
In part, things seem to me to go wrong because approximately everyone knows, to at least some degree, that it doesn’t make sense to have tons and tons of looming oversight and force people to burn time and attention on justifying every little thing.
But the definition of what’s “little,” or what’s “obviously justified” vs. “potentially justifiable but in need of explicit justification,” is not common/universal, so what one person thinks is business as usual, another thinks needs explaining.
And if it’s been a year or two since anyone checked, then things very quickly go from “X needs explaining” to “a pattern of extremely irresponsible spending on all sorts of Xs, resulting in Y wasted dollars over the past year.”
Especially if the bewildered response is “but I thought you trusted my judgment.”
“Yeah, I did.”
Ok, thanks for clarifying. To me that sounds like a spectrum of “how much weirdness/lack of legibility are donors willing to tolerate w.r.t. the internals of an org”, and that might be correlated with whether a donor happens to be donating to the mission or to the agent(s).
I suspect another reason I’m struggling a bit with the idea is that although I consider myself a mission-oriented donor, I’m also having trouble coming up with an example of an intermediate input where finding out about it would cause me to be upset (immediately). Like, if MIRI purchased 50 dildos I wouldn’t blink twice because it’s not a significant expenditure relative to the organization’s size and I can think of a variety of plausible reasons that would make them a reasonable purchase. But if MIRI spent six figures on sex toys in one year I’d hope that there was an explanation, even if that was literally just “yeah these are actually inputs into research” or something like that. Maybe that still counts? It feels like the scale at which it starts to matter is also just another axis on that spectrum.
I guess it’s worth noting that the MIRI mission in particular is much less specific than the mission of e.g. an animal rights charity, or a person who has a specific project pitch? Like, “cause humanity to successfully navigate the acute risk period” is much more on the open-ended side of mission-space. For nearly any given X, it’s more plausible that it could be relevant to the MIRI mission than that it could be relevant to, say, someone who’s trying to build widgets.
Yeah, agreed, that’s definitely part of what was making it a bit ambiguous for me.
This comes up all the time on services like Kickstarter or Patreon. These sites suggest to varying degrees that you’ll get a product (in which case you’re more of a customer), but also that success is not guaranteed (in which case you’re more of a patron or investor). Sometimes one gets the sense that these sites deliberately blur the lines between these two categories because they want to have their cake and eat it, too.
A related case is donating to individual needy people (for instance, with a GoFundMe or giving cash to a person on the street). Some donors are happy with the recipients buying anything, assuming that the recipient knows their personal situation best, while other donors want the recipients to use it for a specific purpose. This is especially so for things that seem like vices. If you give a homeless person money and they use it to buy alcohol, many people would be quite upset, while others would trust that they could decide for themselves what the optimal way to use the money was.
I’m curious to what extent this distinction can be mapped to the disagreeable/agreeable spectrum. Also curious whether this can be generalized to different kinds of support in general, i.e. conditional/unconditional love. I’m also getting father/mother-archetype vibes from this.
I would expect the Thiel Foundation (central donor of both Leverage and MIRI) to be working more with the patron model and at the same time Thiel is low-agreeableness.
Ah yeah, on second thought the “father” vibe might also be one that is (disagreeably) investing in the agent, not buying their activities, and when you are buying the activity it seems that the disagreeable/agreeable dimension kind of collapses, i.e. it doesn’t matter how the recipient feels about it, they’re just an object.
How about means versus ends? More precisely, I suppose you can turn this binary distinction into a graded one by saying that while one donor might be supporting your current activities, the next one might be supporting the one-level-up goal that your current activities are pursuing, the next one might be supporting the two-levels-up goal, and so on.
In other words, different donors might have different levels of tolerance if you pivot. The more fundamentally you change your plans, the more the donor has to be on the “investing in the agent” end of the spectrum to support it.
There’s a school of thought that disagreeable people are important for making a lot of progress and that they are best at creating that progress when they are given freedom.
I think that disagreeable people are more likely to believe that paradigm, and thus to provide funds in the way that works better for disagreeable people, which is the patron model.
Seems true. Not sure what update to make from it, though, as I would expect some counterexamples even with a strong correlation.
(Also not actually sure that Thiel is low-agreeableness but I adopt it as a prior going forward since before my prior was ???).
I do think that there’s probably a correlation. I think that e.g. high-agreeableness people will tend to be more patron-ish, and low-agreeableness people will tend to be more patron-ish while they are feeling particularly agreeable.
re: father/mother archetypes, interestingly, the conflict my parents are having is over their shared finances/pooled savings; Mom thinks that it should be spent on [herself, my dad, and me and my brother] whereas Dad thinks it should be spent on [whatever either she or he thinks is a good idea] and they had both been proceeding as if the other was doing it their way. When they reviewed a decade of fairly large expenditures, this created H U G E conflict, and feelings of betrayal on both sides.
(Mom feels betrayed because Dad “spent her hard-earned money” on things outside the mission; Dad feels betrayed because Mom is looking at good, moral expenditures and calling them a violation/transgression.)
I think there’s some real distinction here in terms of donor motivation and expectation, but I’m not convinced that this cut is quite at the joint. The example given would be far enough outside of expectation that both types of donor would be surprised, and both would react to explanations fairly similarly (if it does further the ends of the charity, accept it; if it’s stupid/unlikely to work, or for non-aligned reasons, reject it and stop donating).
Categorizing donors in order to make sure you’re serving their needs is worth something. But almost all charities (I’ve been more involved with regional and local food charities than rationality ones) take great care to frame their choices in terms of mission, regardless of how they think of donors. They do tend to communicate somewhat differently for capital campaigns and other directed giving to subsets of the mission, but even then it’s usually more fine-grained than legible vs trusting.
edit: you address this a little in the fact that CFAR has a fuzzier mission than some other charities. I suspect that’s key—I don’t work with many charities that are personality-based or have significant patrons donating for “people I like who seem to do smart things” reasons. I think I’d recommend that CFAR work that way too: even if you have significant patronage income, you should strive to ensure you’re spending effectively and legibly toward outcomes. Honestly, you should do this to avoid self-deception as much as to satisfy donors.
I think part of the difference here might be that you’re thinking more in terms of the narrow(er) subset of explicit charities, whereas I’m sort of looking at:
Charities
Startups
Grants
Personal donations
Personal gifts
… as all being part of a single set, and trying to model that?
I agree that if you limit yourself to just e.g. legal nonprofits, the distinction made above is much less of an explainer.
I think it may depend on what a donor expects about the competence of the people who make decisions in an organization.
Sometimes a person wants to delegate a task to an agent but doesn’t fully trust that agent. They may expect a lack of competence, a lack of motivation, or some other problem. So they think they need to exercise control, and they think there is something they can control.
And sometimes a person expects that the agent is more competent at the task and motivated enough, and then they think they don’t need to control that agent.
And sometimes people evaluate another agent’s competence correctly, and sometimes they get it wrong.