It seems like the primary problem you point to here is that asking for a grand narrative produces pressure to inflate the importance of your proposed project.
Inflating the importance of your project is a fundamental pressure in any granting situation. Grantors fundamentally want projects that have a large impact per dollar.
So getting rid of a grand narrative requirement isn’t going to solve that problem.
The counterbalance is grantors looking for proposals that are epistemically careful in estimating the impact of their project. The organizations you mention almost certainly have a well-tuned bullshit detector as part of their review process. This disincentivizes trying to fool the org, and therefore trying to fool yourself. It’s not perfect.
The other factor here is that if an organization is giving grants for projects that “[improve] humanity’s long term prospects for survival and flourishing...” then not all projects are eligible for that funding. Which sucks for those projects.
Your project of measuring vegans’ biomarkers could qualify. A large portion of people doing survival-odds-enhancing projects are rationalists; a significant fraction of the rationalist community is vegan; if vegans aren’t healthy, they’ll have less energy and intelligence; that marginally affects their ability to do all of the other potentially highly impactful projects. It’s a small but real change in our odds of survival and flourishing. Estimating those odds accurately and honestly could either win you a grant, or convince you that this project isn’t actually in the category of shifting our odds of a good future, and maybe you should do something else or apply elsewhere for funding.
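(To make the shape of that estimate concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is an invented placeholder rather than a claim about the actual community; the point is only the structure of the chain, and the real work is filling in honestly researched values.)

```python
# A minimal Fermi-estimate sketch of the impact chain above.
# Every number is an invented placeholder, included only to show the
# structure of the estimate; none is a claim about the real community.

researchers_on_xrisk_projects = 1_000   # placeholder headcount
fraction_vegan = 0.15                   # placeholder share of that group
fraction_with_deficiency = 0.30         # placeholder rate of untreated deficiency
productivity_loss_if_deficient = 0.10   # placeholder fractional loss of output
fraction_reachable_by_project = 0.50    # placeholder reach of testing/outreach

# Expected productivity recovered, in "researcher-equivalents" per year.
recovered = (
    researchers_on_xrisk_projects
    * fraction_vegan
    * fraction_with_deficiency
    * productivity_loss_if_deficient
    * fraction_reachable_by_project
)

print(f"Recovered output ~ {recovered:.1f} researcher-equivalents")
```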
Furthering this line of argument, I can think of specific projects that do make it easy to supply a convincing grand narrative about their impact on the world, including technical AI safety research, wastewater monitoring for potential pandemics, institutions working on improved epistemics, and work to enhance human intelligence and decision-making. That a project lends itself to a grand narrative does, in fact, suggest to me that it’s more likely to be able to achieve impact on the world scale. And many of these projects seem concrete enough to me that it’s easy to say whether or not the grand narrative seems reasonable.
The activity of helping vegans get tested for nutritional deficiencies doesn’t fit a grand narrative for world-scale impact. But if the idea was to work on making concierge medicine especially available to ultra-high performers in the field of x-risk in order to ensure that the Paul Christianos of the world face minimal health impediments to their research, I think that would lend itself to a grand narrative that might be compelling to grantmakers. It also suggests a wider and different range of options for how one might pivot if nutritional testing for vegans wasn’t feeling like it was achieving enough impact.
I also think there’s an analogy to be drawn here between startups and those applying for grants. One of the most common reasons startups fail is that they make a product people don’t want to buy, and never pivot. One of the things venture capital and startup advisors can do is counsel startups on how to make a product the market wants. It seems like there’s an opportunity here to help energetic, self-starting, smart people connect their professional interests with the kinds of world-impact grand narratives that grantmakers find compelling. EA and 80,000 Hours do this to some extent, but there’s often a sense in which they’re trying to recruit people into pre-established molds or simply headhunt people who have it all figured out already. Helping people who already have compelling but small-scale projects think bigger and adapt their projects into things that might actually have world-scale potential seems useful and perhaps under-supplied.
The vegan nutrition project led to a lot of high-up EAs getting nutrition tested and some supplement changes, primarily due to seeking the tests out themselves after the iron post. If I was doing the project again, I’d prioritize that post and similar over offering testing. But I didn’t know when I started that iron deficiencies would be the standout issue, and even if I had I would have felt uncomfortable listing “impact by motivating others” as a plan. What if I wrote something and nobody cared? I did hope to create a virtuous cycle via word of mouth on the benefits of test-and-supplement, which has mostly not happened yet.
You can argue it was a flaw in me that rendered me incapable of imagining that outcome and putting it on a grant. More recently I wrote a grant that had “motivate medical change via informative blog posts” at its core, so clearly I don’t think doing so is inherently immoral. But the flaw that kept me from predicting that path before I’d actually done it is connected to some of my virtues, and specifically the virtues that make me good at the quantified lit review work.
Or my community organizer friend. There are advantages to organizers who care deeply about x-risk and see organizing as a path to doing so. But there are serious disadvantages as well.
I think my model might be [good models + high impact vision grounded in that model] > [good models alone + modest goals] > [mediocre model + grand vision], where good model means both reasonably accurate and continually improving based on feedback loops inherent in the project, with the latter probably being more important. And I think that if you reward grand vision too much, you both select for and cause worse models with less self-correction.
Of the items you listed
technical AI safety research, wastewater monitoring for potential pandemics, institutions working on improved epistemics, and work to enhance human intelligence and decision-making
I would only count wastewater monitoring as a project. Much technical alignment research counts as a project, but “do technical alignment research” is a stupid plan; you need to provide specifics. The other items on the list are goals. They’re good goals, but they’re not plans and they’re not projects, and I absolutely would value a solid plan with a modest goal over a vague plan to solve any of these.
Thanks for the reply, Elizabeth. I agree with pretty much everything you say here. I particularly like this part:
[good models + high impact vision grounded in that model] > [good models alone + modest goals] > [mediocre model + grand vision]
I think this is a foundational starting point for thinking about the process of spinning up any impactful project. It also helps me see the wisdom in what you are arguing here more clearly, for two reasons:
It shows how selecting for grand vision can result in selecting heavily for mediocre models, particularly if there is a) no equally stringent selection for good models and b) as we might expect, far more grand visions + mediocre models than good models + high-impact visions grounded in those models. Since I expect most people will agree that (b) is true in the general population, the crux of any disagreement may depend on how successfully good models are already selected for by grantmakers, as well as how self-selecting the population of applicants is for having good models. I don’t have an opinion on that subject, and I think that professional grantmakers and people who have looked at a lot of grant-winners and grant-losers would be the relevant experts. I hope they do deign to comment on your post.
When people shift from pursuing a grand narrative to pursuing a good model, this can come with the dissolution of the formerly motivating grand narrative, along with envy or a sense of discouragement when comparing oneself with those who have achieved a high-impact vision grounded in a good model. When we read statements such as Hamming’s, which ask why you’re not working on the obviously most important problem in your field, we can feel discouraged or as if we are on the wrong track if our instant answer is anything other than “but I am working on the most important problem!” This model offers an alternative point of view, which is that the first step in getting to your Hamming problem is to stop pursuing a groundless grand vision and to pursue a good model, even if it’s not obvious what the ultimate benefit is. Figuring out how to give that journey some structure so that it doesn’t become an exercise in self-justification or a recipe for aimless wandering seems good, but I still think it is a step in the right direction over clinging to one’s initial grand vision. I would rather see a population of people chasing good models, sometimes aimlessly, than a population of people chasing grand visions, since I expect the latter would be even more aimless.
Some minor notes:
It does sound like discomfort with articulating the most apt high-impact vision was part of why you were reluctant to list it. I’m not sure if that was emotional discomfort or intellectual discomfort, but I would not be surprised if in general, lack of emotional confidence to identify and articulate the most apt high-impact vision when one has a good model already is a slowdown for some people. I have noticed that ChatGPT has been pretty good at helping me to articulate the high-impact vision in suitably ringing prose when I have a good model already—I used it to write the copy for a database website I created, because it was a lot harder for me to write “mission statement-esque” prose than to write software, clean the data, and build the website. The good model was much easier than articulating the high-impact vision even though I had the vision. I don’t know if that’s a “flaw” exactly—the point is just to distinguish “I have no idea what the apt high-impact vision for my extant good model is” from “I have an idea of what the high-impact vision is, but I’m not sure” from “I know what the high-impact vision is, but I’m uncomfortable being loud and proud about it or don’t know how to put it into words effectively.”
Regarding my list of putative projects, I agree with you that only the wastewater monitoring project is a project, per se. The rest of the ones I listed are more themes for projects, but I presume there are a number of concrete projects within each theme that could be listed—I am simply relatively unfamiliar with these areas and so I didn’t have a bunch of specific examples at the top of my head.
I’m glad it was so helpful, thanks for prompting me to formalize it and for providing elaborations. Both of your points feel important to me.
I’m glad GPT worked for you but I think it’s a risky maneuver and I’m scared of the world where it is the common solution to this problem. The push for grand vision doesn’t just make models worse, it hurts your ability to evaluate models as well. GPT is designed to create the appearance of a model where none exists, and I want it as far from the grantmaking process as possible. I think solutions like “ask for a 0.1-percentile vision” solve this more neatly.
I’m no longer quite sure what you were aiming for with the first paragraph in your first comment. I think projects with the goal of “improve epistemics” are very nearly guaranteed to be fake. Not quite 100%: I sent in a grant with that goal myself recently, and I have high hopes for CE’s new Research Training Program. But a stunning number of things had to go right for my project to feel viable to me. For one, I’d already done most of the work and it was easy to lay out the remaining steps (although they still ballooned and I missed my deadline).
It also feels relevant that I didn’t backchain that plan. I’d had a vague goal of improving epistemics for years without figuring out anything more useful than “be the change I wanted to see in the world”. The useful action only came when I got mad about a specific object-level thing I was investigating for unrelated reasons.
PS. I realize that using my projects as examples places you in an awkward position. I officially give you my blessing to be brutal in discussing projects I bring up as examples.
I’m glad GPT worked for you but I think it’s a risky maneuver and I’m scared of the world where it is the common solution to this problem.
Yeah, I would distinguish between using GPT to generate a grand vision vs. using it to express it in a particular style. The latter is how I used it—with the project I’m referring to, the model + vision were in place because I’d already spent almost a year doing research on the topic for my MS. However, I just don’t have much experience or flair for writing that up in a way that is resonant and has that “institutional flavor,” and GPT was helpful for that.
Here’s a revision of the first paragraph you were asking about. I think that there are many grantmaking models that can work at least somewhat, but they all face tradeoffs. If you try to pick only projects with a model-grounded vision, you risk giving to empty grand narratives instead. If you try to pick only grantees with a good model, then you risk creating a stultified process in which grantees all feel pressure to plan everything out in advance to an unrealistic degree. If you regrant, just funding people who seem like they know what they’re doing, then you make grants susceptible to privilege and corruption.
I think all these are risks, and probably at the end of the day, the ideal amount of grift, frustration, and privilege/corruption in grantmaking is not zero (an idea I take from Lying For Money, which says the same about fraud). And I believe this because I also think that grantmakers can have reasonable success in any of these approaches—vision-based, model-based, and person-based. There are also some projects that are based on a model that’s legible to just about anybody, where the person carrying them out is credible, and where it clearly can operate on the world scale. I would characterize Kevin Esvelt’s wastewater monitoring project that way. Projects like this are winners under any grantmaking paradigm, and that’s the sort of project my first paragraph in my original comment was about.
Another way I might put it is that grantmaking is in the ideal case giving money to a project that ticks all three boxes (vision, model, person), but in many cases grants are about ticking one or two boxes and crossing one’s fingers. I think it would be good to be clear about that and create a space for model-based, or model+person-based, or model+vision-based grantmaking with some clarity about what a pivot might look like if the vision, model or person didn’t pan out.
I have to disagree with you at least somewhat about projects to improve epistemics. Maybe it’s selection bias—I’m not plugged into the SF rationalist scene and it may be that there are a lot of sloppy ideas bruited about in that space, I don’t know, but I can think of a bunch of projects to improve epistemics that I have personally benefitted from greatly—LessWrong and the online rationalist community, the suite of prediction markets and competitions, a lot of information-gathering and processing software tools, and of course a great deal of scientific research that helps me think more clearly not just about technical topics but about thinking itself. I wouldn’t be at all surprised if there are a bunch of bad or insignificant projects that are things like workshops or minor software apps. I guess I just think that projects to improve epistemics don’t seem obviously more difficult than others, the vision makes sense to me, and it seems tractable to separate the wheat from the chaff with some efficacy. That might be my own naivety and lack of experience, however.
I have personally benefitted from some of your projects and ideas, particularly the idea of epistemic spot-checks, which turn out to be useful even if you do have or are in the process of earning a graduate degree in the subject. That’s not only because there’s a lot of bull out there, but also because the process of checking a true claim can greatly enrich your interpretation of it. When I read review articles, I frequently find myself reading the citations 2-3 layers deep, and even that doesn’t seem like enough in many cases, because I gain such great benefits from understanding what exactly the top-level review summary is referring to. It seems like your projects are somewhat on the borderline between academic research and a boutique report for individual or small-group decision making. I think both are useful. It’s hard to judge utility unless you yourself have a need for either the academic research or are making a decision about the same topic, so I can’t opine about the quality of the reports you have generated. I do think that my academic journey so far has made me see that there’s tremendous utility in putting together the right collection of information to inform the right decision, but it’s only possible to do that if you invest quite a bit of yourself into a particular domain and if you are in collaboration with others who are as well. So from the outside, it seems like it might be valuable to see if you can find a group of people doing work you really believe in, and then invest a lot in building up those relationships and figuring out how your research skills can be most informative. Maybe that’s what you’re already doing, I am not sure. But if I was a regranter and had money to give out at least on a model+person basis, I would happily regrant to you!
I agree with your general principles here.

I think my statement of “nearly guaranteed to be fake” was an exaggeration, or at least misleading for what you can expect after applying some basic filters and a reasonable definition of epistemics. I love QURI and Manifold, and those do fit best in the epistemics bucket, although they aren’t central examples for me, for reasons that are probably unfair to the epistemics category.
Guesstimate might be a good example project. I use Guesstimate and love it. If I put myself in the shoes of its creator writing a grant application 6 or 7 years ago, I find it really easy to write a model-based application for funding and difficult to write a vision-based statement. It’s relatively easy to spell out a model of what makes BOTECs hard and some ideas for making them easier. It’s hard to say what better BOTECs will bring in the world. I think that the ~2016 grantmaker should have accepted “look, lots of people you care about do BOTECs and I can clearly make BOTECs better”, without a more detailed vision of impact.
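(For what it’s worth, here is a rough sketch of the underlying technique Guesstimate makes easy: carrying uncertainty through a BOTEC by sampling distributions instead of multiplying point estimates. This is not Guesstimate’s actual interface, and every distribution and number below is invented purely for illustration.)

```python
# A rough illustration of the kind of calculation Guesstimate supports:
# propagating uncertainty through a BOTEC by sampling, rather than
# multiplying single point estimates. Not Guesstimate's actual API;
# every distribution and number here is invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Each input is a distribution rather than a point estimate.
users = rng.lognormal(mean=np.log(2_000), sigma=0.5, size=n)     # people doing BOTECs
hours_saved = rng.lognormal(mean=np.log(3), sigma=0.8, size=n)   # hours saved per user per year
value_per_hour = rng.normal(60, 20, size=n).clip(min=0)          # dollar value of an hour

annual_value = users * hours_saved * value_per_hour

lo, mid, hi = np.percentile(annual_value, [10, 50, 90])
print(f"Estimated annual value: ${lo:,.0f} to ${hi:,.0f} (median ${mid:,.0f})")
```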
I think it’s plausible grantmakers would accept that pitch (or that it was the pitch and they did accept it, maybe @ozziegooen can tell us?). Not every individual evaluator, but some, and as you say it’s good to have multiple people valuing different things. My complaint is that I think the existing applications don’t make it obvious that that’s an okay pitch to make. My goal is some combination of “get the forms changed to make it more obvious that this kind of pitch is okay” and “spread the knowledge that this can work even if it seems like the form wants something else”.
In terms of me personally… I think the nudges for vision have been good for me and the push/demands for vision have been bad. Without the nudges I probably am too much of a dilettante, and thinking about scope at all is good and puts me more in contact with reality. But the big rewards (in terms of money and social status) pushed me to fake vision and I think that slowed me down. I think it’s plausible that “give Elizabeth money to exude rigor and talk to people” would have been a good[1] use of a marginal x-risk dollar in 2018.[2]
During the post-scarcity days of 2022 there was something of a pattern of people offering me open-ended money, but then asking for a few examples of projects I might do, then asking for them to be more legible and the value to be immediately obvious, and then asking me to fill out forms with the vibe that I’m definitely going to do these specific things and if I don’t I’ve committed a moral fraud… So it ended up in the worst of all possible worlds, where I was being asked for a strong commitment without time to think through what I wanted to commit to. I inevitably ended up turning these down, and was starting to do so earlier and earlier in the process when the money tap was shut off. I think if I hadn’t had the presence of mind to turn these down it would have been really bad, because not only would I have been committed to a multi-month plan I’d spent a few hours on, but I would have been committed to falsely viewing that time as freeform and following my epistemics.
Honestly I think the best thing for funding me and people like me[3] might be to embrace impact certificates/retroactive grant making. It avoids the problems that stem from premature project legibilization without leaving grantmakers funding a bunch of random bullshit. That’s probably a bigger deal than wording on a form.
I have gotten marginal invites to exclusive retreats on the theory that “look, she’s not aiming very high[4] but having her here will make everyone a little more honest and a little more grounded in reality”, and I think they were happy with that decision. TBC, this was a pitch someone else made on my behalf that I didn’t hear about until later.
[1] where by good I mean “more impactful in expectation than the marginal project funded”.

[3] relevant features of this category: doing lots of small projects that don’t make sense to lump together, scrupulous about commitments to the point it’s easy to create poor outcomes, have enough runway that it doesn’t matter when I get paid and I can afford to gamble on projects.

[4] although the part where I count as “not ambitious” is a huge selection effect.
My complaint is that I think the existing applications don’t make it obvious that that’s an okay pitch to make. My goal is some combination of “get the forms changed to make it more obvious that this kind of pitch is okay” and “spread the knowledge that this can work even if it seems like the form wants something else”.
That seems like an easy win—and if the grantmaker is specifically not interested in pure model-based justifications, saying so would also be helpful so that honest model-based applicants don’t have to waste their time.
and fill out forms with the vibe that I’m definitely going to do these specific things and if I don’t I’ve committed a moral fraud
That seems like a foolish grantmaking strategy—in the startup world, most VCs seem to encourage startups to pivot, kill unpromising projects, and assume that the first product idea isn’t going to be the last one because it takes time to find a compelling benefit and product-market fit. To insist that the grantee stake their reputation not only on successful execution but also on sticking to the original project idea seems like a way to help projects fail while selecting for a mixture of immaturity and dishonesty. That doesn’t mean I think those awarded grants are immoral—my hope is that most applicants are moral people and that such a rigid grantmaking process is just making the selection process marginally worse than it otherwise might be.
Honestly I think the best thing for funding me and people like me[3] might be to embrace impact certificates/retroactive grant making. It avoids the problems that stem from premature project legibilization without leaving grantmakers funding a bunch of random bullshit. That’s probably a bigger deal than wording on a form.
Yeah, I think this is an interesting space. Certainly much more work to make this work than changing the wording on a form though!
Sounds like we’re pretty much in agreement at least in terms of general principles.