I’m glad GPT worked for you but I think it’s a risky maneuver and I’m scared of the world where it is the common solution to this problem.
Yeah, I would distinguish between using GPT to generate a grand vision vs. using it to express it in a particular style. The latter is how I used it—with the project I’m referring to, the model + vision were in place because I’d already spent almost a year doing research on the topic for my MS. However, I just don’t have much experience or flair for writing that up in a way that is resonant and has that “institutional flavor,” and GPT was helpful for that.
Here’s a revision of the first paragraph you were asking about. I think that there are many grantmaking models that can work at least somewhat, but they all face tradeoffs. If you try to pick only projects with a model-grounded vision, you risk giving to empty grand narratives instead. If you try to pick only grantees with a good model, then you risk creating a stultified process in which grantees all feel pressure to plan everything out in advance to an unrealistic degree. If you use regranting to simply fund people who seem like they know what they’re doing, then you make grants susceptible to privilege and corruption.
I think all these are risks, and probably at the end of the day, the ideal amount of grift, frustration, and privilege/corruption in grantmaking is not zero (an idea I take from Lying For Money, which says the same about fraud). And I believe this because I also think that grantmakers can have reasonable success with any of these approaches—vision-based, model-based, and person-based. There are also some projects that are based on a model that’s legible to just about anybody, where the person carrying them out is credible, and where the project clearly can operate at the world scale. I would characterize Kevin Esvelt’s wastewater monitoring project that way. Projects like this are winners under any grantmaking paradigm, and that’s the sort of project my first paragraph in my original comment was about.
Another way I might put it is that grantmaking is in the ideal case giving money to a project that ticks all three boxes (vision, model, person), but in many cases grants are about ticking one or two boxes and crossing one’s fingers. I think it would be good to be clear about that and create a space for model-based, or model+person-based, or model+vision-based grantmaking with some clarity about what a pivot might look like if the vision, model or person didn’t pan out.
I have to disagree with you at least somewhat about projects to improve epistemics. Maybe it’s selection bias—I’m not plugged into the SF rationalist scene and it may be that there are a lot of sloppy ideas bruited about in that space, I don’t know—but I can think of a bunch of projects to improve epistemics that I have personally benefitted from greatly: LessWrong and the online rationalist community, the suite of prediction markets and competitions, a lot of information-gathering and processing software tools, and of course a great deal of scientific research that helps me think more clearly not just about technical topics but about thinking itself. I wouldn’t be at all surprised if there are a bunch of bad or insignificant projects that are things like workshops or minor software apps. I guess I just think that projects to improve epistemics don’t seem obviously more difficult than others, the vision makes sense to me, and it seems tractable to separate the wheat from the chaff with some efficacy. That might be my own naivety and lack of experience, however.
I have personally benefitted from some of your projects and ideas, particularly the idea of epistemic spot-checks, which turn out to be useful even if you have, or are in the process of earning, a graduate degree in the subject. That’s not only because there’s a lot of bull out there, but also because the process of checking a true claim can greatly enrich your interpretation of it. When I read review articles, I frequently find myself reading the citations 2-3 layers deep, and even that doesn’t seem like enough in many cases, because I gain such great benefits from understanding what exactly the top-level review summary is referring to.

It seems like your projects are somewhat on the borderline between academic research and a boutique report for individual or small-group decision making. I think both are useful. It’s hard to judge utility unless you yourself need the academic research or are making a decision about the same topic, so I can’t opine on the quality of the reports you have generated. I do think that my academic journey so far has made me see that there’s tremendous utility in putting together the right collection of information to inform the right decision, but it’s only possible to do that if you invest quite a bit of yourself into a particular domain and if you are in collaboration with others who have done the same. So from the outside, it seems like it might be valuable to see if you can find a group of people doing work you really believe in, and then invest a lot in building up those relationships and figuring out how your research skills can be most informative. Maybe that’s what you’re already doing, I’m not sure. But if I were a regranter and had money to give out at least on a model+person basis, I would happily regrant to you!
I agree with your general principles here.

I think my statement of “nearly guaranteed to be false” was an exaggeration, or at least misleading about what you can expect after applying some basic filters and a reasonable definition of epistemics. I love QURI and Manifold, and those do fit best in the epistemics bucket, although they aren’t central examples for me, for reasons that are probably unfair to the epistemics category.
Guesstimate might be a good example project. I use Guesstimate and love it. If I put myself in the shoes of its creator writing a grant application 6 or 7 years ago, I find it really easy to write a model-based application for funding and difficult to write a vision-based statement. It’s relatively easy to spell out a model of what makes BOTECs hard and some ideas for making them easier. It’s hard to say what better BOTECs will bring into the world. I think that the ~2016 grantmaker should have accepted “look, lots of people you care about do BOTECs and I can clearly make BOTECs better”, without a more detailed vision of impact.
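For concreteness, the core move Guesstimate makes is replacing point estimates with distributions and propagating uncertainty by Monte Carlo sampling. Here is a minimal sketch of that idea in Python rather than Guesstimate itself; the quantities and numbers are made up for illustration, not taken from any real Guesstimate model or grant:

```python
import numpy as np

# Guesstimate-style BOTEC: every input is a distribution rather than a point
# estimate, and uncertainty propagates through the arithmetic via sampling.
rng = np.random.default_rng(0)
N = 100_000

def lognormal_from_90ci(low, high, size, rng):
    """Sample a lognormal whose ~90% interval is (low, high)."""
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / (2 * 1.645)  # 5th/95th pct in log space
    return rng.lognormal(mu, sigma, size)

# Hypothetical inputs, purely for illustration: number of people doing BOTECs,
# hours saved per person per year, and the value of one of those hours.
people = lognormal_from_90ci(200, 2_000, N, rng)
hours_saved = lognormal_from_90ci(1, 20, N, rng)
value_per_hour = lognormal_from_90ci(30, 300, N, rng)

annual_value = people * hours_saved * value_per_hour

print("median annual value:", round(np.median(annual_value)))
print("90% interval:", np.percentile(annual_value, [5, 95]).round())
```

The model structure here is easy to make legible; what the resulting numbers mean for the world is the part that’s hard to turn into a vision statement.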
I think it’s plausible grantmakers would accept that pitch (or that it was the pitch and they did accept it, maybe @ozziegooen can tell us?). Not every individual evaluator, but some, and as you say it’s good to have multiple people valuing different things. My complaint is that I think the existing applications don’t make it obvious that that’s an okay pitch to make. My goal is some combination of “get the forms changed to make it more obvious that this kind of pitch is okay” and “spread the knowledge that this can work even if the form seems like the form wants something else”.
In terms of me personally… I think the nudges for vision have been good for me and the push/demands for vision have been bad. Without the nudges I probably am too much of a dilettante, and thinking about scope at all is good and puts me more in contact with reality. But the big rewards (in terms of money and social status) pushed me to fake vision and I think that slowed me down. I think it’s plausible that “give Elizabeth money to exude rigor and talk to people” would have been a good[1] use of a marginal x-risk dollar in 2018.[2]
During the post-scarcity days of 2022 there was something of a pattern of people offering me open-ended money, but then asking for a few examples of projects I might do, and then asking for them to be more legible and the value to be immediately obvious, and to fill out forms with the vibe that I’m definitely going to do these specific things and, if I don’t, have committed a moral fraud… So it ended up in the worst of all possible worlds, where I was being asked for a strong commitment without time to think through what I wanted to commit to. I inevitably ended up turning these down, and was starting to do so earlier and earlier in the process when the money tap was shut off. I think if I hadn’t had the presence of mind to turn these down it would have been really bad, because not only would I have been committed to a multi-month plan I’d spent a few hours on, but I would have been committed to falsely viewing the time as free-form and following my epistemics.
Honestly I think the best thing for funding me and people like me[3] might be to embrace impact certificates/retroactive grantmaking. It avoids the problems that stem from premature project legibilization without leaving grantmakers funding a bunch of random bullshit. That’s probably a bigger deal than wording on a form.
I have gotten marginal invites to exclusive retreats on the theory that “look, she’s not aiming very high[4] but having her here will make everyone a little more honest and a little more grounded in reality”, and I think they were happy with that decision. TBC, this was a pitch someone else made on my behalf that I didn’t hear about until later.
[1] Where by good I mean “more impactful in expectation than the marginal project funded”.

[3] Relevant features of this category: doing lots of small projects that don’t make sense to lump together, being scrupulous about commitments to the point that it’s easy to create poor outcomes, and having enough runway that it doesn’t matter when I get paid and I can afford to gamble on projects.

[4] Although the part where I count as “not ambitious” is a huge selection effect.
My complaint is that I think the existing applications don’t make it obvious that that’s an okay pitch to make. My goal is some combination of “get the forms changed to make it more obvious that this kind of pitch is okay” and “spread the knowledge that this can work even if the form seems like the form wants something else”.
That seems like an easy win—and if the grantmaker is specifically not interested in pure model-based justifications, saying so would also be helpful so that honest model-based applicants don’t have to waste their time.
and to fill out forms with the vibe that I’m definitely going to do these specific things and, if I don’t, have committed a moral fraud
That seems like a foolish grantmaking strategy—in the startup world, most VCs seem to encourage startups to pivot, kill unpromising projects, and assume that the first product idea isn’t going to be the last one because it takes time to find a compelling benefit and product-market fit. To insist that the grantee stake their reputation not only on successful execution but also on sticking to the original project idea seems like a way to help projects fail while selecting for a mixture of immaturity and dishonesty. That doesn’t mean I think those awarded grants are immoral—my hope is that most applicants are moral people and that such a rigid grantmaking process is just making the selection process marginally worse than it otherwise might be.
Honestly I think the best thing for funding me and people like me[3] might be to embrace impact certificates/retroactive grantmaking. It avoids the problems that stem from premature project legibilization without leaving grantmakers funding a bunch of random bullshit. That’s probably a bigger deal than wording on a form.
Yeah, I think this is an interesting space. Certainly much more work to make this work than changing the wording on a form though!
Sounds like we’re pretty much in agreement at least in terms of general principles.