I think my statement of “nearly guaranteed to be false” was an exaggeration, or at least misleading about what you can expect after applying some basic filters and a reasonable definition of epistemics. I love QURI and Manifold, and those do fit best in the epistemics bucket, although they aren’t central examples for me, for reasons that are probably unfair to the epistemics category.
Guesstimate might be a good example project. I use Guesstimate and love it. If I put myself in the shoes of its creator writing a grant application 6 or 7 years ago, I find it really easy to write a model-based application for funding and difficult to write a vision-based statement. It’s relatively easy to spell out a model of what makes BOTECs hard and some ideas for making them easier. It’s hard to say what better BOTECs will bring to the world. I think that the ~2016 grantmaker should have accepted “look, lots of people you care about do BOTECs and I can clearly make BOTECs better” without a more detailed vision of impact.
I think it’s plausible grantmakers would accept that pitch (or that it was the pitch and they did accept it; maybe @ozziegooen can tell us?). Not every individual evaluator, but some, and as you say it’s good to have multiple people valuing different things. My complaint is that I think the existing applications don’t make it obvious that that’s an okay pitch to make. My goal is some combination of “get the forms changed to make it more obvious that this kind of pitch is okay” and “spread the knowledge that this can work even if it seems like the form wants something else”.
In terms of me personally… I think the nudges for vision have been good for me and the pushes/demands for vision have been bad. Without the nudges I’d probably be too much of a dilettante, and thinking about scope at all is good and puts me more in contact with reality. But the big rewards (in terms of money and social status) pushed me to fake vision, and I think that slowed me down. I think it’s plausible that “give Elizabeth money to exude rigor and talk to people” would have been a good[1] use of a marginal x-risk dollar in 2018.[2]
During the post-scarcity days of 2022 there was something of a pattern: people would offer me open-ended money, but then ask for a few examples of projects I might do, then ask for those to be more legible and the value to be immediately obvious, and then ask me to fill out forms with the vibe that I was definitely going to do these specific things, and that if I didn’t I’d have committed moral fraud… So I ended up in the worst of all possible worlds, being asked for a strong commitment without time to think through what I wanted to commit to. I inevitably ended up turning these down, and was starting to do so earlier and earlier in the process when the money tap was shut off. I think if I hadn’t had the presence of mind to turn these down it would have been really bad, because not only would I have been committed to a multi-month plan I’d spent a few hours on, I would have been committed to falsely viewing the time as free-form and following my epistemics.
Honestly I think the best thing for funding me and people like me[3] might be to embrace impact certificates/retroactive grantmaking. It avoids the problems that stem from premature project legibilization without leaving grantmakers funding a bunch of random bullshit. That’s probably a bigger deal than wording on a form.
I have gotten marginal invites to exclusive retreats on the theory that “look, she’s not aiming very high[4] but having her here will make everyone a little more honest and a little more grounded in reality”, and I think they were happy with that decision. TBC, this was a pitch someone else made on my behalf that I didn’t hear about until later.
[3] Relevant features of this category: doing lots of small projects that don’t make sense to lump together, being scrupulous about commitments to the point that it’s easy to create poor outcomes, and having enough runway that it doesn’t matter when I get paid and I can afford to gamble on projects.
My complaint is that I think the existing applications don’t make it obvious that that’s an okay pitch to make. My goal is some combination of “get the forms changed to make it more obvious that this kind of pitch is okay” and “spread the knowledge that this can work even if it seems like the form wants something else”.
That seems like an easy win—and if the grantmaker is specifically not interested in pure model-based justifications, saying so would also be helpful so that honest model-based applicants don’t have to waste their time.
fill out forms with the vibe that I was definitely going to do these specific things, and that if I didn’t I’d have committed moral fraud
That seems like a foolish grantmaking strategy—in the startup world, most VCs seem to encourage startups to pivot, kill unpromising projects, and assume that the first product idea isn’t going to be the last one because it takes time to find a compelling benefit and product-market fit. To insist that the grantee stake their reputation not only on successful execution but also on sticking to the original project idea seems like a way to help projects fail while selecting for a mixture of immaturity and dishonesty. That doesn’t mean I think those awarded grants are immoral—my hope is that most applicants are moral people and that such a rigid grantmaking process is just making the selection process marginally worse than it otherwise might be.
Honestly I think the best thing for funding me and people like me[3] might be to embrace impact certificates/retroactive grantmaking. It avoids the problems that stem from premature project legibilization without leaving grantmakers funding a bunch of random bullshit. That’s probably a bigger deal than wording on a form.
Yeah, I think this is an interesting space. Certainly much more effort to make it work than changing the wording on a form, though!
Sounds like we’re pretty much in agreement at least in terms of general principles.
I agree with your general principles here.
[1] where by good I mean “more impactful in expectation than the marginal project funded”.

[4] although the part where I count as “not ambitious” is a huge selection effect.