Hey there~ I’m Austin, currently building https://manifund.org. Always happy to meet LessWrong people; reach out at akrolsmir@gmail.com!
My guess btw is that some donors like Michael have money parked in a DAF, and thus require a c3 sponsor like Manifund to facilitate that donation—until your own c3 status arrives, ofc.
(If that continues to get held up, but you receive an important c3 donation commitment in the meantime, let us know and we might be able to help—I think it’s possible to recharacterize same-year donations after c3 status arrives, which could unblock the c4 donation cap?)
From the Manifund side: we hadn’t spoken with CAIP previously but we’re generally happy to facilitate grants to them, either for their specific project or as general support.
A complicating factor is that, like many 501c3s, we have a limited budget we’re able to send towards c4s; eg I’m not sure if we could support their maximum ask of $400k on Manifund. I do feel happy to commit at least $50k of our “c4 budget” (which is their min ask) if they do raise that much through Manifund; beyond that, we should chat!
Thanks to Elizabeth for hosting me! I really enjoyed this conversation; “winning” is a concept that seems important and undervalued among rationalists, and I’m glad to have had the time to throw ideas around here.
I do feel like this podcast focused a bit more on some of the weirder or more controversial choices I made, which is totally fine; but if I were properly stating the case for “what is important about winning” from scratch, I’d instead pull examples like how YCombinator won, or how EA has been winning relative to rationality in recruiting smart young folks. AppliedDivinityStudies’s “where are all the successful rationalists” is also great.
Very happy to answer questions ofc!
Thanks for the feedback! I think the nature of a hackathon is that everyone is trying to get something that works at all, and “works well” is just a pipe dream haha. IIRC, there was some interest in incorporating this feature directly into Elicit, which would be pretty exciting.
Anyways, I’ll try to pass your feedback along to Panda and Charlie, but you might also enjoy seeing their source code here and submitting a GitHub issue or pull request: https://github.com/CG80499/paper-retraction-detection
Oh cool! Nice demo and happy to see it’s shipped and live, though I’d say the results were a bit disappointing on my very first prompt:
(if that’s not the kind of question you’re looking for, then I might suggest putting in some default example prompts to help the user understand what questions this is good for surfacing!)
Thanks! Appreciate the feedback in case we do a future hackathon or similar event~
Thanks, appreciate the thanks!
Strong upvoted—I don’t have much to add, but I really appreciated the concrete examples from what appears to be lived experience.
This company now exists! Brighter is currently doing a presale for a floor lamp emitting 50k lumens, adjustable between 1800K and 6500K: https://www.indiegogo.com/projects/brighter-the-world-s-brightest-floor-lamp#/. I expect it’s more aesthetic and turnkey compared to DIY lumenator options, but probably somewhat more expensive (MSRP is $1499, with early-bird/package discounts down to $899).
Disclaimer: I’m an investor; I’ve seen early prototypes but have not purchased one myself yet.
I think credit allocation is extremely important to study and get right, because it tells you who to trust and who to grant resources to. For example, I think much of the wealth of modern society is downstream of sensible credit allocation between laborers, funders, and corporations in the form of equity and debt, allowing successful entrepreneurs and investors to have more funding to reinvest into good ideas. Another (non-monetary) example is authorship in scientific papers; there, correct credit allocation helps people in the field understand which researchers are worth paying attention to, whose studies ought to be funded, etc. As any mechanism designer can tell you, these systems are far from perfect, but I think they’re still much, much better than the default in the nonprofit world.
(I do agree that caringness is often a bigger bottleneck than funding, for many classes of important problems, such as trying to hire someone into a field)
Makes sense, thanks.
FWIW, I really appreciated that y’all posted this writeup about mentor selection—choosing folks for impactful, visible, prestigious positions is a whole can of worms, and I’m glad to have more public posts explaining your process & reasoning.
Curious, is the list of advisors public?
Thanks for writing this, David! Your sequence of notes on virtues is one of my favorites on this site; I often find myself coming back to them, to better understand what it might mean to eg Love. As someone who’s spent a lot of time now in EA, I appreciated that this piece was especially detailed, going down all kinds of great rabbitholes. I hope to leave more substantive thoughts at some future time, but for now: thank you again for your work on this.
Huh, seems pretty cool and big-if-true. Is there a specific reason you’re posting this now? Eg asking people for feedback on the plan? Seeking additional funders for your $25m Series A?