Hey there~ I’m Austin, currently building https://manifold.markets. Always happy to meet LessWrong people; reach out at akrolsmir@gmail.com, or find a time on https://calendly.com/austinchen/manifold !
Austin Chen
Mm I think it’s hard to get optimal credit allocation, but easy to get half-baked allocation, or just see that it’s directionally way too low? Like sure, maybe it’s unclear whether Hinge deserves 1% or 10% or ~100% of the credit but like, at a $100k valuation of a marriage, one should be excited to pay $1k to a dating app.
Like, I think matchmaking is very similarly shaped to the problem of recruiting employees, but there corporations are more locally rational about spending money than individuals, and can do things like pay $10k referral bonuses, or offer external recruiters 20% of the referred hire's first-year salary.
Basically: I don’t blame founders or companies for following their incentive gradients, I blame individuals/society for being unwilling to assign reasonable prices to important goods.
I think the bad-ness of dating apps is downstream of poor norms around impact attribution for matches made. Even though relationships and marriages are extremely valuable, individual people are not in the habit of paying that to anyone.
Like, $100k or a year’s salary seems like a very cheap value to assign to your life partner. If dating apps could rely on payments of that size when they succeed, then I think there could be enough funding for at least a good small business. But I’ve never heard of anyone actually paying anywhere near that. (Myself included, though I did pay a retroactive $1k to the person who organized the conference where I met my wife.)
I think keeper.ai tries to solve this with large bounties on dating/marriages; it’s one of the things I wish we had pushed for more on Manifold Love. It seems possible to build one for the niche of “the EA/rat community”: Manifold Love, the checkboxes thing, and dating docs all got pretty good adoption for not that much execution.
(Also: be the change! I think building out OKC is one of the easiest “hello world” software projects one could imagine, Claude could definitely make a passable version in a day. Then you’ll discover a bunch of hard stuff around getting users, but it sure could be a good exercise.)
Thanks for forwarding my thoughts!
I’m glad your team is equipped to do small, quick grants—from where I am on the outside, it’s easy to accidentally think of OpenPhil as a single funding monolith, so I’m always grateful for directional updates that help the community understand how to better orient to y’all.
I agree that 3 months seems reasonable when $500k+ is at stake! (I think, just skimming the application, I mentally rounded off “3 months or less” to “about 3 months”, as kind of a learned heuristic on how orgs relate to timelines they publish.)
As another data point, from the Survival and Flourishing Fund, turnaround (from our application to decision) was about 5 months this year, for an ultimately $90k grant (we were applying for up to $1.2m). I think this year they were unusually slow due to changing over their processes; in past years it’s been closer to 2-3 months.
Our own philosophy at Manifund does emphasize “moving money quickly”, almost to a sacred level. This comes from watching programs like Fast Grants and the Future Fund, and from our own lived experience as grantees. For grantees, knowing 1 month sooner that money is coming often means that one can start hiring and executing 1 month sooner; and the impact of executing even 1 day sooner can sometimes be immense (see: https://www.1daysooner.org/about/ )
@Matt Putz thanks for supporting Gavin’s work and letting us know; I’m very happy to hear that my post helped you find this!
I also encourage others to check out OP’s RFPs. I don’t know about Gavin, but I was peripherally aware of this RFP, and it wasn’t obvious to me that Gavin should have considered applying, for these reasons:
Gavin’s work seems aimed internally at existing EA folks, while this RFP’s media/comms examples (at a glance) seem to be aimed externally at public-facing outreach
I’m not sure what the typical grant size that the OP RFP is targeting, but my cached heuristic is that OP tends to fund projects looking for $100k+ and that smaller projects should look elsewhere (eg through EAIF or LTFF), due to grantmaker capacity constraints on OP’s side
Relatedly, the idea of filling out an OP RFP seems somewhat time-consuming and burdensome (eg somewhere between 3 hours and 2 days), so I think many grantees might not consider doing so unless asking for large amounts
Also, the RFP form seems to indicate a turnaround time of 3 months, which might have seemed too slow for a project like Gavin’s
I’m evidently wrong on all these points given that OP is going to fund Gavin’s project, which is great! So I’m listing these in the spirit of feedback. Some easy wins to encourage smaller projects to apply might be to update the RFP page to 1. list some example grants and grant sizes that were sourced through this, and 2. describe how much time you expect an applicant to take to fill out the form (something EA Funds does, which I appreciate, even if I invariably take much more time than they state).
Do you not know who the living Pope is, while still believing he’s the successor to Saint Peter and has authority delegated from Jesus to rule over the entire Church?
I understand that the current pope is Pope Francis, but I know much much more about the worldviews of folks like Joe Carlsmith or Holden Karnofsky, compared to the pope. I don’t feel this makes me not Catholic; I continue to go to church every Sunday, live my life (mostly) in accordance with Catholic teaching, etc. Similarly, I can’t name my senator or representative and barely know what Biden stands for, but I think I’m reasonably American.
All the people I know who worked on those trips (either as an organiser or as a volunteer) don’t think it helped their epistemics at all, compared to e.g. reading the literature on development economics.
I went on one of those trips as a middle schooler (to Mexico, not Africa). I don’t know that it helped my epistemics much, but I did get like, a visceral experience of what the life of someone in a third-world country would be like, that I wouldn’t have gotten otherwise and no amount of research literature reading would replicate.
I don’t literally think that every EA should book plane tickets to Africa, or break into a factory farm, or whatnot. (though: I would love to see some folks try this!) I do think there’s an overreliance on consuming research and data, and an underreliance on just doing things and having reality give you feedback.
Insofar as you’re thinking I said bad people, please don’t let yourself make that mistake, I said bad values.
I appreciate you drawing the distinction! The bit about “bad people” was more directed at Tsvi, or possibly the voters who agreevoted with Tsvi.
There’s a lot of massively impactful difference in culture and values
Mm, I think if the question is “what accounts for the differences between the EA and rationalist movements today, wrt number of adherents, reputation, amount of influence, achievements” I would assign credit in the ratio of ~1:3 to differences in (values held by individuals):systems. Where systems are roughly: how the organizations are set up, how funding and information flows through the ecosystem.
(As I write this, I realize that maybe even caring about adherents/reputation/influence/achievement in the first place is an impact-based, EA-frame, and the thing that Ben cares about is more like “what accounts for the differences in their philosophies or gestalt of what it feels like to be in the movement”; I feel like I’m lowkey failing an ITT here...)
Mm I basically agree that:
there are real value differences between EA folks and rationalists
good intentions do not substitute for good outcomes
However:
I don’t think differences in values explain much of the differences in results—sure, truthseeking vs impact can hypothetically lead one in different directions, but in practice I think most EAs and rationalists are extremely value aligned
I’m pushing back against Tsvi’s claims that “some people don’t care” or “EA recruiters would consciously choose 2 zombies over 1 agent”—I think ascribing bad intentions to individuals ends up pretty mindkilly
Basically, insofar as EA is screwed up, it’s mostly caused by bad systems, not bad people, as far as I can tell.
Mm I’m extremely skeptical that the inner experience of an EA college organizer or the CEA groups team is usefully modeled as “I want recruits at all costs”. I predict that if you talked to one and asked them about it, you’d find the same.
I do think that it’s easy to accidentally goodhart or be unreflective about the outcomes of pursuing a particular policy—but I’d encourage y’all to extend somewhat more charity to these folks, who I generally find to be very kind and well-intentioned.
Some notes from the transcript:
I believe there are ways to recruit college students responsibly. I don’t believe the way EA is doing it really has a chance to be responsible. I would say, the way EA is doing it can’t filter and inform the way healthy recruiting needs to. And they’re funneling people into something that naivete hurts you in. I think aggressive recruiting is bad for both the students and for EA itself.
Enjoyed this point—I would guess that the feedback loop from EA college recruiting is super long and is weakly aligned. Those in charge of setting recruiting strategy (eg CEA Groups team, and then university organizers) don’t see the downstream impacts of their choices, unlike in a startup where you work directly with your hires, and quickly see whether your choices were good or bad.
Might be worth examining how other recruiting-driven companies (like Google) or movements (...early Christianity?) maintain their values, or degrade over time.
Seattle EA watched a couple of the animal farming suffering documentaries. And everyone was of course horrified. But not everyone was ready to just jump on, let’s give this up entirely forever. So we started doing more research, and I posted about a farm a couple hours away that did live tours, and that seemed like a reasonable thing to learn, like, a limited but useful thing.
Definitely think that on the margin, more “directly verifying base reality with your own eyes” would be good in EA circles. Eg at one point, I was very critical of those mission trips to Africa where high schoolers spend a week digging a well; “obviously you should just send cash!” But now I’m much more sympathetic.
This also stings a bit for Manifund; like 80% of what we fund is AI safety but I don’t really have much ability to personally verify that the stuff we funded is any good.
The natural life cycle of movements and institutions is to get captured and be pretty undifferentiated from other movements in their larger cultural context. They just get normal because normal is there for a reason and normal is easiest. And if you want to do better than that, if you want to keep high epistemics, because normal does not prioritize epistemics, you need to be actively fighting for it, and bringing a high amount of skill to it. I can’t tell you if EA is degrading at like 5 percent a year or 25 percent a year, I can tell you that it is not self correcting enough to escape this trap.
I think not enforcing an “in or out” boundary is a big contributor to this degradation—like, majorly successful religions required all kinds of sacrifice and commitment from their members.
What I think is more likely than EA pivoting is a handful of people launch a lifeboat and recreate a high integrity version of EA.
It feels like AI safety is the best current candidate for this, though that is also much less cohesive and not a direct successor for a bunch of ways. I too have been lately wondering what “Post EA” looks like.
I hear that as every true wizard must test the integrity of their teacher or of their school, Hogwarts, whatever the thing is. The reason you don’t get to graduate until you actually test the integrity of the school is because if you’re just taking it on its own word, then you could become a villain.
You have to respect your own moral compass to be able to be trusted.
Really liked this analogy!
Which EA leaders do you most resonate with?
I would suggest that if you don’t care about the movement leaders who have any steering power, you’re not in that movement.
I like this as a useful question to keep in mind, though I don’t think it’s totally explanatory. I think I’m reasonably Catholic, even though I don’t know anything about the living Catholic leaders.
Timothy: Give me a vision of a different world where ea would be better served by the by having leadership that actually was willing to own their power more
Elizabeth: which you’ll notice even holden won’t do
Timothy: yeah, he literally doesn’t want the power.
Elizabeth: Yeah, none of them do. CEA doesn’t want it.
I’ve been thinking that EA should try to elect a president, someone who is empowered but also accountable to the general people in the movement, a Schelling person to be the face of EA. (plus of course, we’d get to debate stuff like optimal voting systems and enfranchisement—my kind of catnip)
Hm, I expect the advantage of far UV is that many places where people want to spend time indoors are not already well-ventilated, or that it’d be much more expensive to modify existing HVAC setups vs just sticking a lamp on a wall.
I’m not at all familiar with the literature on safety; my understanding (based on this) is that no, we’re not sure and more studies would be great, but there’s a vicious cycle/chicken-and-egg problem where the lamps are expensive, so studies are expensive, so there aren’t enough studies, so nobody buys lamps, so lamp companies don’t stay in business, so lamps are expensive.
Another similar company I want someone to start is one that produces inexpensive, self-installable far UV lamps. My understanding is that far UV is safe to shine directly on humans (as opposed to standard UV), meaning that you don’t need high ceilings or special technicians to install the lamp. However, it’s a much newer technology with not very much adoption or testing, I think because of a combination of principal/agent problems and price; see this post on blockers to Far UV adoption.
Beacon does produce these $800 lamps, which are consumer friendly-ish. I bought one for the Manifold office, but due to a variety of trivial inconveniences (figuring out where to mount it; the mobile app not syncing with my phone) it’s still not active. I think a competent operator in this space could make a device that’s somewhat cheaper & easier to use, and hit a tipping point for widespread/viral adoption.
(If you or someone you know is interested in doing this and is looking for funding, reach out to me at austin@manifund.org!)
(maybe the part that seems unrealistic is the difficulty of eliciting values for the power set of possible coalitions, as generating a value for any one coalition feels like an expensive process, and the size of a power set grows exponentially with the number of players)
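To make that exponential blowup concrete, here's a minimal Python sketch of exact Shapley value computation; the two-player toy game and its `value` function are made up for illustration. Note that pricing every coalition means 2^n calls to `value`:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values by enumerating every coalition.

    players: list of player labels
    value: function from a frozenset of players to that coalition's worth
    """
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for r in range(n):
            for coalition in combinations(others, r):
                s = frozenset(coalition)
                # Standard Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                # Marginal contribution of p to coalition S
                phi[p] += weight * (value(s | {p}) - value(s))
    return phi

# Toy game: each player is worth 1 alone, 3 together.
v = lambda s: {0: 0, 1: 1, 2: 3}[len(s)]
print(shapley_values(["a", "b"], v))  # {'a': 1.5, 'b': 1.5}
```

Even for just 20 players, `value` would need to price about a million coalitions, which is the elicitation problem the parenthetical gestures at.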
This is extremely well produced, I think it’s the best introduction to Shapley values I’ve ever seen. Kudos for the simple explanation and approachable designs!
(Not an indictment of this site, but with this as with other explainers, I still struggle to see how to apply Shapley values to any real world problems haha—unlike something like quadratic funding, which also sports fancy mechanism math but is much more obvious how to use)
Thanks for the correction! My own interaction with Lighthaven is event space foremost, then housing, then coworking; for the purposes of EA Community Choice we’re not super fussed about drawing clean categories, and we’d be happy to support a space like Lighthaven for any (or all) of those categories.
For now I’ve just added your existing project into EA Community Choice; if you’d prefer to create a subproject with a different ask that’s fine too, I can remove the old one. I think adding the existing one is a bit less work for everyone involved—especially since your initial proposal has a lot more room for funding. (We’ll figure out how to do the quadratic match correctly on our side.)
I recommend adding existing applications to “EA Community Choice”. I’ve done so for you now, so the project will be visible to people browsing projects in this round, and future donations made will count for the quadratic funding match. Thanks for participating!
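For anyone curious what "the quadratic match" refers to: the standard quadratic funding formula (Buterin, Hitzig, Weyl) subsidizes a project up to the square of the sum of the square roots of individual donations. A minimal sketch, with made-up donation amounts (not Manifund's actual implementation, which also has to handle matching-pool caps):

```python
from math import sqrt

def qf_match(contributions):
    """Quadratic funding subsidy for one project.

    contributions: list of individual donation amounts (dollars).
    Returns the match: (sum of sqrt(c))^2 minus the raw total raised.
    """
    raw = sum(contributions)
    ideal = sum(sqrt(c) for c in contributions) ** 2
    return ideal - raw

# Many small donors attract a far larger match than one big donor:
print(qf_match([1] * 100))  # 100 donors of $1 -> $9900 match
print(qf_match([100]))      # one $100 donor  -> $0 match
```

This is why donations made inside the round matter: the match rewards breadth of community support, not just dollar totals.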
One person got some extra anxiety because their paragraph was full of TODOs (because it was positive and I hadn’t worked as hard fleshing out the positive mentions ahead of time).
I think you’re talking about me? I may have miscommunicated; I was ~zero anxious, instead trying to signal that I’d looked over the doc as requested, and poking some fun at the TODOs. FWIW I appreciated your process for running criticism ahead of time (and especially enjoyed the back-and-forth comments on the doc; I’m noticing that those kinds of conversations on a private GDoc seem somehow more vibrant/nicer than the ones on LW or on a blog’s comments.)
most catastrophes through both recent and long-ago history have been caused by governments
Interesting lens! Though I’m not sure if this is fair—the largest things that are done tend to get done through governments, whether those things are good or bad. If you blame catastrophes like Mao’s famine or Hitler’s genocide on governments, you should also credit things like slavery abolition and vaccination and general decline of violence in civilized society to governments too.
I’d be interested to hear how Austin has updated regarding Sam’s trustworthiness over the past few days.
Hm I feel like a bunch of people have updated majorly negatively, but I haven’t—only small amounts. I think he eg gets credit for the ScarJo thing. I am mostly withholding judgement, though; now that the NDAs have been dropped, curious to see what comes to light (if nothing does, that would be more positive credit towards Sam, and some validation to my point that NDAs were not really concealing much).
I mean, it’s obviously very dependent on your personal finance situation, but I’m using $100k as an order-of-magnitude proxy for “about a year’s salary”. I think it’s very coherent to give up a year of marginal salary in exchange for finding the love of your life, rather than like $10k or ~1mo salary.
Of course, the world is full of mispricings, and currently you can save a life for something like $5k. I think these are both good trades to make, and most people should have a portfolio that consists of both “life partners” and “impact from lives saved” and crucially not put all their investment into just one or the other.