I think that only addresses a side concern, not the main problem. It filters out some malicious actors, but certainly not all: you still get those who seek the grants IN ADDITION to other sources of revenue.
Meaning that, now that they’re living in the commune, they’ll be more likely to seek more funding for other stuff? Maybe. But you can just keep the barriers as high as they currently are for the other stuff, which would just mean slightly(?) more applicants to filter out at the initial stages.
More importantly, even if you can filter out the bad actors, you likely spend a lot on incompetent actors, who don’t produce enough value/progress to justify the grants, even if they mean well.
My model is that the type of person who would be willing to move to a commune and live amongst a bunch of alignment researchers is pretty likely to be highly motivated and slightly less likely to be competent. The combination of those two things makes me think they’d be pretty productive. But even if they weren’t, the bar of e.g. $20k/year/person is pretty low.
I don’t think those previous discussions are still happening very much—EA doesn’t have spare cash, AFAIK.
Thanks for adding some clarity here. I get that impression too, but not confidently. Do you know if it’s because a majority of the spare cash was from FTX and that went away when FTX collapsed?
EA (18 months ago) had a lot of free/cheap capital and no clear models for how to use it in ways that actually improved the future.
That’s always seemed really weird to me. I see lots of things that can be done. Finding the optimal action or even a 90th+ percentile action might be difficult but finding an action that meets some sort of minimal threshold seems like it’s not a very high bar. And letting the former get in the way of the latter seems like it’s making the perfect the enemy of the good.
Ah, I see—I didn’t fully understand that you meant “require (and observe) the lifestyle” not just “grants big enough to do so, and no bigger”. That makes it quite a bit safer from fraud and double-dipping, and a LOT less likely (IMO) to get anyone particularly effective that’s not already interested.