(Actual reality advisement; do not read if you’d rather not live in actual reality: Things really worth doing, including in AGI alignment, are hard to come by; MIRI is bottlenecked more on ideas worth pursuing, and people who can pursue them, than on funding, at this point. I think that under these conditions it does make sense for EA to spend money on anything else at all. Furthermore, EA does in fact seem bound and determined to spend money on anything else. I therefore think it’s fine for this post to pretend that anything else matters; much of EA with lots of available funding does assume that premise, so why not derive valid conclusions from that hypothetical and go ask where to pick up lots of QALYs cheap.)
MIRI is bottlenecked more on ideas worth pursuing and people who can pursue them, than on funding
Ideas come from (new) people, and you mentioned seed-planting, which should contribute to having such people in 4–6 years. That still seems like a worthy thing to do for AGI if anything is worth doing for any cause at all (given your short timelines). If you agree, what’s the bottleneck for that effort?
If ideas are the bottleneck, perhaps there should be monthly pitching sessions where people get to pitch their AI safety ideas to MIRI? Obviously, you’d need to find someone who’d be a good filter, to keep the number of pitches reasonable whilst not cutting out any “crazy” ideas which are actually worth considering.
Yes, and if that’s bottlenecked by too few people being good filters, why not teach that?
I would guess that a number of smart people would be able to pick up the ability to spot doomed “perpetual motion alignment strategies” if you paid them a good amount to hang around you for a while.