If ideas are the bottleneck, perhaps there should be monthly pitching sessions where people get to pitch their AI safety ideas to MIRI? Obviously, you'd need to find someone who'd be a good filter—to keep the number of pitches manageable—whilst not cutting out any "crazy" ideas which are actually worth considering.
Yes, and if that’s bottlenecked by too few people being good filters, why not teach that?
I would guess that a number of smart people would be able to pick up the ability to spot doomed “perpetual motion alignment strategies” if you paid them a good amount to hang around you for a while.