One important question is how long to delay attempting to build FAI, which requires balancing the risks that you’ll screw it up against the risks that someone else will screw it up first. For people in the SIAI who seriously think they have a shot at making FOOM-capable AI first, the primary thing I’d ask them to do is pay more attention to the considerations above when making that calculation.
But given that I think it’s extremely likely that some wealthier organization will end up steering any first-mover scenario (if one occurs, which it may not), I think it’s also worth putting some work into figuring out what the SIAI could do to get those people (whoever they end up being) to behave altruistically in that scenario.