I certainly didn’t say that would be risk-free, but it interacts with other drag factors on very high estimates of risk.
If you’re making the point as part of an argument against “either Eliezer’s FAI plan succeeds, or the world dies” then ok, that makes sense. ETA: But it seems like it would be very easy to take “if humans can do it, then not very superintelligent AIs can” out of context, so I’d suggest some other way of making this point.
When strategy A to a large extent can capture the impacts of strategy.
Sorry, I’m still not getting it. What does “impacts of strategy” mean here?