Why wouldn't something like "optimize for your goals whilst ensuring that the risk of harming a human is below x percent" work?
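Concretely, a rough sketch of the kind of constrained objective I have in mind (the symbols $\pi$, $U$, and the "harm" event are just placeholders I'm introducing, not anything standard):

$$\max_{\pi} \; \mathbb{E}\!\left[ U(\pi) \right] \quad \text{subject to} \quad \Pr\!\left(\text{harm to a human} \mid \pi\right) \le \frac{x}{100}$$

i.e. the agent maximizes its utility only over policies whose estimated probability of harming a human stays under the threshold.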