If we’re trying to discuss what makes airplanes fly or crash, should we assume that engineers have done their best and made every no-brainer change?
If you are in the business of pointing out potential problems they are not aware of, then yes, because they can be assumed to be aware of no-brainer issues.
MIRI seeks to point out dangers in AI that aren’t the result of gross incompetence or deliberate attempts to weaponise AI: it’s banal to point out that those could lead to danger.