> This sounds in the direction of modeling AGI researchers as selfish mutants. Other motivations (e.g. poor Friendliness theories) and accidents (by researchers who don’t understand the danger, or underestimate what they’ve built) are also likely.
>
> This matters, since if AGI researchers aren’t selfish mutants, you can encourage them to see the need for safety, and this is one goal of SIAI’s outreach.
AGI researchers might not be selfish mutants, but they could still be embedded in corporate structures that make them act that way. If they work at a small startup where researchers are in charge, outreach could be useful. But what if they’re at a big corporation, under pressure to ignore outside influences? (What kind of organisation is most likely to come up with a super-AI, if that’s how it happens?)
If FAI does become a serious concern, nothing would stop corporations from faking compliance while actually shipping flawed systems, just as many software companies put more effort into reassuring customers that their products are secure than into actually fixing security flaws.
Realistically, how often do researchers at a particular company come to realise that what they’re doing is dangerous and blow the whistle? Whistleblowers are lionised in popular culture precisely because they’re so rare. Told to do something evil or dangerous, most people will knuckle under and rationalise what they’re doing or deny responsibility.
I once worked for a company which made dangerously poor medical software—an epidemiological study showed that deploying their software raised child mortality—and the attitude of the coders was to scoff at the idea that what they were doing could be bad. They even joked about “killing babies”.
Maybe it would be a good idea to monitor which companies are likely to come up with an AGI. If you need a supercomputer to run one, then presumably it’s either going to be a big company or an academic project?
Simpler to have infrastructure to monitor all companies: corporate reputation systems.