…assassinate any researchers who look like they’re on track to deploy an unFriendly AI, then destroy their labs and backups.
You need to think much more carefully about (a) the likely consequences of doing this, and (b) the likely consequences of appearing to be a person or organization that would do this.
Oh, I’m not saying that SIAI should do it openly. Just that, according to their belief system, they should sponsor false-flag cells who would do it for them (perhaps without knowing the master they truly serve). The absence of such false-flag cells indicates that SIAI aren’t doing it, although their presence wouldn’t prove that they were. That’s the whole idea of “false-flag”.
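To spell out that evidential asymmetry with a toy Bayes calculation (all the numbers here are invented for illustration, not claims about actual probabilities): the absence of cells is strong evidence against sponsorship, while their presence would only weakly favour it, because independent fanatics could produce cells on their own.

```python
# Toy Bayes calculation of the "false-flag" asymmetry.
# All numbers are invented for illustration only.
prior = 0.01                 # assumed prior that SIAI sponsors such cells
p_cells_if_sponsor = 0.9     # sponsored cells would very likely be visible
p_cells_if_not = 0.1         # independent fanatics might form cells anyway

# P(sponsor | cells observed), via Bayes' rule
p_cells = prior * p_cells_if_sponsor + (1 - prior) * p_cells_if_not
post_given_cells = prior * p_cells_if_sponsor / p_cells

# P(sponsor | no cells observed)
p_no_cells = prior * (1 - p_cells_if_sponsor) + (1 - prior) * (1 - p_cells_if_not)
post_given_no_cells = prior * (1 - p_cells_if_sponsor) / p_no_cells

print(f"P(sponsor | cells)    = {post_given_cells:.3f}")     # ~0.083: far from proof
print(f"P(sponsor | no cells) = {post_given_no_cells:.4f}")  # ~0.0011: strong evidence against
```

On these numbers, seeing cells lifts a 1% prior only to about 8%, while seeing none drops it to about 0.1%, which is exactly the “absence indicates, presence doesn’t prove” shape.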
If you really believed that unFriendly AI was going to dissolve the whole of humanity into smileys/jelly/paperclips, then whacking a few reckless computer geeks would be a small price to pay, ethical injunctions or no ethical injunctions. You know, “shut up and multiply”, trillion specks, and all that.
It seems to you that, according to their belief system, they should sponsor false-flag cells who would do it for them (perhaps without knowing the master they truly serve).
Given how obvious the motivation is, and how often people independently conclude that SIAI should kill AI researchers, think about what the consequences would be, for everyone actively worried about UFAI, if anyone actually did this.
“If you really believed that unFriendly AI was going to dissolve the whole of humanity into smileys/jelly/paperclips, then whacking a few reckless computer geeks would be a small price to pay, ethical injunctions or no ethical injunctions.”
Ethical injunctions are not separate values to be traded off against saving the world; they’re policies you follow because it appears, all things considered, that following them has the highest expected utility, even if in a single case you fallibly perceive that violating them would be good.
(If you didn’t read the posts linked from that wiki page, you should.)
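To make that concrete, here is a toy expected-utility calculation (the numbers are invented purely for illustration): if your perception that “violating the injunction is good here” is wrong even a tenth of the time, and a wrong violation is catastrophic, then the violate-when-it-seems-good policy loses badly to always following the injunction.

```python
# Toy model of why a fallible agent keeps the injunction.
# All numbers are invented for illustration only.
p_wrong = 0.1           # assumed chance your "violating is good here" judgment is mistaken
gain_if_right = 1.0     # assumed utility gained when the violation really was good
loss_if_wrong = 1000.0  # assumed catastrophic loss when it wasn't

# Policy A: always follow the injunction (baseline utility 0).
eu_follow = 0.0

# Policy B: violate whenever it seems good.
eu_violate = (1 - p_wrong) * gain_if_right - p_wrong * loss_if_wrong

print(f"E[follow]  = {eu_follow}")
print(f"E[violate] = {eu_violate:.1f}")  # -99.1: worse, despite usually being "right"
```

The injunction wins not because it is traded off against anything, but because the policy of following it has the higher expected utility once your own fallibility is priced in.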
You’re right that the motivation would be obvious today (to a certain tiny subset of geeky people). But what if there had been a decade of rising anti-AI feeling amongst the general population before the assassinations? Marches and direct actions, carried out with animal-rights-style fervour? I’m sure that could all be stirred up with the right fanfiction (“Harry Potter and the Monster in the Chinese Room”).
I understand what ethical injunctions are—but would SIAI be bound by them given their apparent “torture someone to avoid trillions of people having to blink” hyper-utilitarianism?
If you think ethical injunctions conflict with hyper-utilitarianism, you don’t understand what they are. Did you read the posts?