It seems that, according to their belief system, they should sponsor false-flag cells who would do it for them (perhaps without knowing the master they truly serve).
Given how obvious the motivation is, and how frequently people independently conclude that SIAI should kill AI researchers, consider the consequences that anyone actually doing this would have for everyone actively worried about UFAI.
If you really believed that unFriendly AI was going to dissolve the whole of humanity into smileys/jelly/paperclips, then whacking a few reckless computer geeks would be a small price to pay, ethical injunctions or no ethical injunctions.
Ethical injunctions are not separate values to be traded off against saving the world; they’re policies you follow because it appears, all things considered, that following them has highest expected utility, even if in a single case you fallibly perceive that violating them would be good.
(If you didn’t read the posts linked from that wiki page, you should.)
You’re right that the motivation would be obvious today (to a certain tiny subset of geeky people). But what if there had been a decade of rising anti-AI feeling amongst the general population before the assassinations? Marches, direct actions, carried out with animal-rights style fervour? I’m sure that could all be stirred up with the right fanfiction (“Harry Potter And The Monster In The Chinese Room”).
I understand what ethical injunctions are—but would SIAI be bound by them given their apparent “torture someone to avoid trillions of people having to blink” hyper-utilitarianism?
If you think ethical injunctions conflict with hyper-utilitarianism, you don’t understand what they are. Did you read the posts?