All the discussion so far indicates that Eliezer’s AI will definitely kill me, and some others posting here, as soon as he turns it on.
It seems likely, if it follows Eliezer’s reasoning, that it will kill anyone who is overly intelligent. Say, the top 50,000,000 or so.
(Perhaps a special exception will be made for Eliezer.)
Hey, Eliezer, I’m working in bioinformatics now, okay? Spare me!
Eliezer: If you create a friendly AI, do you think it will shortly thereafter kill you? If not, why not?