It isn’t clear whether AGI would be as powerful as SI’s views imply.
Yes. There’s something weird going on there. EY seems to want to constrain AI in various ways—to be Friendly, to be Bayesian, and so on—but how, then, is the “G” justified? Human intelligence is general enough to consider and formulate multiple theories of probability. Why should we regard something as being at least as smart as us, and at least as general as us, when we can think things it can’t think?
“Friendliness” is (as I understand it) a constraint on the purposes and desired consequences of the AI’s actions, not on what it is allowed to think. It would still be able to think of non-Friendly actions, if only for purposes such as averting them when necessary.
As for Bayesianism, my guess is that even a Seed AI has to start somehow. There’s no necessary constraint on it remaining Bayesian if it manages to figure out some even better theory of probability (or if it judges that a theory humans have developed is better). If an AI models itself as performing better, by its own criteria, under some different theory, then ideally it will self-modify to use that theory...
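To make that last step concrete, here is a toy sketch of my own (nothing from EY or SI, and all names and the "criterion" are invented for illustration): an agent whose criterion is predictive log-score and whose "theories of probability" are just swappable update rules. It adopts a candidate rule only if it models itself scoring better under that rule.

```python
# Illustrative sketch only. Assumes a toy agent whose criterion is average
# log-probability assigned to outcomes, and whose "theories of probability"
# are interchangeable update functions.
import math
import random

def bayesian_update(prior, l_true, l_false):
    """Standard Bayesian posterior for a binary hypothesis."""
    num = prior * l_true
    return num / (num + (1 - prior) * l_false)

def alternative_update(prior, l_true, l_false):
    """Hypothetical rival rule (a crude linear blend), standing in for
    'some different theory' the agent might consider."""
    return 0.5 * prior + 0.5 * (l_true / (l_true + l_false))

class SelfModifyingAgent:
    def __init__(self, update_rule):
        self.update_rule = update_rule  # the agent's current "theory of probability"

    def score(self, rule, data):
        """Agent's own criterion: average log-probability of observed outcomes."""
        total = 0.0
        for prior, l_true, l_false, outcome in data:
            p = rule(prior, l_true, l_false)
            total += math.log(p if outcome else 1 - p)
        return total / len(data)

    def consider_self_modification(self, candidate_rule, data):
        """Adopt the candidate rule only if the agent models itself doing
        better, by its own criterion, when using that rule."""
        if self.score(candidate_rule, data) > self.score(self.update_rule, data):
            self.update_rule = candidate_rule  # self-modify

# Toy usage: synthetic data in which Bayesian updating is in fact correct,
# so the agent should swap its ad-hoc rule for the Bayesian one.
random.seed(0)
data = []
for _ in range(1000):
    prior, l_true, l_false = random.random(), random.random(), random.random()
    p = bayesian_update(prior, l_true, l_false)
    data.append((prior, l_true, l_false, random.random() < p))

agent = SelfModifyingAgent(alternative_update)
agent.consider_self_modification(bayesian_update, data)
print(agent.update_rule.__name__)  # expected: bayesian_update
```

The point of the sketch is just that "being Bayesian" is a starting policy the agent can evaluate and replace, not a hard constraint on what it can think.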