This is an introductory textbook for students who haven’t been exposed to these ideas before. The paragraph makes a lot more sense under that assumption than under the assumption that they are trying to be technically correct down to every term they use.
Perhaps. But considering that we are talking about chapter 26 of a 27-chapter textbook, that the authors spent five pages explaining the concept of “mechanism design” back in section 17.6, and that every American student learns about the political concept of “checks and balances” in high school, I’m going to stick with the theory that they either misunderstood Yudkowsky or decided to disagree with him without calling attention to the fact.
ETA: Incidentally, if the authors are inserting their own opinion and disagreeing with Yudkowsky, I tend to agree with them. In my (not yet informed) opinion, Eliezer dismisses the possibility of a multi-agent solution too quickly.
A multi-machine solution? Is that so very different from one machine with a different internal architecture?
I favor a multi-agent solution which includes both human and machine agents. But, yes, a multi-machine solution may well differ from a unified artificial rational agent. For one thing, the composite will not itself be a rational agent (it may split its charitable contributions between two different charities, for example :)
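To make that concrete, here is a toy sketch in Python (entirely my own construction: the charities, weights, and linear-utility assumption are illustrative, nothing more). If utility is linear in the amount each charity receives, a single rational agent never splits its donations, yet a composite of two such agents does:

```python
# Toy illustration (my own construction, not from this thread): a composite
# of two expected-utility maximizers need not act like one.
#
# Assume utility is linear in the amount each charity receives, so a single
# rational agent always gives its whole budget to whichever charity it
# weights more heavily -- it never splits.

def best_donation(weights, budget):
    """A linear-utility maximizer puts the entire budget on its top charity."""
    top = max(weights, key=weights.get)
    return {top: budget}

# Two member agents with opposite preferences over charities A and B.
alice = best_donation({"A": 0.9, "B": 0.1}, budget=100)  # all 100 to A
bob = best_donation({"A": 0.2, "B": 0.8}, budget=100)    # all 100 to B

# The coalition's aggregate behavior: donations to both charities.
composite = {c: alice.get(c, 0) + bob.get(c, 0) for c in ("A", "B")}
print(composite)  # {'A': 100, 'B': 100} -- a 50/50 split

# No single linear utility function rationalizes that split: whatever
# weights you pick, best_donation() concentrates everything on one charity.
```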
ETA: More to the point, a singleton must self-modify to ‘grow’ in power and intelligence, and will strive to preserve its utility function (values) in the course of doing so. A coalition, on the other hand, grows in power by creating or enlisting new members. So, for example, rogue AIs can be incorporated into a coalition, whereas a singleton will have to defeat and destroy them. Furthermore, the political balance within a coalition may shift over time, as agents who are willing to delay gratification gain in power and agents who demand instant gratification lose relative power. And as the political balance shifts, so does the effective composite utility function.
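That drift is easy to see in a toy simulation (again my own construction; the growth rate, patience parameters, and resource-weighted aggregation are all assumptions made for illustration, not a claim about real coalitions):

```python
# Toy dynamics (assumptions mine): in a coalition, agents that delay
# gratification compound their resources, so their weight in the effective
# composite utility function grows over time.

GROWTH = 1.2  # assumed return on reinvested resources per period

# Each member starts with equal resources; 'patience' is the fraction
# reinvested each period (the rest is consumed immediately).
members = {"patient": {"resources": 100.0, "patience": 0.9},
           "impatient": {"resources": 100.0, "patience": 0.1}}

for period in range(20):
    for m in members.values():
        saved = m["resources"] * m["patience"]
        m["resources"] = saved * GROWTH  # the consumed share is gone

# Political weight proportional to resources held.
total = sum(m["resources"] for m in members.values())
weights = {name: m["resources"] / total for name, m in members.items()}
print(weights)  # the patient agent's share approaches 100%

# If the composite utility is a resource-weighted sum of member utilities,
# these shifting weights mean the coalition's effective values drift --
# unlike a singleton, which preserves its utility function as it grows.
```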
It sounds as though you are thinking about the early days.
ISTM that a single creature could grow in the manner you describe a coalition growing: by assimilation and compromise. It might not naturally favour behaving in that way, but it is possible to make an agent with whatever values you like.
More to the point, if a single creature forms from a global government or the internet, it will probably start off in a pretty inclusive state. Only the terrorists will be excluded. There is no monopolies and mergers commission at that level, just a hangover from past, fragmented times.