Sadly, Vladimir, this failure to understand stakeholder theory is endemic in AI discussions. Friendly AI cannot possibly be defined as "if it doesn't create value for you, it's not FAI", because value is arbitrary. Some people want to die and others want to live, to take the starkest example. Everyone being killed is thus value for some and not for others, and vice versa.
What we end up with is having to define "friendly" as "creating value for the largest possible number of human stakeholders, even if some of them lose".
For example, someone who derives value from ordering people around or having everyone else be their personal slave, such as Caligula or the ex-dictator Gaddafi, doesn't (or rather, didn't...) see value in self-rule for the people, and thus fought hard to maintain the status quo, murdering many people in the process.
In any scenario in which you consider the wants of those who seek most of the world's resources, or domination over others, you're going to end up with an impossible conundrum for any putative FAI.
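To make the "largest possible number of stakeholders" criterion concrete, here's a toy sketch in Python. The names, the outcomes, and the crude "count who is satisfied" rule are all my own illustrative assumptions, not anyone's actual FAI design; the point is only that once preferences conflict, some maximising rule has to pick winners and losers:

# Toy sketch (not a real FAI proposal): with conflicting preferences,
# "value for every stakeholder" is unsatisfiable, so fall back to the
# outcome that satisfies the largest number of stakeholders.

def most_satisfying_outcome(preferences, outcomes):
    """preferences: dict mapping stakeholder -> set of outcomes they'd accept."""
    def satisfied(outcome):
        return sum(1 for accepted in preferences.values() if outcome in accepted)
    return max(outcomes, key=satisfied)

# The stark example from above: some want to live, some want to die,
# and one wants everyone else as his personal slaves.
prefs = {
    "alice":   {"everyone lives"},
    "bob":     {"everyone lives"},
    "carol":   {"everyone dies"},      # conflicting value
    "gaddafi": {"everyone obeys me"},  # wants domination over others
}
outcomes = {"everyone lives", "everyone dies", "everyone obeys me"}

print(most_satisfying_outcome(prefs, outcomes))  # -> "everyone lives"; carol and gaddafi lose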
So given that scenario, what is really in all of our best interests if some of us aren’t going to get what we want and there is only one Earth?
One answer I’ve seen is that the AI will create as many worlds as necessary in order to accommodate everyone’s desires in a reasonably satisfactory fashion. So, Gaddafi will get a world of his own, populated by all the people who (for some reason) enjoy being oppressed. If an insufficient number of such people exist, the FAI will create a sufficient number of non-sentient bots to fill out the population.
The AI can do all this because, as a direct consequence of its ability to make itself smarter exponentially, it will quickly acquire quasi-godlike powers, by, er, using some kind of nanotechnology or something.
By extrapolation, it seems likely that the cheapest implementation of the different-worlds-for-conflicting-points-of-view idea is some kind of virtual reality, if it proves too difficult to give each human its own material world.
Yes, and in the degenerate case you'd have one world per human. But I doubt it would come to that, since (a) we humans really aren't as diverse as we think, and (b) many of us crave the company of other humans. In any case, the FAI will be able to instantiate as many simulations as needed, because it has the aforementioned nano-magical powers.
Indeed. It’s likely that many of the simulations would be shared.
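If it helps to picture how the sharing would work, here's a minimal sketch of that grouping. The names, the crude compatible() rule, and the greedy assignment are all hypothetical; it just shows how the degenerate one-world-per-person case only arises when nobody's wants fit together:

# Toy sketch of the "shared simulations" idea: put each person in an existing
# world only if their wants don't conflict with anyone already living there,
# otherwise spin up a new world. Worst (degenerate) case: one world per person;
# if people are as similar as suggested above, far fewer worlds are needed.

def assign_worlds(people, compatible):
    """people: list of ids; compatible(a, b) -> True if a and b can share a world."""
    worlds = []  # each world is a list of person ids
    for person in people:
        for world in worlds:
            if all(compatible(person, resident) for resident in world):
                world.append(person)
                break
        else:
            worlds.append([person])  # nobody compatible: a fresh world (or VR instance)
    return worlds

# Illustrative preferences: someone who wants to dominate conflicts with
# everyone except those who (for some reason) enjoy being oppressed.
wants = {"alice": "coexist", "bob": "coexist", "carol": "coexist",
         "gaddafi": "dominate", "igor": "submit"}

def compatible(a, b):
    pair = {wants[a], wants[b]}
    return pair == {"coexist"} or pair == {"dominate", "submit"} or pair == {"submit"}

print(assign_worlds(list(wants), compatible))
# -> [['alice', 'bob', 'carol'], ['gaddafi', 'igor']]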
What I find interesting to speculate on, then, is whether we might be either forcibly scanned into the simulation or plugged into some kind of brain-in-a-vat scenario à la The Matrix.
Perhaps the putative AI might make the calculation that most humans would ultimately be OK with one of those scenarios.
Meh… as far as I’m concerned, those are just implementation details. Once your AI gets a hold of those nano-magical quantum powers, it can pretty much do anything it wants, anyway.