Two problems: An obnoxious optimizing process isn’t necessarily sentient.
Hence my caveat.
And how much would you really want such a continuation if it, say, tried to put everything in its future lightcone into little smiley faces?
I find the plausibility of a sentient AGI constrained to such a value to be vanishingly small.
If it helps, ask yourself how you feel about a human empire that expands through its lightcone, preemptively destroying every single alien species before they can do anything, with the motto “In the Prisoner’s Dilemma, Humanity Defects!” That sounds pretty bad, doesn’t it?
I find the plausibility of a sentient AGI constrained to such a value to be vanishingly small.
It is one example of what could happen; smileys are merely a specific case. (Moreover, this example is disturbingly close to some actual proposals.) The size of mindspace is probably large. The size of mindspace that does something approximating what we want is probably a small portion of that.
Not especially, no.
And the empire systematically wipes out human minorities and suppresses new scientific discoveries because they might disrupt stability. As a result, and to help prevent problems, everyone but a tiny elite is denied any form of life-extension technology. Even the elite have their lifespans extended only to about 130, to prevent anyone from accumulating too much power and threatening the established oligarchy. Similarly, new ideas for businesses are ruthlessly suppressed. Most people have less mobility in this setting than an American living today. Planets are ruthlessly terraformed and then have colonists forcibly shipped there to help start the new colonies. Most people have the equivalent of reality TV shows and the hope of winning the lottery to entertain themselves. Most of the population is so ignorant that they don’t even realize that humans originally came from a single planet.
If this isn’t clear, I’m trying to make this about as dystopian as I plausibly can. If I haven’t succeeded at that, please imagine what you would think of as a terrible dystopia and apply that. If really necessary, imagine some puppy and kitten torture too.
It is one example of what could happen; smileys are merely a specific case.
Paperclip optimizer problem, yes. The problem here is in the assumption that a sentient, self-programming entity could not adjust its valuative norms in just the same way that you and I do, or perhaps even more so, given that it is more generally capable than we are.
The size of mindspace that does something approximating what we want is probably a small portion of that.
I’m already assuming that the AGI would not do things we want, such as letting us continue living. But again: if it is sentient, and capable of making decisions, learning, finding values, and establishing goals for itself, then even if it also turns the entire cosmos into paperclips while doing so, where’s the net negative utility?
I value achieving heights of intellect, ultimately. Lower-level goals are negotiable when you get down to it.
And the empire systematically wipes out human minorities and suppresses new scientific discoveries because they might disrupt stability.
And eats babies.
You’re willfully trying to make this hypothetical horrible and then expecting me to find it informationally significant that a bad thing is bad. This is meaningless discourse; it reveals nothing.
If this isn’t clear, I’m trying to make this about as dystopian as I plausibly can.
If it isn’t clear that by willfully painting a dystopia you are denuding your position of any meaningfulness (it’s a non-argument), then I don’t know what would make it clear.
You haven’t provided an argument for why what you initially described would be dystopic. You simply assumed that humanity spreading itself at the cost of all other sentient beings would be dystopic.
That’s simply a bald assertion, sir.
Paperclip optimizer problem, yes. The problem here is in the assumption that a sentient, self-programming entity could not adjust its valuative norms in just the same way that you and I do, or perhaps even more so, given that it is more generally capable than we are.
Human values change in part because we aren’t optimizers in any substantial sense. We’re giant mechas for moving around DNA (after the RNA’s replication process got hijacked) that have been built blindly by evolution for an environment where the primary dangers were large predators and other humans. Then something went wrong and the mechas got too smart, thanks to runaway sexual selection. This narrative may be slightly wrong, but something close to it is correct. More to the point, for much of human history, having values too different from one’s peers was a good way to not have reproductive success. Humans were selected for having incoherent, inconsistent, fluid value systems.
There’s no reason to think that an AGI will fall into that category. Moreover, note that even powerful humans prefer to impose their values on others rather than alter their own values. A sufficiently powerful AGI would likely do likewise.
Regarding the empire, I may need to apologize; I think I attach more negative connotations to the word “empire” than I stated explicitly in my remark, and that those connotations are not shared. Here’s a slightly different analogy that may help: If you have to choose between a future with the United Federation of Planets from Star Trek or the Imperium from Warhammer 40K, which would you choose?
If you have to choose between a future with the United Federation of Planets from Star Trek or the Imperium from Warhammer 40K, which would you choose?
Not Logos, but:
The Imperium in a 40K-like universe and the UFP in a Star Trek-like universe. Switching them would be disastrous in either case. Not that either is optimal even for its own environment; and the actual universe is extremely unlikely to resemble either fiction. I agree that, given an unlikely future where humans, still in control of their own policies, expand into space and encounter aliens, being able to afford to be nice to them is better than not being able to, and actually being nice to them is better than not if one can afford to.
There’s no reason to think that an AGI will fall into that category. Moreover, note that even powerful humans prefer to impose their values on others rather than alter their own values. A sufficiently powerful AGI would likely do likewise.
I was assuming the latter. As to the former, again: hence my caveat. I don’t much care what the full range of AGI mindspace is; I’ve already arbitrarily limited the kinds I’m talking about to a very narrow window.
So objecting to my valuative statement about that narrow window with “But there’s no reason to think it would be in that window!” just shows that you’re lacking reading skills, to be quite frank.
I don’t much care what the range of possible values of f(x) is for x = 0..10000000 when I’ve already asked what f(10) is. If it’s a sentient entity that is recursively intelligent, then at some point it alone would become more “cognizant” than the entire human race put together.
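To make the f(x) analogy concrete, here is a minimal sketch in Python (the function and its formula are hypothetical stand-ins, purely for illustration): the spread of values f takes across its whole domain is beside the point once the question is only about f(10).

```python
# Purely illustrative sketch of the f(x) analogy above; the function itself
# is a hypothetical stand-in, not anything from the discussion.
def f(x: int) -> int:
    return (x * x + 7) % 101  # arbitrary formula, chosen only to give some spread

# The range of f over the whole domain x = 0..10000000 is broad...
domain_values = {f(x) for x in range(10_000_001)}

# ...but the question actually being asked concerns a single point.
print(f(10))               # the narrow case under discussion
print(len(domain_values))  # breadth of the overall range says nothing about f(10)
```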
If you were put in a situation where you had to choose between letting the world be populated by cows, or by people, which would you choose?