So, we have an “artificial general intelligence whose intellect dwarfs our own” which can “anticipate our needs and desires”… but we still have poverty and the risk of nuclear war? How is that possible?
The existence of a superintelligent AGI would not somehow magic the knowledge of nuclear ordnance out of existence, nor would that AGI magically make the massive stockpiles of existing ordnance disappear. Getting governments to destroy those stockpiles is, for the foreseeable future, a political impossibility. The existence of a grand AGI doesn’t change the nature of humanity, nor does it change how politics works.
The same goes for the rich and the working classes: the existence of a superintelligent AGI does not mean that the world will magically transform overnight into a communist paradise. Of course, you do have a sound point if you argue that once the AGI and its working machines become sophisticated and common enough, such a paradise becomes possible to create. That still does not mean it would be politically expedient enough to actually form.
However, let’s assume that a communist paradise is formed, and that it is at this point that mankind realizes the AGI is doing everything, leaving us very little meaning in our own existence. If we then begin down the path of transhumanism with cybernetics, there would still be a period in which these technologies are quite rare and therefore rationed. What many don’t realize is that a communist system and a capitalist system behave similarly when there is a resource or production shortfall. The only difference is that in a capitalist system money determines who gets the limited resource, while in any communist system politics would determine who gets it.
So in the end, even in a world in which we laze about, money doesn’t exist, and the AGI builds everything for us, new technologies that are still limited in number mean that some people will have more than others. Moreover, I do not see people submitting to an AGI to determine who gets what; the distribution of the product of the AGI’s work would be born of a human political system, and clearly those who game that system better would gain far more resources than everyone else, just as some people are better at business in our modern capitalist world.
I think most people here envision a full-blown AGI as being in control and not constrained by politics. If a government were to refuse to surrender its nuclear-weapons stockpile, the AGI would tell it “You’re not responsible enough to play with such dangerous toys, silly” and just take it away.
This is what we call ‘superlative futurism’, and it is basically theological thinking applied to the topic.
When you assume the superlative, you can handwave such things. It’s no different from “god is the greatest thing that can exist” and all the flawed arguments that flow from that.
My nightmare was a concept of how things would rationally be likely to happen, not how they ideally would happen. I had envisioned an AGI that was subservient to us and was everything that mankind hopes for. However, I also took into account human sentiment, which would not tolerate the AGI simply taking nuclear weapons away, or really the AGI forcing us to do anything.
As soon as the AGI makes any visible move to command and control people, the population of the world would cry out about the AGI trying to “enslave” humanity. Efforts to destroy the machine would begin almost instantly.
Human sentiment and politics need always be taken into account.
I had envisioned an AGI that was subservient to us and was everything that mankind hopes for.
What exactly will the AGI be subservient to? A government? Why wouldn’t that government tell the AGI to kill all its enemies and make sure no challenge to its power ever arises?
A slave AGI is a weapon and will be used as such.
The common assumption on LW is that after a point humanity will not be able to directly control an AGI (in the sense of telling it what to do) and will have to, essentially, surrender its sovereignty to it. That is exactly why so much attention is paid to ensuring that this AGI will be “friendly”.
A weapon is no more than a tool: a thing that, when controlled and used properly, magnifies the force its user is capable of. Given this relationship, I point out that an AGI subservient to man is not a weapon, for a weapon is a tool with which to do violence, that is, to apply physical force to another. An AGI, rather, is a tool that can be transformed into many types of tools. One possible tool it can be transformed into is indeed a weapon, but as I have pointed out, that does not mean the AGI will always be a weapon.
Power is neither good nor ill. Uncontrolled, uncontested power, however, is dangerous. Would you start a fire without anything to contain it? For sentient beings we possess social structures, laws, and reprisals to manage and regulate the behavior of that powerful force which is man. If even man is managed and controlled in the most intelligent manner we can muster, then why would an AGI be free of any such restraint? If sentient being A, due to its power, cannot be trusted to operate without rules, then how can we trust sentient being B, who is much more powerful, to operate without any constraints? It’s a logic hole.
A gulf of power is every bit as dangerous. When the power of two groups is too disparate, there is a situation of potentially dangerous instability. As such, it’s important for mankind to seek to improve itself so as to shrink this gap in power. Controlling an AGI and using it as a tool to improve man is one potential way to shrink that gulf.
Well, of course, but this whole field is basically applied theology.