Con: Everybody will probably die. This solution magnifies instability in the system. A single person who is insane, evil, or careless could potentially create an extinction event. At the very least, they could cause mass destruction within a country that would take huge efforts to crush.
I agree that it’s possible that in this scenario everyone will die, but I am not sure why you seem to think it is the most likely outcome. Governments will probably have large numbers of these or comparable weapons before the people do, or will create comparable weapons once they observe their populace building weapons with 3-D printers. Given that, I think it’s more likely that the power the people (including criminal organizations) wield via killer robots will be kept in check than that any of these groups will be able to rove around and kill everyone. Perhaps you envision a more complex chain of events unfolding? Do you expect a clusterfuck? Or is there some other course you think things would take? What, and why?
We are considering a scenario where technology has been developed and disseminated sufficiently to allow Joe Citizen to produce autonomous killer robots with his home-based, general-purpose automated manufacturing device. People more intelligent, educated, resourceful, and motivated than Joe Citizen are going to be producing things even more dangerous. And producing things that produce things that… I just assume that kind of environment is not stable.
Ok, so it’s not the killer robots you envision killing off humanity but the other technologies that would likely be around at that time, and/or the whole mixture of insanity put together?
In particular, the technologies being used to create killer robots, and so necessarily around at the time: sufficiently general, small-scale, but highly complex manufacturing capability combined with advanced mobile automation. The combination is already notorious.
You know, we’ve invented quite a few weapons over time and have survived quite a few “replicators” (the Black Death will be my #1 example)… we’re not dead yet, and I’m wondering if there are some principles keeping us alive that you and I have overlooked.
Here’s a shot at what those could be:
1) Regarding self-replicators:
Self-replicators make near-perfect copies of themselves, so they are optimized to work in most, but not all, situations. This means that there’s a very good chance that at least some of a given species will survive whatever the self-replicators are doing.
Predators strike their prey as terrifying, but their weakness is that they depend on that prey: predators of all kinds die when they run out of prey. Some prey probably always hides, so unless the predator is really intelligent, it is likely that some prey will survive and get a break from the predators, which they can use to develop counter-strategies.
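As a rough illustration of that point, here is a toy model of my own (nothing from this thread, and every parameter value is an arbitrary assumption): the classic Lotka-Volterra predator-prey dynamics, modified so that a fixed number of prey sit in a refuge the predators cannot reach.

```python
# Toy Lotka-Volterra predator-prey model with a prey "refuge".
# All parameter values are hypothetical, chosen only for illustration.

def simulate(prey=40.0, predators=9.0, refuge=5.0,
             growth=0.1, predation=0.02, efficiency=0.01, death=0.1,
             dt=0.01, steps=200_000):
    """Euler-integrate the dynamics; only prey above `refuge` are exposed."""
    for _ in range(steps):
        exposed = max(prey - refuge, 0.0)   # hidden prey can't be eaten
        d_prey = growth * prey - predation * exposed * predators
        d_predators = efficiency * exposed * predators - death * predators
        prey = max(prey + d_prey * dt, 0.0)
        predators = max(predators + d_predators * dt, 0.0)
    return prey, predators

if __name__ == "__main__":
    print(simulate())             # with a refuge: both populations persist
    print(simulate(refuge=0.0))   # no refuge: boom-bust cycles, risk of a crash
```

With the refuge, predation shuts off whenever prey numbers fall to the hidden population, so some prey always survive and the system settles down; without it, the boom-bust cycles can swing a population arbitrarily close to zero. That is the “some prey always hides” principle in miniature.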
2) Regarding weapons:
For this discussion, we’ve been talking almost exclusively about offensive weapons. However, governments create defenses as well, often with the intent of countering their own offensive weapons. I don’t know much about what sorts of defensive weapons there could be in the future; do you? If not, this lack of information about defensive weapons might be causing us to exaggerate the risk of offensive weapons.
Governments must value defense, or else they would not invest in it and would instead put those resources into offense. Looking at it this way, offense is slowed down by defense, and there may be a certain ratio of defensive power to offensive power that is constantly maintained, because it is an intelligent agent creating these weapons and it is motivated to have both offense and defense. If defense keeps pace with offense for this or any other reason (maybe reasons having to do with the insights that technological advancement provides), then there may be far less risk than we’re perceiving.
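Here is a minimal sketch of that constant-ratio idea, under assumptions I am making up purely for illustration: a state splits a fixed budget B between offense o and defense d, and both show diminishing (logarithmic) returns.

```latex
% Toy allocation model (illustrative assumptions only).
\[
  \max_{o,d}\; U(o,d) \;=\; \alpha \ln o + \beta \ln d
  \qquad \text{s.t.} \qquad o + d = B .
\]
% Substitute d = B - o and set dU/do = 0:
\[
  \frac{\alpha}{o} = \frac{\beta}{B - o}
  \;\;\Longrightarrow\;\;
  o^{*} = \frac{\alpha}{\alpha+\beta}\,B , \quad
  d^{*} = \frac{\beta}{\alpha+\beta}\,B , \quad
  \frac{d^{*}}{o^{*}} = \frac{\beta}{\alpha} .
\]
```

Under these (admittedly strong) assumptions, the defense-to-offense ratio depends only on how the agent weighs the two, not on the size of the budget, so the ratio would be maintained even as total spending grows.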
If we reach maximum useful offense (I’ll roughly define this as the ability to instantly destroy, with precise targeting, every person or autonomous weapon in the world that is a threat), there will be no point in focusing on offensive weapons anymore. If maximum useful offense is reached (or perhaps an even earlier point… maybe one where the offensive capabilities of the enemy are too harrowing and your own are overkill), then this would be the point at which the balance in what we focus on would likely shift. By focusing primarily or solely on defense, we could enter an era where war is infeasible. Though after all the factors that would have a lasting effect on whether it was easier to make progress in defense or offense faded (such as factories to build defensive items, or laborers trained in defense), we’d be back to square one. But a “defense era” might give us time to solve the problem, after we have all woken up to how critical it is and also have specifics on the situation.
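And the “era shift” at maximum useful offense falls out of the same toy model if we cap the useful amount of offense at some level (again, my own stipulation, not anything established in this thread):

```latex
% Same toy model, but offense beyond a cap \bar{o} adds nothing
% ("maximum useful offense").
\[
  U(o,d) \;=\; \alpha \ln\bigl(\min(o,\bar{o})\bigr) + \beta \ln d ,
  \qquad o + d = B .
\]
% Once the budget is large enough that the unconstrained optimum would
% exceed the cap, the agent holds offense at \bar{o} and puts every
% additional unit of budget into defense:
\[
  B \;\ge\; \frac{\alpha+\beta}{\alpha}\,\bar{o}
  \;\;\Longrightarrow\;\;
  o^{*} = \bar{o}, \qquad d^{*} = B - \bar{o},
  \qquad \frac{d^{*}}{B} \to 1 \ \text{as}\ B \to \infty .
\]
```

Past that threshold, all marginal investment flows to defense, which is one way to picture the focus shifting once your own offensive capabilities are overkill.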