The argument also works in the other direction. You would never be convinced that an AGI won’t be capable of killing all humans, because you can always say “oh well, you are just failing to see what a real superintelligence could do”, as if there weren’t important theoretical limits to what can be planned in advance.
I’m not the one relying on specific, cogent examples to reach my conclusion about AI risk. I don’t think it’s a good way of reasoning about the problem, nor do I think those “important theoretical limits” are where you think they are.
If you really, really, really need a salient one (which is a handicap), how about “doing the same thing Stalin did”, since an AI can clone itself and doesn’t need to sleep or rest?
“I’m not the one asking for specific examples” is a pretty bad argument, isn’t it? If you make an extraordinary claim, I would like to see some evidence (or at least a plausible scenario), and I am failing to see any. You could say that the burden of proof is on those claiming that an AGI won’t be almighty/powerful enough to cause doom, but I’m not convinced of that either.
I’m sorry, I didn’t get the Stalin argument, what do you mean?
I’ve edited the comment to clarify.
From roughly 1930 to the early 1950s, the Soviet Union’s government was basically entirely controlled by this guy named Joseph Stalin. Joseph Stalin was not a superintelligence and not particularly physically strong. He did not have direct telepathic command over the people in the coal mines or a legion of robots awaiting his explicit instructions, but he was able to force anybody in the country to do anything he said anyway. Perhaps a superintelligent AI that, for some absolutely inconceivable reason, could not master macro- or micro-robotics could work itself into the same position.
This is one of literally hundreds of potential examples. I know almost for a fact that you are smart enough to generate these. I also know you’re going to do the “wow, that seems complicated/risky, wouldn’t you have to be absurdly smart to pull that off with 99% confidence, what if it turns out that’s not possible even if...” thing. I don’t have any specific action plans for taking over the world handy that are so powerfully persuasive that you will change your mind. If you don’t get it fairly quickly from the underlying mechanics of the pieces in play (very complicated world, superintelligent AI, incompatible goals), then there’s nothing I’m going to be able to do to convince you.
If you make an extraordinary claim, I would like to see some evidence (or at least a plausible scenario), and I am failing to see any. You could say that the burden of proof is on those claiming that an AGI won’t be almighty/powerful enough to cause doom, but I’m not convinced of that either.
“Which human has the burden of proof” is irrelevant to the question of whether or not something will happen. You and I will not live to discuss the evidence you demand.
I think saying “there is nothing I’m going to be able to do to convince you” is an attempt to shut down discussion. It’s actually kind of a dangerous mindset: if you don’t think there’s any argument that can convince an intelligent person who disagrees with you, it fundamentally means that you didn’t reach your current position via argumentation. You are implicitly conceding that your belief is not based on rational argument—for, if it were, you could spell out that argument.
It’s OK to not want to participate in every debate. It’s not OK to butt in just to tell people to stop debating, while explicitly rejecting all calls to provide arguments yourself.
If you don’t think there’s any argument that can convince an intelligent person who disagrees with you, it fundamentally means that you didn’t reach your current position via argumentation. You are implicitly conceding that your belief is not based on rational argument—for, if it were, you could spell out that argument.
The world is not made of arguments. Most of the things you know, you were not “argued” into knowing. You looked around at your environment and made inferences. Reality exists distinctly from the words that we say to each other and use to try to update each others’ world-models.
It doesn’t mean that.
You’re right that I just don’t want to participate further in the debate and am probably being a dick.