I disbelieve that an AGI will kill all humans in a very short window of time
Most arguments for that are:
I can come up with ideas for doing that, and I am just a simple human.
We don’t know what plans an AGI could come up with.
Intelligence is dangerous and has successfully exterminated other species.
I am not convinced by those arguments
To the first argument: you can’t; you are just fooling yourself into believing that you can. Or at least that is my impression after talking with, and reading, the many people who think they have a plan for killing humanity in five minutes. This is a pretty bad failure of rationality, and I am pointing it out: the same people who come up with these plans are probably not making the effort to see why the plans might go wrong. If a plan might go wrong, an AGI won’t execute it, and that gives us time, which already invalidates the premise.
To the second argument: this is totally true, but it is also a weak argument. I have an intuitive understanding of how difficult it is to do X, and that makes me skeptical. For instance, if you told me that you have in your garage a machine made entirely out of paper that can put a 1,000-kilo satellite into orbit, I would be skeptical. I wouldn’t say it is physically impossible, but I would assign it a very low probability.
To the third argument: yes. But put a naked human in the wild and it will easily be killed by lions. It might survive for a while, but it won’t be able to kill all lions everywhere in the blink of an eye.