Saying “I disbelieve <claim>” is not an argument even when <claim> is very well defined. Saying “I disbelieve <X>, and most arguments for <Y> are of the form <X> ⇒ <Y>, so I’m not convinced of <Y>” is admittedly more of an argument than the original statement, but I’d still classify it as not-an-argument unless you provide justification for why <X> is false, especially when there’s strong reason to believe <X>, and strong reason to believe <Y> even if <X> is false! I think your whole post is of this latter type of statement.
I did not find your post constructive because it made a strong & confident claim in the title, then did not provide convincing argumentation for why that claim was correct, and did not provide any useful information relevant to the claim which I did not already know. Afterwards I thought reading the post was a waste of time.
I’d like to see an actual argument which engages with the prior-work in this area in a non-superficial way. If this is what you mean by writing up your thoughts in a lengthier way, then I’m glad to hear you are considering this! If you mean you’ll provide the same amount of information and same arguments, but in a way which would take up more of my time to read, then I’d recommend against doing that.
I disbelieve that an AGI will kill all humans in a very short window of time
Most arguments for that are:
I can come up with ideas to do that, and I am a simple human.
We don't know what plans an AGI could come up with.
Intelligence is dangerous and has successfully exterminated other species.
I am not convinced by those arguments
You can’t; you are just fooling yourself into believing that you can. Or at least that’s my impression after talking to and reading many people who think they have a plan for successfully killing humanity in five minutes. This is a pretty bad failure of rationality, and I am pointing it out. The same people who come up with these plans are probably not making the effort to see why the plans might go wrong. If these plans would go wrong, an AGI won’t execute them, and that gives us time, which already invalidates the premise.
This is totally true, but it is also a weak argument. I have an intuitive understanding of how difficult it is to do X, and that makes me skeptical of it. For instance, if you told me that you have in your garage a machine made only of paper that can put a 1000-kilo satellite into orbit, I would be skeptical. I wouldn’t say it is physically impossible, but I would assign it a very low probability.
Yes. But put a naked human in the wild and it will easily be killed by lions. It might survive for a while, but it won’t be able to kill all lions everywhere in a blip of time.