50% of the humans currently on Earth want to kill me because of my political/religious beliefs.
Has it? I’m under the impression technology has led to much more genocide and war. WWI and WWII were dependent on automatic weapons, the Holocaust was additionally dependent on trains etc., and the Rwandan genocide was dependent on radio.
Technology has the ability to be net good despite this mainly because:
1. Technology also leads to more growth, better/faster recovery after war, etc.
2. War leads to fear of war, so with NATO, nuclear disarmament, etc., people are reducing the dangers of war.
But it’s not clear that point 2 is going to be relevant until after AI has been applied in war, and the question is whether that will be too late. Basically we could factor P(doom) into P(doom|AI gets used in war)P(AI gets used in war). Though of course that’s only one of multiple dangers.
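One way to spell out that factoring, using the law of total probability (the second term covering the other dangers alluded to above), is:

\[
P(\text{doom}) = P(\text{doom} \mid \text{AI used in war})\,P(\text{AI used in war}) + P(\text{doom} \mid \text{AI not used in war})\,P(\text{AI not used in war})
\]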
Which political/religious beliefs?
Your impression is wrong. Technology is (on average) a civilizing force.
I’m not going into details about which people want to murder me and why for the obvious reason. You can probably easily imagine any number of groups whose existence is tolerated in America but not elsewhere.
You link this chart:
… but it just shows the percentage of years with wars without taking the severity of the wars into account.
Your link with genocides includes genocides linked with colonialism, but colonialism seems driven by technological progress to me.
This stuff is long-tailed, so past average is no indicator of future averages. A single event could entirely overwhelm the average.
See also this classic blogpost: https://blog.givewell.org/2015/07/08/has-violence-declined-when-large-scale-atrocities-are-systematically-included/
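As a rough illustration of the long-tail point above (an editorial sketch with arbitrary parameters, using a Pareto distribution as a stand-in for per-event severities), a single draw can dominate the observed average when the tail index is at or below 1:

```python
# Sketch: with a heavy-tailed distribution, one event can dominate the
# historical average, so the past mean says little about the future mean.
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.9                                      # tail index; for alpha <= 1 the true mean is infinite
severities = 1 + rng.pareto(alpha, size=10_000)  # stand-in for per-event death tolls (arbitrary units)

largest = severities.max()
print(f"observed mean severity: {severities.mean():,.1f}")
print(f"largest single event: {largest:,.1f}")
print(f"share of the total from that one event: {largest / severities.sum():.1%}")
```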
If you look at the probability of dying by violence, it shows a similar trend
I agree that tail risks are important. What I disagree with is that only tail risks from AGI are important. If you wish to convince me that tail risks from AGI are somehow worse than nuclear war, killer drone swarms, biological weapons, global warming, etc., you will need evidence. Otherwise, you have simply recreated the weak argument (which I already agree with) “AGI will be different, therefore it could be bad”.
Probability normalizes by population though.
My claim is not that the tail risks of AGI are important; my claim is that AGI is a tail risk of technology. The correct way to handle tail risks of a broad domain like technology is to perform root-cause analysis into narrower factors (“AGI” and “nuclear weapons” vs. “speed boats”, etc.), so you can specifically address the risks of severe stuff like AGI without getting caught up in basic stuff like speed boats.
Okay, I’m not really sure why we’re talking about this, then.
Consider this post a call to action of the form “please provide reasons why I should update away from the expert-consensus that AGI is probably going to turn out okay”.
I agree that talking about how we could handle technological changes as a broader framework is a meaningful and useful thing to do. I just don’t think it’s related to this post.
My previous comment was in opposition to “handling technological changes as a broader framework”. Like I was saying, you shouldn’t use “technology” broadly as a reference at all; you should consider narrower categories like AGI which individually have high probabilities of being destructive.
If AGI has a “high probability of being destructive”, show me the evidence. What amazingly compelling argument has led you to have beliefs that are wildly different from the expert-consensus?
I’ve already posted my argument here, I don’t know why you have dodged responding to it.
My apologies. That is in a totally different thread, which I will respond to.