Doom aside, do you expect AI to be smarter than humans? If so, do you nonetheless expect humans to still control the world?
Do humans control the world right now?
Okay, I’ll be the idiot who gives the obvious answer: Yeah, pretty much.
Who, by what metric, in what way?
Everyone who earns money exerts some control by buying food or whatever else they buy. This directs society to work on producing those goods and services. There’s also political/military control, but that kind of control is likewise held by humans, just a much narrower set of them.
Actually, this is the sun controlling the world, not humans. The sun exerts control by permitting plants to grow, and their fruit creates an excess of organic energy, which permits animals like humans to live. Humans have rather limited choice here; we can redirect the food by harvesting it and guarding against adversaries, but the best means to do so are heavily constrained by instrumental matters.
Locally, there is some control in that people can stop eating food and die, or overeat and become obese. Or they can choose what kinds of food to eat. But this seems more like “control yourself” than “control the world”. The farmers can choose how much food to supply, but if a farmer doesn’t supply what is needed, then some other farmer elsewhere will supply it, so that’s more “control your farm” than “control the world”.
The world revolves around the sun.
“I’ve begun worshipping the sun for a number of reasons. First of all, unlike some other gods I could mention, I can see the sun. It’s there for me every day. And the things it brings me are quite apparent all the time: heat, light, food, and a lovely day. There’s no mystery, no one asks for money, I don’t have to dress up, and there’s no boring pageantry. And interestingly enough, I have found that the prayers I offer to the sun and the prayers I formerly offered to ‘God’ are all answered at about the same 50% rate.”
-- George Carlin
I personally don’t control the world now. I (on average) expect to be treated about as well by our new AGI overlords as I am treated by the current batch of rulers.
Why do you expect that? Our current batch of rulers need to treat humans reasonably well in order for their societies to be healthy. Is there a similar principle that makes AI overlords need to treat us well?
50% of the humans currently on Earth want to kill me because of my political/religious beliefs. My survival depends on the existence of a nice game-theory equilibrium, not on the benevolence of other humans. I agree (note the 1 bit) that the new game-theory equilibrium after AGI could be different. However, historically, increasing the level of technology/economic growth has led to less genocide/war/etc., not more.
Has it? I’m under the impression technology has led to much more genocide and war. WWI and WWII were dependent on automatic weapons, the Holocaust was additionally dependent on trains etc., and the Rwandan genocide was dependent on radio.
Technology mainly has the ability to be net good despite this because:
1. Technology also leads to more growth, better/faster recovery after war, etc.
2. War leads to fear of war, so with NATO, nuclear disarmament, etc., people are reducing the dangers of war.
But it’s not clear that point 2 is going to be relevant until after AI has been applied in war, and the question is whether that will be too late. Basically we could factor P(doom) into P(doom|AI gets used in war)P(AI gets used in war). Though of course that’s only one of multiple dangers.
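Spelled out, that factoring is one term of the law of total probability; a minimal sketch of the full decomposition, writing “war” as shorthand for “AI gets used in war”:

\[
P(\mathrm{doom}) = P(\mathrm{doom} \mid \mathrm{war})\,P(\mathrm{war}) + P(\mathrm{doom} \mid \lnot\mathrm{war})\,P(\lnot\mathrm{war})
\]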
Which political/religious beliefs?
Your impression is wrong. Technology is (on average) a civilizing force.
I’m not going into details about which people want to murder me and why, for the obvious reason. You can probably easily imagine any number of groups whose existence is tolerated in America but not elsewhere.
You link this chart:
… but it just shows the percentage of years with wars without taking the severity of the wars into account.
Your link on genocides includes genocides tied to colonialism, but colonialism seems driven by technological progress to me.
This stuff is long-tailed, so past averages are no indicator of future averages. A single event could entirely overwhelm the average.
See also this classic blogpost: https://blog.givewell.org/2015/07/08/has-violence-declined-when-large-scale-atrocities-are-systematically-included/
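The long-tail point above is easy to see numerically. A minimal sketch, assuming a Pareto distribution with shape close to 1 as a stand-in for event severity; the distribution and parameters are illustrative choices, not estimates from the data under discussion:

```python
# Minimal illustration: with a heavy-tailed severity distribution, a single
# draw can contribute a large share of the total, so the sample average says
# little about what the next draw looks like.
# The shape parameter and sample size are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(seed=0)
alpha = 1.1                        # shape close to 1 => very heavy tail
samples = rng.pareto(alpha, size=100_000)

largest = samples.max()
print(f"sample mean over {samples.size} draws: {samples.mean():.2f}")
print(f"largest single draw: {largest:.2f}")
print(f"share of the total from that one draw: {largest / samples.sum():.1%}")
```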
If you look at the probability of dying by violence, it shows a similar trend
I agree that tail risks are important. What I disagree with is that only tail risks from AGI are important. If you wish to convince me that tail risks from AGI are somehow worse than nuclear war, killer drone swarms, biological weapons, global warming, etc., you will need evidence. Otherwise, you have simply recreated the weak argument (which I already agree with) that “AGI will be different, therefore it could be bad”.
Probability normalizes by population though.
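To spell out the normalization point: the per-capita rate divides by a denominator that has grown roughly 3.5x since 1940 (world population went from about 2.3 billion to about 8 billion), so the same absolute death toll D now registers as a much smaller “probability of dying by violence”:

\[
\frac{D}{8\ \text{billion}} \approx \frac{1}{3.5} \cdot \frac{D}{2.3\ \text{billion}}
\]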
My claim is not that the tail risks of AGI are important; my claim is that AGI is a tail risk of technology. The correct way to handle the tail risks of a broad domain like technology is to do root-cause analysis into narrower factors (“AGI”, “nuclear weapons”, “speed boats”, etc.), so you can specifically address the risks of severe categories like AGI without getting caught up in basic stuff like speed boats.
Okay, I’m not really sure why we’re talking about this, then.
Consider this post a call to action of the form “please provide reasons why I should update away from the expert consensus that AGI is probably going to turn out okay”.
I agree that talking about how we could handle technological changes as a broader framework is a meaningful and useful thing to do. I just don’t think it’s related to this post.
My previous comment was in opposition to “handling technological changes as a broader framework”. Like I was saying, you shouldn’t use “technology” broadly as a reference at all; you should consider narrower categories like AGI, which individually have high probabilities of being destructive.
If AGI has a “high probability of being destructive”, show me the evidence. What amazingly compelling argument has led you to have beliefs that are wildly different from the expert consensus?
I’ve already posted my argument here; I don’t know why you have dodged responding to it.
My apologies. That is in a totally different thread, which I will respond to.
Technology has also led to many shifts in power between groups based on how well they exploit reality: from hunter-gatherers to agriculture, to grand armies spreading an empire, to ideologies changing the fates of entire countries, and to economic & nuclear superpowers making complex treaties.
Soon anyone will be able to build a drone that will fly around the globe and kill the exact person they hate.