That rush of confidence and almost righteousness you had when you posted that? No offense, but learn to recognize that feeling, it’s oversimplification or maybe wishful thinking.
MIRI depends on private donations. Private donors need to make money in order to donate it. MIRI depends on the power grid, sewer lines, police to deter street crime, and access to its FDIC-insured bank account. MIRI does business with various other businesses, some of which depend on government contracts to remain in business, and all of which are also embedded in an enormously complex web of private and government entities. Individual members of MIRI are also part of this web, and if they can’t buy groceries or pay the rent, they will either a) stop showing up for work so they can deal with their immediate problems or b) continue showing up for work, let their stuff get repossessed, and then either b.1) become a burden on someone else (maybe MIRI, while its liquid assets last) or b.2) starve.
I don’t want this situation to be true any more than you do. I find it abhorrently bad design. But none of that changes that it is true, and denying it amounts to contributing to the problem by refusing to consider how one can insulate oneself, and the things one cares about, from it.
Your comment would be better without the first paragraph.
You are not a Legilimens; please do not pretend that you know what other people are feeling. It’s both epistemically and conversationally rude.
“please do not pretend that you know what other people are feeling” should be emblazoned in giant glowing text above every internet forum.
Is this a thing any human can really do? I mean, we evolved to quickly recognize emotions in others.
I agree that in the context of an Internet discussion, the sensory bandwidth is so low that miscalibration is frequent, and so we should strive to achieve “non-empathy”. We do, though, have a whole section of our brain devoted to just that, so I suspect this particular bias is hopeless.
You can certainly use part of the ‘empathy’ modelling capacity to realise that telling other people what they feel tends to piss people off when you are wrong (and sometimes also when you are right). Failing to adequately account for such likely reactions isn’t an inevitability; it is a social skills failure, and a fairly blatant one at that.
Most people learn not to do this after embarrassing themselves a couple of times when it backfires. (An exception is when deliberately attempting to provoke or one-up another (“U mad bro?”).)
That’s not what I was getting at: telling other people how they feel may well be a social failure, but not pretending to know how they feel? That seems much harder, since we evolved to base our social interactions on guessing the emotions of the other members of the pack.
We also evolved the capacity to suspect that a thought we have might be wrong, and to develop notions of confidence.
Don’t strive for ‘non-empathy’. Strive for ‘not being overconfident’. Also, keep in mind Scalzi’s maxim, “The failure mode of clever is asshole.”
Also keep in mind Voltaire’s maxim, “A witty saying proves nothing”. ;)
Did it look like I was trying to prove something with that? Once you’ve seen it, you can judge it for yourself.
The proposed scenario was “prolonged recession with severe government austerity”, not “zombie apocalypse”.
To a first approximation, I suspect that a lot of the boring existential risks that don’t involve things that tile the universe with stuff look pretty much the same by the time they roll around to the place you live: Democratic Republic of the Congo
Oh, yeah, sorry, I forgot to make the connection back to non-boring existential threats:
To a first approximation, I suspect that a lot of the boring existential risks that don’t involve things that tile the universe with stuff look pretty much the same by the time they roll around to the place you live: Democratic Republic of the Congo, and people get so preoccupied with not starving that they lose interest in friendly AI and rationality except in its most instrumental applications.
The rush of adrenaline and almost righteousness I had when I posted the above response? I really need to recognize that feeling; it’s called being pissed off at a mental model I have of a certain worldview, one from which I would expect a response similar to the parent post.
ChristianKl, I am sorry. Your later posts show that you are seriously addressing the topic and are not the straw man I was attacking. Also, you at no point said “this does not affect MIRI so it’s not a problem”. You were only rebutting my assertion that a Treasury default is an existential risk. I should have responded strictly to that instead of getting personal.
Thank you to MrMind for saying:

we evolved to quickly recognize emotions in others. I agree that in the context of an Internet discussion, the sensory bandwidth is so low that miscalibration is frequent, and so we should strive to achieve “non-empathy”

and to wedrifid for saying:

You can certainly use part of the ‘empathy’ modelling capacity to realise that telling other people what they feel tends to piss people off when you are wrong (and sometimes also when you are right).
This was what I needed, apparently, to come around.
That rush of confidence and almost righteousness you had when you posted that? No offense, but learn to recognize that feeling, it’s oversimplification or maybe wishful thinking.

There’s not much emotion in the lines I wrote. You are the person who’s emotional because of some perceived danger to yourself.
There’s no wishful thinking behind the notion that a lot of scientific research is dangerous. I’m in favor of scientific research because without it I will certainly die in the next hundred years. On the other hand, the idea that scientific research reduces existential risk is naive.
Yudkowsky was working on building AGI when he got the insight that the likely outcome of building an AGI is that the AGI goes bad and kills everyone. Then he grew up and thought about whether pursuing that problem is the right thing to do.
There are way too many scientists who just naively want to believe that they are doing good when they are endangering humanity. The idea that everyone is on the same team when it comes to reducing X-risks is wishful thinking.
Bottom line, MIRI and similar projects only exist in countries rich enough to have the time and resources to devote to future risks. If you believe that MIRI reduces existential risks, then something that is a risk to MIRI is itself an existential risk to some extent.
If there’s no one rich enough to engage in AGI research, you don’t need MIRI to prevent existential risk.
Doesn’t this extend to a generalized argument against technological advancement, since any of it might cause existential risks?
Not necessarily. Couldn’t one argue that technological advancement is neutral? It’d be hard for farmers to detect and blow up incoming asteroids, for example.
Don’t think “neutral” is the right word, it’s more like technological progress has two consequences pushing in different directions. On the one hand, tech makes humanity better equipped to deal with existential risk that is there regardless (e.g. asteroids). On the other hand, tech creates new kinds of existential risk (e.g. grey goo). Which effect is stronger/more important is debatable.
Yes, it does. If the main thing you care about is existential risk, then getting rid of all technological advancement is beneficial.
The average technological advance raises existential risk. Pushing technology for its own sake in the hope that it solves existential risk doesn’t make sense.
Do you mean at the current level of technology or do you mean at all times everywhere?
For example, our ancestors were nearly wiped out by an ice age...
I think if I sample technologies at random, the average technology was developed in the 20th or 21st century.
Which ice age do you mean? The last one? What evidence do you have for that claim?
See e.g. this or this.
I believe the most likely existential risk is a Malthusian Crunch.
Unlike many of the optimistic transhumanists out there, I believe that we are in a constant race between technology opening up new resources (or more efficient use of existing ones) and runaway population growth (which contributes to an astonishing array of seemingly unrelated world problems). Whenever technology starts to lose you have overshoot followed by civilizational collapse.
We have only a limited number of such collapse cycles before we exhaust whatever the rate-limiting resources turn out to be and permanently foreclose on expanding beyond Earth and having any sort of shot at being the species that beats The Great Filter.
Moreover, a collapse pretty much guarantees that anybody who is cryosuspended before that time will permanently and irrevocably die, with no hope of reprieve.
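To make the overshoot-then-collapse dynamic concrete, here is a minimal toy model. This is a sketch only: every number in it is invented for illustration and nothing is calibrated to real demographic or resource data. Population grows while a resource stock plus its regrowth can feed everyone, overshoots the sustainable level by drawing down the stock, then crashes to what regrowth alone supports.

```python
# Toy overshoot model: population grows while it can be fed, drawing down a
# renewable resource stock; when the stock runs out, population falls to the
# level that regrowth alone can feed. All parameters are invented.

def simulate(steps=120):
    pop = 1.0           # population, arbitrary units
    stock = 100.0       # standing resource stock
    regen = 5.0         # resource regrowth per step (the "technology" term)
    need = 1.0          # resource consumed per person per step
    growth = 0.03       # population growth rate while everyone is fed
    history = []
    for t in range(steps):
        available = stock + regen
        demand = pop * need
        if demand <= available:
            stock = min(100.0, available - demand)  # stock regrows, capped
            pop *= 1 + growth                       # fed population keeps growing
        else:
            pop = available / need                  # die-off to what can be fed
            stock = 0.0
        history.append((t, pop, stock))
    return history

for t, pop, stock in simulate():
    if t % 10 == 0:
        print(f"t={t:3d}  pop={pop:7.2f}  stock={stock:6.1f}")
```

With these made-up numbers the sustainable population is regen/need = 5, but the stock lets population climb to roughly 2.5 times that before the crash, and the crash lands at the regrowth-limited level rather than at zero. Raising regen fast enough, i.e. technology winning the race, is what avoids the collapse.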
Empirically, in reality, there is no runaway population growth.
Empirically, what level of population growth would it take for you to consider it runaway?
Population growth rates are not steady-state. They are a function of many things, notably the prevalent wealth and education (which tend to go together) in a society. So far all human societies which reached a certain level of wealth sharply curtailed their growth rates and in many cases actually sent them into negatives.
...and this wealth is possible because of technological growth. We might make the world wealthy enough fast enough to bring population far enough down to be sustainable, but it still amounts to a race between technology and population growth, which was my original point: invent or die
Your original point was that each technological advance enables another jump in the population.
My point is that in reality this does not happen: a certain level of technology/wealth/education (already attained in large parts of the world) stops population growing. It does not enable further expansion.
Where did I say that?
Here is my actual original point.
Well, we’d better hope that this trend causes the population to level off fast enough to avoid overshoot. Maybe we should be just a little bit curious about how likely that is. And we should remember to weigh the slowing of population growth against the fact that per-capita resource demand increases.
Let’s continue this thread here please.
Population growth is primarily a problem in Africa. With present technology it can mean genocide in Africa. Civilizational collapse in Africa is a humanitarian tragedy, but it shouldn’t bring down Europe, the Americas, or China.
Or, a briefer version of the below:
Europe, the Americas, and China are all part of the same global economy. They bid for the same collection of fixed resources and space. They share the same commons and the same tragedy of the commons. Just as not being directly linked to the government doesn’t mean you won’t be affected by its collapse, the same holds on a global scale.
Okay, so I’ll try to learn from recent experience and not flame.
Deep breath.
Very partial list of how population growth can bring down Europe, the Americas, China, and everyplace else:
Waves of refugees straining the local infrastructure past the breaking point.
Demand for petroleum rising faster than the rate of new oil reserves being discovered and faster than alternative technologies can be developed and brought to market on a sufficient scale.
Ditto for accessible deposits of some metals.
Pollution.
Pandemics spreading from regions of high population density to everywhere.
Deforestation.
Global warming and sea level rise.
Competition for resources leading to wars.
Environmentalists like to view this as our species being irresponsible. They’re not seeing the big picture. At any given level of resources and technology, there is a finite carrying capacity. If we exceed that carrying capacity, we will have a die-off soon after no matter how “responsible” we are. If there were only a few million of us on the planet we could spend our days hunting endangered species from 1 mile-per-gallon SUVs that run on coal and melted plastic and still be okay.
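The carrying-capacity point can be written as a one-line relation. This is a deliberate oversimplification, since both quantities are crude aggregates:

```latex
K = \frac{R}{d} \qquad \text{(a die-off is expected once } N > K\text{)}
```

where N is population, R is the sustainable resource throughput at a given technology level, and d is per-capita demand. Raising R (technology) or cutting d (“responsibility”) both raise K; a tenth of the per-capita demand supports ten times the population, which is also the point a later reply makes about energy use.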
It’s not easy to migrate away from Africa, and it’s a matter of political willingness to accept “waves of refugees”. The highest demand for petroleum and metals doesn’t come from the places with high population growth. A US citizen consumes ten times the amount of energy as a Nigerian, and Nigeria uses a lot of energy for an African nation because it has oil.
An American house cat produces more CO2 than some Africans.
If we exceed that carrying capacity, we will have a die-off soon after no matter how “responsible” we are.

There’s no reason that everyone has to die.
But on reflection, I grant you that developing alternative energy technology might reduce some risks.
If we exceed that carrying capacity, we will have a die-off soon after no matter how “responsible” we are.

But being responsible can mean using only a tenth as much energy, which means you could have ten times as many people.
When it comes to the issue of overpopulation, the trend is that birth rates are going down. The problem is moving in the right direction. As far as current trends go, it’s unlikely that the population will double.
It’s not easy to migrate away from Africa, and it’s a matter of political willingness to accept “waves of refugees”.

Well, it’s happening in Europe already. The US is having immigration issues of its own as well.
When it comes to the issue of overpopulation, the trend is that birth rates are going down. The problem is moving in the right direction. As far as current trends go, it’s unlikely that the population will double.

As I said to Lumifer, birth rates are going down because of wealth, which is driven by technology. As you pointed out, though:

A US citizen consumes ten times the amount of energy as a Nigerian.

...wealth is also accompanied by increased resource demand, which may cancel out the time that diminished population growth buys us.
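As a toy illustration of that offsetting (the rates below are made up, chosen only to be in a plausible ballpark): total resource demand is population times per-capita demand, so for small rates its growth rate is roughly the sum of the two.

```python
# Total demand = population x per-capita demand, so (for small rates) its
# growth rate is roughly the sum of the two. Illustrative numbers only.

pop_growth = 0.009      # ~0.9%/yr population growth (slowing)
demand_growth = 0.020   # ~2.0%/yr per-capita demand growth as wealth rises

total = (1 + pop_growth) * (1 + demand_growth) - 1
print(f"total demand growth: {total:.2%}/yr")      # ~2.92%/yr
print(f"doubling time: ~{0.693 / total:.0f} yr")   # ~24 yr (rule-of-70 style)
```

In other words, even with population growth slowing toward 1%/yr, rising per-capita consumption alone can keep total demand doubling every few decades.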
By the way, the emphasis on Africa is misplaced. It might, as a continent, have the highest growth rates, but the most populous countries are outside Africa, and most of them grew by more than 20% between 1990 and 2010.
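For scale, annualizing that 20%-over-20-years figure (straightforward compounding, using only the quoted numbers):

```python
# 20% growth over the 20 years 1990-2010, annualized
rate = 1.20 ** (1 / 20) - 1
print(f"annual growth: {rate:.2%}")                 # ~0.92%/yr
print(f"doubling time: ~{0.693 / rate:.0f} years")  # ~76 years at that rate
```

So those countries would still double in under a century at that rate.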
That’s all pretty standard scaremongering that has been making the rounds since the early 1970s. There were no signs it was likely to happen then, and there are no signs it’s likely to happen now.
What would be the signs that we would be observing if it were likely to happen?
A large, stable trend of rising resource and energy scarcity (and consequently rising prices) across most resources and kinds of energy, with scarcity growing fast enough that it’s unreasonable to expect technology to compensate.
Here then.
Granted, these are from biased sources, because most sources are biased. But we must balance that against our own confirmation bias. I don’t have to agree with them on their proposed solutions in order to recognize that there is a credible problem.
I think it would be very worthwhile to think about what exactly would be a reasonable rate at which we can expect technology to compensate. That’s what I’m trying to say: not that the scaremongers are right, but that we don’t have good estimates for demand growth versus technological growth. Actually, we have excellent estimates for demand growth; it’s the compensating technological growth rates that are hard to forecast accurately. If we can’t reliably forecast them, I submit that the safe course of action is to pour resources into many different types of basic and applied research, instead of believing that the current rate of progress will suffice with no evidence (other than “been okay so far”) to back it up.
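A sketch of why that forecasting gap matters (every number here is invented): hold demand growth fixed at something like the rate computed above and vary the assumed rate at which technology expands effective supply. Small changes in the technology assumption move the crossover point by decades, which is exactly what makes “been okay so far” a weak basis for planning.

```python
# Demand growth is relatively predictable; the technology term is not.
# Vary the assumed tech rate and watch the "trouble date" swing by decades.
# All numbers invented for illustration.

DEMAND_GROWTH = 0.029   # total demand growth per year (see toy sum above)
HEADROOM = 2.0          # assume effective supply currently exceeds demand 2:1

for tech in (0.010, 0.020, 0.025, 0.028):
    demand, supply, year = 1.0, HEADROOM, 0
    while demand < supply and year < 500:
        demand *= 1 + DEMAND_GROWTH
        supply *= 1 + tech
        year += 1
    label = f"~{year} years" if year < 500 else ">500 years"
    print(f"tech at {tech:.1%}/yr: demand overtakes supply in {label}")
```

With these inputs the crossover lands at roughly 37, 79, or 178 years, or beyond the horizon entirely, even though the assumed technology rates differ by under two percentage points.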
Let’s continue this discussion here please.