This area could really use better economic analysis. It seems obvious to me that some subset of workers can be pushed below subsistence, at least locally (imagine farmers being unable to afford rent because mechanized cotton plantations can outbid them for farmland). Surely there are conditions under which this would be true for most humans.
There should be a simple one-sentence counter-argument to “trade opportunities always increase population welfare”, but I’m not sure what it is.
I appreciate your desire for this clarity, but I think the counter-argument might actually just be: “the oversimplifying assumption that everyone’s labor just ontologically goes on existing is only true if society (and/or laws and/or voters-or-strongmen) make it true on purpose (which they tended to do, for historically contingent reasons, in some parts of Earth, for humans and some pets, between the late 1700s and now)”.
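The farmland example above can be made concrete with a toy sketch (all numbers hypothetical, chosen only for illustration): land goes to whoever bids the most rent, and nothing in trade theory guarantees that a displaced worker’s next-best option clears subsistence.

```python
# Toy land-rent model with hypothetical numbers. Land is allocated to the
# highest bidder; a worker's continued access to it is not guaranteed.

SUBSISTENCE = 10.0  # consumption a farmer needs per season to survive

def max_rent_farmer(output_per_plot=14.0, subsistence=SUBSISTENCE):
    """The most a subsistence farmer can bid for a plot and still survive."""
    return output_per_plot - subsistence  # 4.0 here

def rent_bid_plantation(revenue_per_plot=30.0, machine_cost=20.0):
    """What a mechanized plantation can bid for the same plot."""
    return revenue_per_plot - machine_cost  # 10.0 here

farmer_bid = max_rent_farmer()
plantation_bid = rent_bid_plantation()

# The plantation outbids the farmer, so the land reallocates. Total output
# rises, but the farmer's fallback (wage labor) pays below subsistence:
fallback_wage = 6.0
print(plantation_bid > farmer_bid)   # land goes to the plantation
print(fallback_wage < SUBSISTENCE)   # the displaced farmer is below subsistence
```

Note that the reallocation is efficiency-improving in aggregate (the land produces more), which is exactly why “welfare always increases” hides the distributional failure.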
You could ask: why is the Holocene extinction occurring when Ricardo’s Law of Comparative Advantage says that woolly mammoths (and many amphibian species) and cave men could have traded…
...but once you put it that way, it is clear that it really was NOT in the narrow short-term interests of cave men to pay the costs inherent in respecting the right to life and the right to property of beasts that can’t reason about natural law.
Turning land away from use by amphibians and towards agriculture was just… good for humans and bad for frogs. So we did it. Simple as.
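The mammoth-and-frog point can be put as a payoff comparison (all numbers hypothetical): comparative advantage only says trade beats autarky for both parties; it says nothing about trade beating expropriation for the stronger party, once you count the cost of enforcing the weaker party’s rights.

```python
# Hypothetical per-plot payoffs for the stronger party (e.g. humans):
trade_gain = 5.0        # surplus from trading with the weaker party
enforcement_cost = 2.0  # cost of respecting their life and property rights
take_gain = 8.0         # value of simply converting the land to agriculture

payoff_trade = trade_gain - enforcement_cost  # 3.0: trade beats autarky (0.0)
payoff_take = take_gain                       # 8.0: taking beats trading

# Ricardo guarantees payoff_trade > 0, not payoff_trade > payoff_take.
print(payoff_trade > 0.0)
print(payoff_take > payoff_trade)
```

The weaker party’s survival thus hinges on whether the stronger party’s values (or enforcement institutions) change that comparison, not on the gains from trade themselves.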
The math of ecology says: life eats life, and every species goes extinct eventually. The math of economics says: the richer you are, the more you can afford to be linearly risk-tolerant (which is sort of the definition of prudent sanity) over larger and larger choices, so the faster you’ll get richer than everyone else, and so there’s probably “one big rich entity” at the end of economic history.
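The “richer means more linearly risk-tolerant” claim can be sketched numerically. Assume (as one standard model, not the only one) agents with logarithmic utility of wealth facing a fixed-stakes, positive-expected-value gamble: the richer the agent, the closer its certainty equivalent gets to the raw expected value, so it takes bets the poorer agent must refuse, and compounds faster.

```python
import math

def certainty_equivalent(wealth, stake, p_win=0.55):
    """Certainty equivalent of a +/-stake bet for a log-utility agent:
    the sure change in wealth with the same expected utility as betting."""
    eu = (p_win * math.log(wealth + stake)
          + (1 - p_win) * math.log(wealth - stake))
    return math.exp(eu) - wealth  # CE of the *change* in wealth

# Expected value of the bet itself: 0.55 * 100 - 0.45 * 100 = +10
for w in (200, 1_000, 100_000):
    print(w, certainty_equivalent(w, 100))

# The poor agent's CE is negative (it rationally refuses this +EV bet);
# as wealth grows, the CE climbs toward the +10 expected value, i.e. the
# rich agent prices the same absolute gamble almost linearly.
```

This is only a toy, but it shows the mechanism behind the compounding claim: the same gamble is worth refusing at low wealth and nearly worth its full expected value at high wealth.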
Once humans close their hearts to other humans and “just stop counting those humans over there as having interests worth calculating about at all”, it really does seem plausible that genocide is simply “what many humans would choose to do, given those (evil) values”.
I think this is sort of the “ecologically economic core” of Eliezer’s position: kindness is simply not a globally instrumentally convergent tactic across all possible ecological and economic regimes. Right now quite a few humans want there to be no genocide or slavery of other humans, but if history goes in a sad way in the next ~100 years, there’s a decent chance the other kind of human (the ones who quite like the long-term effects of the genocide and/or enslavement of other sapient beings) will eventually get their way and genocide a bunch of other humans.
If all of modern morality is a local optimum that is probably not the global optimum, then you might look out at the larger world and try to figure out what naturally occurs when the powerful do as they will, and the weak cope as they can...
Once billionaires like Putin and Xi and Trump and so on don’t need human employees any more, it seems plausible they could aim for a global Earth population of maybe 20,000 humans, plus lots and lots of robot slaves?
It seems quite beautiful and nice to be here, now, with so many people having so many dreams, and so many of us caring about caring about other sapient beings… but unless we purposefully act to retain this moral shape, in ourselves and in our digital and human progeny, we (and they) will probably fall out of this shape in the long run.
And that would be sad. For quite a few philosophic reasons, and also for over 7 billion human reasons.
And personally, I think the only way to “keep the party going” even for a few more centuries or millennia is to become extremely wealthy.
I think we should be mining asteroids, and building fusion plants, and building new continents out of ice, and terraforming Venus and Mars, and I think we should build digital people who know how precious and rare humane values are, so they can enjoy the party with us and keep it going for longer than we could plausibly hope to (since we tend to be pretty terrible at governing ourselves).
But we shouldn’t believe good outcomes are inevitable or even likely, because they aren’t. If something slightly smarter than us with a feasible doubling time of weeks instead of decades arrives, we could be the next frogs.
Slavery is still legal in the US as punishment for a crime, after all. And the CCP has its Uyghur gulags. And my understanding is that Darfur is headed for famine again?