Absent Transhumanism and Transformative Technologies, Which Utopia is Left?
Assume for the time being that it will forever remain beyond the scope of science to change Human Nature. AGI is also impossible, as are Nanotech, BioImmortality, and the like.
Douglas Adams’s mice have finished their human experiment, giving you, personally, the job of redesigning Earth, and especially human society, according to your wildest utopian dreams, but you can’t change the unchangeables above.
You can play with architecture, engineering, gender ratio, clothing, money, science grants, governments, feeding rituals, family constitution, the constitution itself, education, etc… Just don’t forget that if you slide something too far from what our evolved brains were designed to accept, it may slide back, or instability and catastrophe may ensue.
Finally, if you are not the kind of utilitarian who assigns exactly the same importance to your desires as to those of others, I want you to create this Utopia for yourself and your values, not for everyone.
The point of this exercise is: the vast majority of folk I know who are not related to this community, when asked about an ideal world, will not change human nature, or animal suffering, or things like that; they’ll think about changing whatever the newspaper editors have been writing about for the last few weeks. I am wondering if there is symmetry here, and whether folks from this community spend just as little time thinking about the kinds of change which don’t rely on transformative technologies. It is just an intuition pump, a gedankenexperiment if you will. Force your brain to face this counterfactual reality, and make the best world you can given those constraints. Maybe, if sufficiently many post here, the results might clarify something about CEV, or the sociology of LessWrongers...
Leave stuff like ratios of reproductive sex alone; I don’t think this is a direct obstacle to something people’d recognize as a utopia.
Divide the Earth up by ecoregions. Extensive surveys of natural resources, landforms, biomes, economically-significant species, ecosystem-provided services, existing human use and infrastructure, so on. “Urban” counts as a biome here.
Priority technologies to research, develop and deploy as universally-accessible things:
-Passive or solar-charged night-vision glasses. This is to reduce the need for artificial electric lighting at night; artificial lighting was a real game-changer, but its energy consumption is extreme.
-A reversible, safe, efficient oral contraceptive for folks with danglybits. This is to be paired with widespread distribution of existing birth-control methods.
-Dump tons of money into research for efficient hydrogen-fuelled and electric transport; also kick off a nuclear renaissance with an emphasis on modern design and organizing/management principles without the neglect incentives currently possessed by private industrial NPOs. Expand supplemental renewables production. Every continent gets an elevated high-speed rail network.
Relocalize culture. Start with food. Create new or revived forms of cuisine and food culture that are specific to an area; identify candidate wild foods and likely yields, promote them vigorously, and incorporate them into the diet, initially as supplemental or occasional foods, but with an eye towards transition to primarily-local resources. This model is not even vaguely allergic to agriculture, but it is focused on providing food for local people in a sustainable manner rather than engaging in high-volume agricultural trade. Over-bred domestic animal stocks designed for factory-farming will be phased out. More robust, mixed-trait varietals will be emphasized, as will best practices for handling (some of Temple Grandin’s work is useful to this idea). Wild stock in general will be promoted, though the development of sustainable populations and management programs for harvesting may take a bit. One sort of long-distance agribusiness trade that may be a good idea is crops and livestock from similar biomes, in an effort to maximize productivity. Focus on more microlivestock as well, up to and including bugs. Then take all of that and have each bioregion figure out a pretty good diet that balances nutrient needs and caloric abundance during the critical phases of childhood, design a local cuisine, and go.
Some areas simply won’t be compatible with settled habitation, even after all that. So create some technologies to facilitate a sort of modern nomadism—the modern Inuit would probably be a lot better off if, like their ancestors, they were legally allowed to do mobile living with the kids and, as a consequence, get a diet rich in organ meats from local wildlife. Portable, clean electric generators, high-end durable vehicles of appropriate sorts, long-range communication technologies for areas off the beaten path, ubiquitous first-aid kits and training—emphasis of state services in such areas should be on medical and emergency fast-response. Other important stuff: rural communications and education.
Economics: Automate the hell out of whatever is amenable to having the hell automated out of it and still capable of producing a decent product. We don’t get nanotech, but 3D printing is fair game—subsidize it for startups.
I’m still chewing on to what degree the actual ownership vs management structures look like here, and have been writing this since I woke up. Will come back and poke it more in a bit...
EDIT: S’more thoughts on economics:
I’m envisioning a lot of production being done in a massively-automated fashion enabled by dedicated, highly-trained staff, who work for...well, call them concerns. They’re kinda hard for me to pin down as purely state or private. It seems like a lot of frameworks could fit in here—Jamais Cascio’s “Robonomics” scenario, in which private companies pay a flat income tax and obey regulatory restrictions but no payroll taxes, could probably be slotted in as readily as an idealized “the state owns the means, cooperatives license them” sorta Marxist-like scenario. However you want to construct it, the goal is “productive economy to enable a measure of global trade and undergird a social support system”—basically, I want to see free medical care (in terms of the end-user’s experience of it), a basic income guarantee, and support for other things that may not be immediately profitable in and of themselves but produce significant social value.
I’ve thought in a lot of situations about a two-tiered currency system: essentially, the basic income guarantee, necessities, most manufactured goods and social services are paid for with one kind of currency; bespoke items, luxury goods, non-necessities and so on are paid for with another. Basically representing automated versus direct human labor.
Law enforcement: I don’t see it disappearing given the constraints here, alas. Some remedies off the top of my head include an emphasis on beat cops who patrol their own living areas, and neighborhood police booths like in Japan (remove the social distance); backup in the form of general support for an area, but ultimately answerable to the local side of things. This is coupled with eliminating victimless crimes (drugs and sex work are immediate examples), and a prison system that looks a lot more like Norway’s rehabilitation approach to things.
Ecology: no single unifying strategy here, but I don’t think we have to consume everything; moreover, it’s ecologically and economically beneficial not to. Rewilding is good for some areas and ecosystems, but sometimes we improve on them.
The net picture looks sorta like this, at least in my head:
-People are born into relatively healthy, prosperous communities that have no explicit need to travel, but travel is easy and affordable. Work is not the same thing as employment; the choice between starvation and an exploitative economic arrangement is absent. Basic needs are met. Personal and cultural autonomy are both significant. Most people have enjoyable lives; people who don’t fit where they are have some ability to get out of there and find a niche that suits them better. The economy chugs along pretty steadily most of the time, with few fantastic booms but few real crises. But it’s not stagnant either—this is a terrific climate for science and intellectual inquiry, as free time abounds for many and there is not so much pressure to choose between that and starving to death. People who like it competitive are free to try and get into a concern, but work and employment are not synonyms—lots of people do socially-useful work without being employed. People who just don’t like other people and want to bog off into the wilderness or something and rough it can do that—and if they decide to come back later society is still there for them. There’s still incentive to find something socially-, commercially- or academically-relevant to people because human labor is incentivized separately from the production that drives the core economy; even if you just write poetry, someone might conceivably pay you a few human-dollars now and again for it—human labor currency can sometimes behave like a reputation economy.
Less harm, less disutility, higher average wellbeing, a rich and stimulating life for most people, good health, less basis for direct conflict...
(Oh, I can see issues. What about climate change? Reorganizing ecoregions is bound to upset more than a few applecarts—it’s very well and good to talk about a Northern Forager-styled society in the Great Lakes area, but what happens when winters become warm enough that maple syrup can’t be produced? What kind of economic and social shakedowns result? This definitely needs to be explored in more depth than I can manage right now...)
Your first point makes me realize that gene therapy or something of the sort for cat-level night vision would be really cool.
I’m not very sure that the reason we have industrial concentrated agriculture is lack of the right plants and knowledge of how to make them tasty. Now that I think about it, I’m not sure why agriculture is as spatially concentrated as it is. Knowledge of how to grow particular plants in a particular climate? (in which case you’re right) Infrastructure? Water and terrain?
Wouldn’t it? I think it’s proscribed by the terms of the thought experiment, but if it weren’t I’d include an option to subsidize the creation of a working tapetum lucidum in whoever wants one.
Kind of missing my point, though my point is sorta nested under layers of assumptions in that particular case. Lemme take you through it.
I live in a cold climate with extreme seasonal variation. Winters here are very harsh. But I can buy bananas at the grocery store whenever I want, anytime. (This is for really weird, complex historical and geopolitical reasons that I’m prone to rambling about in their own right, but we’ll ignore that for now.)
And that’s good, if you think about it, because there’s very little fruit here that grows under agricultural conditions, let alone wild ones. Oh, there’s been progress made in cold-climate cultivars of apple, pear and a few other things here, but those are all very recent developments. Forget citrus, forget bananas. If you want your vitamins, you need fruits and veggies (or organ meats, but those are not a common element of the mainstream diet here). However, the climate is wholly unsuited to producing them for most of the year, and we have a short growing season even when it happens.
Under the current system, we mostly grow corn and soy, and sell it to people far away. That’s economically productive, but it takes a lot of land. It also depends on a huge, massive global trade network reliant on just-in-time delivery and a consume, consume, consume mindset in the relevant trade partners and their cultural substrate. That particular thing is what I see as part of the problem that needs solving—not trade itself, not even long-distance trade; those are good things—just the relevant complex of players, relationships and motives and its direct and external effects on much of what I value.
My priorities are roughly as follows:
-Increase resilience (failures of the just-in-time delivery system are disastrous; monocultures are fragile) and sustainability
-Enrich local cultures and make them more fun and interesting, and more distinctive.
-Encourage diverse, robust and complex polycultures with narrower peaks and valleys over just-good-enough monocultures that either boom or bust.
-Make it so that the majority of local inhabitants of a region can have a life worth living where they’re at, regardless of available opportunities to leave and explore or settle someplace else.
That suggests to me that the current approach to food production in my home area is not gonna fly under the system I’m thinking of. The problem is that climate is a serious impediment to alternatives. Not an absolute barrier—there have been a lot of human beings living here since long before the settlers arrived, in conditions of relative abundance even—but still a significant one given the current population. Basically, land use patterns gotta shift, and for that to really work out okay in a place that spends most of the year agriculturally fallow, we need a more varied diet that makes better use of local and seasonal resources and doesn’t share the mainstream culture’s resistance, aversion or neglect toward certain types of food.
What I have in mind is something like permaculture food forests, supplemented with lower-key, locally-focused agriculture and greenhouses, as well as sustainable harvest of wild resources. A bit of farming, a bit of foraging, a bit of high-tech, a bit of permaculture. I envision the results looking a bit like this: http://troutcaviar.blogspot.com/
HERE ARE THE CONSTRAINTS: Assume for the time being that it will forever remain beyond the scope of science to change Human Nature. AGI is also impossible, as are Nanotech, BioImmortality, and the like.
Biological tinkering, so long as it doesn’t make changes in the range of human temperament or huge changes in longevity, would seem to be permitted. I think we’re stuck with the usual amount of status seeking, cruelty, violence, and inertia, though those vary quite a bit from one time and place to another. I don’t know what the best we could get would be just using ordinary cultural methods.
Some increase of lifespan—say to 150 years with a fairly short unhealthy bit at the end—would also seem to be within the challenge.
Your description of more varied and resilient agriculture so that food doesn’t need to be hauled as far seems doable. I’m not sure if it needs much cheaper energy to keep the greenhouses warm and lit or much more expensive energy so that people will have an incentive to develop local food.
I’ve wondered whether good local lives would require a good bit of telecommuting. Rural poverty seems (to my casual knowledge) to be very intractable—that’s part of why poor people move from farms to cities. On the other hand, the problem may be more political than I realize.
I’m definitely not an expert in this sort of sociology, but I have lived in rural areas for a good chunk of my life and I’m familiar with a variety of ways of living under those constraints. The impression I get is that rural poverty is largely due to infrastructure and availability issues, and to lack of economies of scale.
Ten miles outside my hometown there’s still electricity and good roads, and the people making their living there (as opposed to people that prefer rural life but work in town) tend to be fairly affluent ranchers and vintners or their lower-middle-class employees. Twenty miles outside town there’s no central electricity let alone water, the roads are dirt tracks or poorly maintained asphalt and don’t get plowed in the winter, and about the only commercial enterprises worth talking about are forestry and a couple of mines. The few people living permanently under those conditions are truly poor. Not because of lack of marketable skills—I spent one summer staying with a family friend who lived in a one-room log cabin, and he was one of the more gifted mechanics I’ve met—but because lack of infrastructure makes labor-saving measures a lot harder and more expensive, and because low population density makes niches for comparative advantage a lot rarer and shallower.
These problems strike me as technical more than political, and I’m not aware of any candidate technical solutions that’d level them completely. But there are some technical advances that’d mitigate them considerably. Affordable and reliable wind and solar would make labor-saving technology less dependent on the power grid and would enable network connectivity; network connectivity allows some knowledge work to be done rurally, makes education a lot easier, and makes distribution of goods simpler (you still have to ship out production, but payment, marketing, and some support can be done digitally). If we’re really interested in tackling rural poverty as a political issue, any of this could be subsidized, although that doesn’t strike me as a great utilitarian move given the greater efficiency of urbanized settlement.
I’d consider that to be almost definitionally transformative; on the other hand, that moves the window for “transformative” into serious overlap with a lot of contemporary tech, so I suppose it’s a matter of taste.
Oh, they vary amazingly alright. In my ideal case, status-seeking has plenty of safe outlets, but the prevailing cultural norm (insofar as we want any of those to extend to a huge portion of the population) also looks a lot less competitive than is normal in Western culture. I’d like to remove incentives for all but the most banal forms of cruelty (probably no force on Earth can prevent human children from picking on each other, but what does that look like, and how does it play out in terms of long-term social relations between folks?), but there’ll always be someone who tortures squirrels to death for fun or just has a bit of a sadistic streak around other humans. Violence is trickier—my goal is to remove much of the incentive for large-scale violence, and create some ways of mitigating the smaller-scale power of violence to do harm.
I’m not so sure about that—we don’t have any good sense that 150 years is even a plausible lifespan for an unmodified human, and the Gompertz function is hard at work through the centenarian mark. Humans join elephants and a few other mammals in having remarkably long lifespans for our body weight; I suspect we’re pretty near to what is plausible, barring some kind of really weird, exceptional circumstance. (I suppose now, when there are more human beings than ever, we have the best chance yet of someone living to a record-shattering old age, but I’d still be surprised if 150 was achievable...)
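To give a feel for how steep that curve is, here’s a minimal sketch with illustrative numbers of my own choosing (a hazard of roughly 5% per year at 80 that doubles about every eight years; these are placeholder values, not fitted data), showing how a Gompertz-style hazard punishes the odds of reaching 150:

```python
import math

# Illustrative Gompertz parameters: hazard of ~5% per year at age 80,
# doubling roughly every 8 years. Placeholder values, not fitted data.
H80 = 0.05
DOUBLING_YEARS = 8.0
G = math.log(2) / DOUBLING_YEARS

def survival(age_from: float, age_to: float) -> float:
    """P(surviving from age_from to age_to) under an exponentially rising hazard."""
    integrated_hazard = (H80 / G) * (math.exp(G * (age_to - 80)) - math.exp(G * (age_from - 80)))
    return math.exp(-integrated_hazard)

print(survival(80, 110))  # already a long shot
print(survival(80, 150))  # astronomically small under these assumptions
```

Under those assumptions the chance of an unmodified human getting from 80 to 150 is effectively zero; you’d need the hazard curve itself to bend, not just better luck.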
Depends where you are, of course—greenhouses probably can’t be made economical in Nunavut, but they’re already viable in parts of southern Canada.
Rural poverty is complicated—I think it does have political dimensions, but a lot of that comes down to cities as economic engines and rural areas as being primarily harvest-zones when in the past they were more just places most people lived.
The Navajo are probably better off in the long run without uranium mining on their lands, but the existing structure of things makes it needful for rural folks to have income in the same sense that folks in the city do, with many fewer opportunities for it, and less bargaining power. Same holds true in much of Appalachia—many folks there don’t like the coal mines, but need jobs, and get to watch their own communities pay the cost in environmental externalities.
One thing that occurs to me is that for rural-urban interfacing, this two-tiered currency idea (machine vs human labor) suggests that rural folks might wind up doing well—when human labor is a much more important part of the local economy, payment is in human dollars, which are fungible for fancier and more desirable stuff.
Localizing agriculture would be quite expensive in terms of resources, as would phasing out existing engineered varieties. I’d focus on removing current farm subsidies, which tend to overly promote factory-style agribusiness, and plow more resources into preserving existing varieties of crops and livestock. Other than that, though, I don’t think changes are warranted.
Terms of the thought experiment: I get to wave my hands and change society to look like something else, absent transhumanist technological favorites. If it’s expensive, it’s okay in this situation to assume that the investment is made anyway.
Wow, thanks for spending that much time thinking about it! Many ideas there I’d never heard of; as someone who likes to write utopias quite a bit, I enjoyed my reading time.
If I can’t rewire animals brains to stop suffering, screw it. Pave over every forest and jungle and other natural habitat on the Earth. Drive every animal species extinct but us (save pets people want to keep). Find a plant species that can generate oxygen and food without being pollinated by insects. (algae, maybe?) Also, save dolphins and great apes, who I would probably count as people, rather than “pseudo-people,” like other animals, construct luxurious habitats for them. Find some way to limit their population besides Malthusian scarcity (sterilization of most adults, probably).
Get rid of religions, try and make everyone as wealthy as possible, try and invent drugs that don’t screw up your brain that much but still feel awesome, and make sure they are (obviously) legal and widely available. Eliminate every communicable disease, including STDs. Invent some kind of male-usable birth control. Eliminate nudity and sex taboos, hold enormous public orgies every day, and convince Eliezer Yudkowsky that the singularity is not gonna happen and he should spend all day writing HPMOR.
It still amazes me how many people read Brave New World and think Huxley’s dystopia sounds like Utopia.
Haven’t read it, actually.
Those unaware of the past are doomed to repeat its errors (a traditional justification for exhaustively studying the history of philosophy before being allowed to actually do philosophy oneself).
Read it, and as you go, you can tick off the whole of your shopping list. One might give only half a point for “wealthy”, as only the Alphas get to be wealthy, but everyone’s happy with their lot. They’re made that way in artificial wombs.
But Brave New World is not history. It’s fiction.
I read the plot summary on Wikipedia. It looks like what is wrong with Brave New World is not present in my utopia. I don’t want to dumb people down or physically weaken them as they develop, I don’t want to brainwash them. I don’t want everyone believing “ending is better than mending” or “more stitches less riches.” I don’t want to get rid of families, and have the very concept be considered “pornographic.” I don’t want spending time alone to be frowned upon. I don’t want people to die at age 60, having been conditioned that it’s not a bad thing because they have no family and no one will mourn them. I don’t want everyone locked into a single job by brainwashing for their entire lives. I don’t want Shakespeare (or any literature) banned. I also don’t want a caste system.
Also, soma doesn’t sound like the kind of drug I was imagining, it sounds from the plot summary like it takes away your emotions and is used to quell riots. I’m thinking more like “crack, minus addiction and brain damage, and imprecision in dosing. (maybe some other downsides I missed, because I am not that knowledgeable about drugs)”
Are orgies, drugs (of the right hypothetical variety), and mass atheism bad in some hidden, profound way that I would understand if I read Brave New World? Brave New World just associates them with bad stuff. Huxley was not doing a careful analysis of what they would be like. He was trying to write an interesting story.
This reminds me of a livejournal entry Yvain wrote about sci fi dystopias. Brave New World is very much a “rigged thought experiment.”
That said, I don’t think I’d be comfortable with elimination of all nudity and sex taboos coupled with massive public orgies, because I’d rather be able to exercise more discrimination with regards to whom I have sexual contact with without being regarded as weird and prudish. If we’re not rewriting human nature here, then anyone not participating in all the public sex is probably going to be stigmatized.
I also definitely wouldn’t be happy having all the natural ecosystems developed and most of the biosphere driven to extinction. Some people don’t give a crap about nature, and I can intellectually accept that there’s no reason other people have to actually like it, but it still makes me uncomfortable even if they’re not actually acting to destroy it, much like you’d probably feel discomforted to hear someone argue that there’s no reason for any more works of fiction to be made, ever, and all the people engaged in creating it and the resources dedicated to storing it should be redirected to more useful things.
I liked the link.
I would probably be uncomfortable with nudity and public sex too (at first), but it’s not really a part of my personality that I like. If I had the chance, I would basically just try to get used to it. I don’t want to force people who wouldn’t want to adapt to adapt, but I would rather that future people not be limited like me than that things stay nice for people who were raised in the bad old days, like you and me.
I understand your reaction to the thought of killing Mother Nature. I would do it with some regret. I agree that she is beautiful, and I would miss her from an aesthetic point of view. But I think it’s worth it.
The idea of wiping out other species to prevent their suffering strikes me as pretty bizarre. It’s the same sort of extension of Negative Utilitarianism that leads to the suggestion that we should do the same to humanity, and I don’t think that’s a very practical approach to utility maximizing.
In any case, I doubt most of the natural world suffers nearly as much as the philosophers in that link suggest, partly because I suspect a lack of abstract awareness and other neurological faculties limits the ways in which most animals can suffer, and partly due to the same hedonic treadmill tendencies that exist in humans.
It seems like we have two disagreements. The first is whether there are living conditions to which death is preferable, and the second is over how bad the conditions wild animals live in are. About the first:
I’m not a negative utilitarian, I don’t think suffering should lexically override happiness. I just think the suffering outweighs the happiness here because there is more of it. I definitely don’t think humanity should be wiped out too. Humanity wouldn’t otherwise be living in conditions worse than nonexistence, and has a good chance of living in better conditions in the future. Humanity is also the only potential manifestation of good in the universe, as far as we know.
If you have a problem with wanting to kill someone to put them out of their misery in general, what if you were going to be tortured forever? Wouldn’t you want to die then? If staying alive seems like it should lexically override pain when you look at a single individual, think about all the future individuals who you probably don’t think have any special claim to life that comes with already existing, whose suffering you would be preventing by killing the present generation. If the species is expected to continue long enough, barring time-discounting, they should vastly outweigh the cost of killing the current. And no matter how long the species is going to continue, it’s going to die some time, so you’re really only moving an event forward in time, not introducing it from nowhere.
About the second:
I see no reason to expect that if animals have reduced awareness and other neurological faculties that reduce their ability to suffer, this wouldn’t also limit their ability to experience positive things. Even if, with what they’re lacking, they suffer only 1/10 as much as humans, the vast numbers of animals in the world seem to outweigh that.
The possibility of a hedonic treadmill in animals is something to keep in mind, but I suspect that it is not as evolutionarily helpful in short lived animals that aren’t likely to live for many years after a major negative event. The Wikipedia article said it took weeks in humans for the treadmill to kick in and make it so “positive emotions actually outweighed their negative ones.”
There is an obvious evolutionary force that would push animals like humans that can live for decades to mentally recover from terrible circumstances, but there is nothing for all of the animals that are hit by something they have a low chance of surviving. If a gazelle breaks a bone in its leg, it is basically dead, and there is no selective pressure to keep its mind in operable condition.
And most animals don’t live the kind of lives that shape the genes of their species. Most animals die before reproducing. I expect that the genes of most animals are tailored to benefit the lucky ones who aren’t infected by some parasite, and who can find enough food.
I agree that there are circumstances to which death is preferable, although I’ve argued a number of times on this site that people who’re making that decision with respect to themselves are usually in a bad position to do so.
I strongly disagree that the conditions wild animals live in are that bad.
There’s a very strong selective pressure for animals to be adapted to their own specific living circumstances. Animals can certainly become upset or depressed when removed from circumstances they’re comfortable with, witness the preponderance of zoo animals whose habitats aren’t made sufficiently reflective of what the animals would have to live with in the wild. They often become visibly depressed or neurotic, despite living much safer, physically healthier, and longer lives.
As for the hedonic treadmill, if a human is hit by something they have a low chance of surviving, they’re probably not going to survive. That’s tautological. But that doesn’t mean that practically any injury an animal receives is probably going to result in its death. It’s not as if humans have an evolutionary pressure to be able to bounce back from ailments that other animals simply don’t have.
Try watching some amputee dogs. See if they seem so miserable.
I just watched some youtube videos about amputee dogs, including this one: http://www.youtube.com/watch?v=iJxEIXRz_Kk
This was the first one I found that had any information about the dog’s reaction after the amputation, and much later. It says the dog took 4 weeks to “start acting like himself,” and still whined at night, 6 weeks later. This seems about the same timescale as humans adapting to disabilities, so you’re right about hedonic treadmills in dogs. Probably a lot of other animals have them too. There’s still all the animals that don’t have time in the rest of their lives to get used to what happens to them. But you have made me up my estimate of how good the average animal life is.
...why?
Because I think the average wild animal life is worse than nonexistence, and there are quite a lot of them enduring such lives.
How would I go about (in principle) verifying whether you are correct?
You could (in principle) verify that the average animal life was Mestroyer::worse than nonexistence by spying on the operation of the brain of every animal on Earth and tallying, for each mind, the negative side:
-how much it was put into and kept in states that caused it to try to get out of those states, weighted by how high a priority getting out of those states gets;
-how often the things it tried to prevent from happening to itself happened anyway, weighted by how hard it tried, or would try, if there were any course of action available to it to avoid them;
-how much time it spent thinking about its damaged body, how much it is changed by signals indicating damage coming from its body, and how intense those signals are compared to the minimum intensity that triggers the mind to try to avoid the stimulus.
Multiply each of those amounts by a weighting constant I am not exactly sure of (what units would I use?) and add them together. Then subtract that whole thing from the positive side:
-how much each mind is put into and kept in states that cause it to try to stay in those states, or where not being in those states causes it to try to get into those states, weighted by how hard it tries;
-how much the things it tried hard to make happen to itself happened, plus the things it did not plan to have happen to itself but would have sought if it understood how to get them, weighted by how much it would sacrifice to get those things;
-how much it was changed by signals that had the effect of making it seek the stimulus more, weighted by how intense those signals are compared to the minimum intensity that triggers the mind to seek the stimulus.
The positive side should also be multiplied by some weighting constants (I bet they are roughly the same as the amount that the average human cares about these things happening to human minds; that’s the best I can tie them down). Then divide by the number of minds you summed up stuff from. That number will be negative iff I’m right.
The second part of the claim is much simpler: I am correct about it iff there are at least 10^11 minds on Earth capable of all of those things, with a negative average of the per-individual quantity I described (ignoring quantum mechanics).
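In case the shape of the computation is clearer as code, here is a minimal sketch of that tally. Every field name and weight is a hypothetical placeholder (none of these quantities are measurable with current tools); it just restates the bookkeeping:

```python
from dataclasses import dataclass

@dataclass
class MindRecord:
    """Hypothetical per-mind measurements; none of these are obtainable today."""
    aversive_state_time: float    # priority-weighted time in states the mind tried to exit
    thwarted_avoidance: float     # effort-weighted things it tried to prevent that happened anyway
    damage_signal_load: float     # damage-signal intensity relative to the avoidance threshold
    appetitive_state_time: float  # effort-weighted time in states it tried to stay in or reach
    fulfilled_pursuits: float     # sacrifice-weighted things it sought (or would have) that happened
    reward_signal_load: float     # seek-more signal intensity relative to the seeking threshold

# Placeholder weighting constants; as said above, these aren't pinned down.
NEG_WEIGHTS = (1.0, 1.0, 1.0)
POS_WEIGHTS = (1.0, 1.0, 1.0)

def net_valence(m: MindRecord) -> float:
    negative = sum(w * x for w, x in zip(
        NEG_WEIGHTS, (m.aversive_state_time, m.thwarted_avoidance, m.damage_signal_load)))
    positive = sum(w * x for w, x in zip(
        POS_WEIGHTS, (m.appetitive_state_time, m.fulfilled_pursuits, m.reward_signal_load)))
    return positive - negative

def claim_holds(minds: list[MindRecord]) -> bool:
    """First part: mean net valence across all minds is negative.
    Second part: there are enough such minds (>= 10**11)."""
    average = sum(net_valence(m) for m in minds) / len(minds)
    return average < 0 and len(minds) >= 10**11
```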
*Shudders* Given my sense that the externalities of destroying all animal life on Earth apart from humans and a couple pets include destroying those humans and their pets, I think you might qualify as some sort of inverted utility monster.
Upvoted because of the frank and detailed reduction of pleasure, pain, and preferences in general.
There are reversible vasectomies out there; one especially promising one was on the verge of becoming available. Also, an Israeli scientist made the Male Pill, but that will still take a few years before approval.
Ending animal suffering is more important to you than understanding the only known instance of life and evolution in the universe?
Or would you consider holding off at least until the global genome project is complete?
Yes.
It depends how long I had to hold off. Realistically, I think capturing a few individuals to study from each species could probably be done in the course of wiping most out, so even though both operations would take a while, the gene-cataloging would not significantly slow down the killing (pipelining!), so I would do it.
This one.
Depending on the extent of my god mode, I’d either reorganize the planet into a planetary transportation government and regional city-states—the planetary transportation government runs an intercontinental rail system that connects every city-state and enforces with overwhelming military might (provided by feudal grants from city states) only one right, that of emigration (not immigration; city states can refuse to permit people to stay within their borders, they’re simply forbidden from preventing people from leaving).
Or, if I’m playing full god mode, I’d dismantle the local planets, turn them into a reorientable Dyson sphere around the sun, and use a combination of solar sails and selective reflection to turn our entire solar system into a fusion-powered galactic spaceship, and cruise the galaxy looking for something more interesting. (By absorbing solar emissions on one hemisphere of the sun, and on the other hemisphere reflecting half back into the sun and letting half escape, the energy of the sun can be used to accelerate it, albeit very slowly. If this still sounds ridiculous, imagine shoving the sun into a rocket; that’s kind of what would be happening, only with ridiculously low thrust.)
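For scale, a rough photon-momentum estimate of just how low that thrust is. I’m assuming the net asymmetry works out to something on the order of L/(2c); the real geometry would change the prefactor, but not the conclusion:

```python
# Rough back-of-envelope for the "sun as a rocket" idea, assuming the
# asymmetric absorb/reflect scheme nets a thrust on the order of L/(2c).
# The prefactor is a guess; the point is the order of magnitude.
L_SUN = 3.8e26   # solar luminosity, watts
C = 3.0e8        # speed of light, m/s
M_SUN = 2.0e30   # solar mass, kg (dominates the mass of the sphere and planets)
YEAR = 3.15e7    # seconds per year

thrust = L_SUN / (2 * C)              # ~6e17 newtons
accel = thrust / M_SUN                # ~3e-13 m/s^2
delta_v_km_s = accel * 1e9 * YEAR / 1000
print(f"thrust ~ {thrust:.1e} N, accel ~ {accel:.1e} m/s^2")
print(f"delta-v over a billion years ~ {delta_v_km_s:.0f} km/s")
```

Call it ten kilometers per second of delta-v per billion years under those assumptions, which is why “ridiculously low thrust” is the operative phrase.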
Your initial request doesn’t exactly limit the scope of powers in any foreseeable way, except to limit the means.
I think this would fall under the heading of “transformative technologies.” Anything sufficient to bring about a total post-scarcity society is probably outside the scope of the question.
Considering that you cannot change human nature, do you believe that the central government would continue to limit itself to this one power? With overwhelming military power, it could increase its power unchecked. Are the feudal grants supposed to limit this in some way, or are there other ideas you had in mind for preventing this?
The feudal grants are intended to prevent the central government from overstepping its boundaries, yes. I expect the most likely failure mode of this government wouldn’t be the central government but the regional governments, either because they decide they want the central government to have more power, or because they decide they want a stronger central government that provides more guarantees.
I think the most obvious failure mode is that the right of emigration will turn out to be impossible to enforce.
Sounds like the logical extension of libertarian ideas that accept the concept of a social contract. I think some sort of externality management needs to exist as well.
The planetary transportation government I find really intriguing for some reason. First I have ever heard of anything like it. Is it based off of something?
Not to my knowledge, although it’s possible I owe the idea to something I have since forgotten.
I believe it evolved in political arguments I’ve had, from noticing that restricted emigration is one of the cornerstones of tyrannies, however. The railway was added at some point as a means of ensuring even landlocked nations with no immigration-friendly neighbors would have conduits out.
It reminds me of descriptions in Incandescence, by Greg Egan. Have you read it?
Nope. Should I?
Maybe?
Greg Egan writes the sort of hard SF where the characters are strictly secondary, and at least half the fun of reading them is trying to figure out the local laws of physics before the characters do. His newer works are generally written better, but they’re all good.
I’d recommend starting with Incandescence, then. His newest story (the Orthogonal trilogy) is noticeably better, but it’s also only two-thirds done... so read that next.
Lots of his stories are online, including some free short stories.
Well, within the constraints of human nature, some societies seem to have much higher levels of trust than others; in some communities you can leave your doors unlocked while you leave your home for a vacation, in others people take practically feudal levels of fortification to feel safe from their neighbors. Crank up the levels of trust and transparency and bring as much of the world as is sustainable to a first world standard of living and you may have the best we could do with present day technology.
Lack of transformative technologies pretty much precludes bringing the whole world to an industrial standard of living in the long term (or even keeping the current first world population living at that standard indefinitely,) but we might be able to stay within sustainable levels if in general our goods were designed to be as enduring as possible and geared towards reuse rather than recycling. Goods production would be lower, so the economy would have to be much more service oriented. This isn’t the sort of thing I generally bother with even back of the envelope feasibility calculations for though, since even if it turned out to be totally feasible in principle to build a sustainable global society at first world standards with present day technology, getting society to adopt the necessary changes would be practically impossible, so transformative technologies are a safer bet.
Also, I’d expect both gradual and dramatic improvement in technology. It took until the middle ages to figure out that people needed left and right shoes rather than an identical shoe for each foot. It took until very recently to figure out how to make reasonably cheap computers and that they should be linked through search engines.
There is no reason to think that we’re close to inventing all the possible cool stuff.
What is the evidence that we have any idea at all about the contents of the social / normative limits our brains are able to accept?
I would say that since reading history appears to help in reducing social and organizational mistakes, this is a good indicator that we are capable of learning these kinds of things.
Things like the Stanford Prison Experiment also seem to give us good data as to what kinds of systems will make people’s brains go poof, and more research can probably help us pinpoint exactly the empirical clusters of the failure modes (and more importantly, by reflection, the empirical clusters of the winning strategies).
There is also a cool book by Pinker, The Better Angels of Our Nature, which addresses that. And papers about forced marriages, and other stuff in Evo Psych.
When I first started thinking about politics, what struck me most is that idealists all had the same goal. People living in tight-knit communities, though free to leave; spontaneously sharing and cooperating in everyday life; working lightly to meet their needs, then devoting themselves to fulfilling, usually collective projects in their copious leisure time; swords hammered into plows, yadda yadda. The obvious flaw is that anyone slightly more selfish immediately ruins the system. There’s also coordination problems, where you can collect five of your friends to go build a house but never a large enough group to run a hospital.
The first step is to solve scarcity. We get behavior a lot like that Norman Rockwell utopia on various Internet communities, where information is copied at near-zero cost and time and skill are the only limited resources.
Well, that, and status. Here it’s harder; the small communities make egalitarian pressure possible, but that also creates conformity pressure which is not cool. Maybe take another leaf from the Internet and encourage people to self-sort into small ponds where they can be big fish.
Are you … sure that’s what idealists want?
Not literally all idealists. Like, transhumanists and hardcore libertarians and people who like war for its own sake exist. But yes, all idealists I met, read, or heard of in my first phase of political reasoning (like, from four to twelve, so nothing more advanced than the Communist Manifesto) wanted something like this. The commies and the socialists and the very optimistic social-democrats and the hippies and the extreme right-wing racists and the anarchists and the prairie muffins and the politically naive cultists and the wheel reinventors.
Zing!
See, those are the ones I was thinking of.
… and those aren’t. Thank you.
I’m not sure why you think transhumanists and hardcore libertarians would be part of the exception you’re making. I’m kind of both of those things, yet I think the idealist utopia you describe, modulo a few nitpicks, is roughly what I’m after as well.
Transhumanists usually lose the small communities, hardcore libertarians lose or tone way down the collectivism.
Libertarians don’t have any problem with voluntary, opt-in collectivism… I mean that’s kind of what corporations are.
Fair point, but that thing is opt-out. You’re born and raised (if people are still being produced) in one of the communities, are socially expected to share, and if you decide you don’t like it you have to leave. I suppose we can still make Proudhon cry by having shared resources be truly collectively owned, so that if you leave you’re given portable possessions equal to the value of your share of community resources like plots of lands and machines.
Interesting. If many idealists share the same goal, what exactly is stopping them from doing it? I am asking seriously. Is it just not being automatically strategic, is there a conflict between professed far-mode ideals and everyday near-mode wants, or which other problems are the greatest and most frequent?
I am not thinking about outright utopia, just about something that could improve the quality of my life by shifting it in the direction you described—finding people with the same values and similar hobbies as me, moving to live near each other, sharing and cooperating in everyday life… all that within the context of the existing society.
Technically, it would be relatively easy to do. Moving from one place to another is an inconvenience, but it would pay off in the long term. The real problem is finding the right people—a group of people with compatible values and goals, sympathetic to each other, and trusting each other enough to engage in such a long-term project. For example, I am rather picky about people; I would prefer to live with people of near-LW levels of rationality. (On the other hand I would prefer to stay in my country, as opposed to moving near SI).
I don’t know how much this is just my personal problem (e.g. just a lack of social skills to find the right people), or how much this is the weak point of most utopias—you could imagine the utopia with the right group of people, but that group does not exist in real life. It feels like it should exist, but that is a conjunction fallacy. You need people with similar goals and similar values and wanting to join the experiment and wanting it approximately at the same time and you need them to like and trust each other (for N people that means N×(N-1)/2 good relations and trust); and all this together is just too unlikely, especially for values of N equal to or greater than 10.
Or, if this does not seem like the greatest obstacle for you, then what (besides akrasia) is in your opinion stopping most people from better approximating their utopias?
Individuals can’t really do that, unless they’re willing to pay the cost of being cut off from the world. Hippie communes have few luxuries. Plus, these groups aren’t very stable, because of that pesky interpersonal conflict and freedom to leave.
What you can do is optimize your group of friends, which basically everyone already does. In ancient times people couldn’t guess where to move to find good friends, except if a place got a reputation as a hotspot for a particular group which was unpopular elsewhere. (You’re a gay small-town boy? Go to LA.) Now that the Internet exists, people can make friends and then move to be near them, though the improvement is rarely worth the cost of moving.
Globally speaking:
Magically creating this utopia would fail, because of conflicts over scarce resources. Taking scarcity out of the equation, you could still get huge fights and very unstable communities.
Getting from here to this utopia has been tried. Usually there’s some more or less dubious theory that points to a group of bad guys and jumps to the conclusion that removing the bad guys will create utopia. Various ideologies are distinguished by choice of bad guys and method of removal. (People currently in power are a popular choice, but you can always default to Jews.) While this method has produced poor results, I would be hard-pressed to think of a better one.
Moderate politicians could attempt to move incrementally along their preferred path to utopia. The optimistic view is that they wildly disagree on how to create utopia (e.g. will redistribution or trickle-down economics best solve hunger?) and are thus working at cross-purposes.
I kind of wish we had signatures here so I could put this in mine.
This is what I tried to say. The scarcity is a problem in itself, but it is not the true reason why we don’t have utopias.
Yes, this model is very popular, because it allows one to work altruistically for the benefit of humanity, while getting a lot of power and a freedom to kill people they dislike as a bonus.
But there are also other models. Having enough money, one could just build a new place and only let the good guys in. No need for killing, you just need one enthusiastic millionaire to sponsor the project. If the new place is small and not isolated from the rest of the civilization, the people can still participate in life as usual; they would just have their community as a bonus. Actually, people can cooperate and share their property even without a big investment, if they live reasonably close to each other. They just need to define that X, Y and Z are members of the community; they all share property with each other, but don’t share with the outsiders. That’s it.
Again, I think the true reason why this does not happen more often, are the interpersonal conflicts. People living in the utopia usually realize that they don’t like it… although they would like it, if they could replace their real human neighbors with the preferred kind of imaginary people.
Sometimes the utopia proponents admit that their utopia would require “education” of people. But to me it feels (maybe I am too ungenerous here) like they consider themselves ready for the utopia, and it is just the unenlightened masses who need some brainwashing. I also see a problem here: without some “pilot project”, how will we test whether the proposed education works for the utopia or not?
I would like to see more people who have their ideas of utopia, but who admit that they could be wrong and that their ideas need to be tested experimentally first. Then we could have a Scientific Utopiology, which would be a huge improvement from the usual “mass murders first, realize the obvious (for unbelievers) problems later”.
I’ve had many ideas for possible utopias, and read about many more, but I seem to always stumble on the same problems (or if I don’t, someone else usually points out (correctly) that my solution for one of these is flawed):
Expertise verification. How do I know that you know what the hell you’re talking about? (Having members of the society all trained in hardcore Bayesian rationality would help, but obtaining evidence that another rationalist has the evidence that they seem to have or claim to have is still costly, and arguing to an Aumann agreement can waste tons of time depending on the situation.)
Scarcity of resources needs to be solved somehow, i.e. Sci-fi technology usually necessary.
Advanced cross-domain logistics involving math beyond the ken of most mortals. No, really. Counter-weighting evaluations of city design, efficient transport, short transit times, aesthetics, tribal / social proximity, local diversification, interpersonal relationships, all thrown into a big mess of predictive algorithms that are somehow supposed to take into account possible future desires of unknown people and unknown events.
Interpersonal relationships. Someone is going to seriously want to kill someone else eventually, period. None of my ideas nor the ideas I’ve seen so far even hint at a realistic solution. A good utopia should also encourage and help forging good social groups and meeting awesome people and making really great friends and so on… nope, still no solution there either. All my attempts at a solution for that last run into the “Wow, too hard maths” problems of logistics in the above point.
Memetic preference effects. When a certain project is really cool to work on, everyone wants to be working on that project. Just basic math and social science is more than enough to understand that making sure that enough people are working hard enough on the really important problems that need solving is hard, especially if you can’t just throw money at a few of them and tell them to shut up and work. To a lesser extent, people also simply usually just want the easier or more impressive tasks and jobs, so the important but non-mentally-available or “icky” stuff (like, say, sanitation technology AKA toilets and sewers) gets left very far behind.
Intentional communities do exist in the real world. It is not inconceivable that LessWrong meetup communities will eventually evolve along such lines.
If changing human nature or building AGI is impossible, we could still explore how close we can get to this. Research the most efficient form of education and group cooperation. Research the most powerful forms of non-general artificial intelligence, design better expert systems, etc. These things could still be enough meta to influence many other aspects of human life.
Instead of nanotech, we could improve other forms of automated technology. Even without the ability to manipulate atoms, building things automatically from very small (but greater than nano-) pieces could be awesome. Instead of bio-immortality, we could still invent better medicine, construct artificial limbs, extend life.
I did not want to ruin your thought experiment, just to say that in given areas less than perfect could still be great. Now let’s assume that we already did what we could there, and that there is no artificial intelligence smart enough to give us advice about the strategy the humankind should choose. What is next?
That would mean a kind of intelligence explosion quite soon. For the sake of this discussion, we should freeze any kind of techno progress, I think.
It seems to me that this “Research the most efficient form of education and group cooperation. Research the most powerful forms of non-general artificial intelligence, design better expert systems, etc. These things could still be enough meta to influence many other aspects of human life.” is what Leverage Research is mostly about.
I vaguely object to the common practice of soliciting responses, and implying that the results will/may be meaningful, without simultaneously precommitting to a particular mapping of raw results to inferred meaning. (The precommitment can be done while keeping the mapping secret, by using a hash algorithm.)
Okay, I’ll bite:
0) I’m not sure it’s best that anyone exist at all, but for the sake of a post let’s assume they should.
1) Assuming the resources (which seem to be implied in the ability to change the gender ratio in the first place) nix men entirely. I’m probably more skeptical than the average LWer that traditionally male pathologies are inherent to my sex, but there’s a decent chance I’m wrong about that, so there’s that. More importantly this gives public institutions veto power over the creation of new people in a way that isn’t bodily intrusive.
2) People own their bodies; the state owns the means of production, which are rented out by cooperatives. Investment decisions are guided by prediction markets. Public goods are provided by the state on a universalistic basis. (This is assuming we still have to deal with scarcity.)
3) Raising children is a compensated service to the state, in principle not separate from other forms of market production. (More tightly regulated, obviously.) Reiterating my caveat that I’m more of a constructionist than the typical LWer, eugenic opportunities present themselves, as do opportunities for speeding up the destruction of ascribed ethnic status.
4) Legislators chosen by sortition, either from the general public or some sort of ascetic public service corps.
5) If we have the resources, Destroy Nature (but keep public gardens large enough to hike in.) If we can’t get rid of most other animal life, at least get rid of factory animal farming.
I don’t think this would be fully stable (I don’t think I could endorse anything that would) and I don’t doubt it would horrify plenty of people, but if I could press a button this is what I’d press it for.
Mice.
We have probably barely tapped human potential in terms of education, dissemination of best practices, physical training, mental training, rationality training, diet, and drugs. These could make huge inroads into mental illness, motivation, learning, and thereby mental health and productivity.
I love it when someone asks the community for creative ideas. They’re always interesting.
Without the possibility of technological advancement, I don’t really feel that utopia is a worthwhile goal. Every version just feels like stagnation, which bothers me. I don’t see much point in life if everything’s all planned out.
And any plan we could propose would eventually fall out of fashion unless measures were taken to prevent societal change. Some configurations are preferable to others, sure, but in the end the deal is radical, unprecedented change, cyclical rise and fall of civilizations, or stasis. The last is boring, the middle is boring but commonly accepted, and the first is scary. Take out the scary and you have boring and more boring.
If we’re going for stasis, I vote for some kind of enforced anarchy or nuking the world. Those are at least somewhat interesting.
He didn’t specify stasis.
“Utopia” is in the title and specifies stasis. Any plan specifies stasis on some level because you can’t plan unending change.
If the configuration of society is isolated as the only element of importance, you have full control over that configuration, and you need to give an acceptable solution for it, then either you get stasis in the configuration of society or there’s no point.
You’re bringing a lot of assumptions to the table. Basically every statement you made just now was not included in the OP.
You can design a system to be flexible and resilient in the face of inevitable change, without actually planning the change itself. Will that work? Well, probably not, just as most static utopias probably won’t work.
You can also plan/allow for change that isn’t about the configuration of society (nowhere was it implied at all that the configuration of society was the only variable you cared about; the only stipulation was that manipulation of biology, nanotech, etc., were the only variables you couldn’t mess with).
Well, that’s silly. If I can manipulate everything except the human condition and technology, then I can create such an extreme surplus of resources that all the problems not directly related to technology or the configuration of society just disappear. If you take away tech advancement, that just leaves the configuration of society. Plus, I thought he was pretty clear at first that we were supposed to be looking for sociological solutions to the problem.
And you just agreed that any plan that we come up with will fade with time and become irrelevant. I still say that either we advance in some way, or considering the system on a societal level is not worth it.
Many of the utopias from the golden age of science fiction (long before nanotech) had recognisable humans, who were not immortal and whose robots were good butlers, at best. While for the sake of plot they generally had faster than light travel, that isn’t actually a requirement for the human species to spread out to the stars.
If you’ll grant sub-light interstellar travel, then all these become possible. Let groups bid on planets, set a prerequisite that the group survive in a biosphere for 3 generations as a test to see whether their proposed society is sufficiently stable to avoid shooting each other, then ship them off.
That then transforms the question into what sort of meta-Utopian society would support investing the time and effort required to mine the asteroids, set up massive solar powered anti-matter production factories, and keep seeding the universe generation after generation?
Given that said technologies would also offer plenty of ways for individuals or small groups to blow up the Earth, it would need to be either a very tightly controlled one, or a very sane one. Or possibly both. That means heavy investment into improving education and parenting (and possibly designer babies), and getting people used to a lack of privacy and the goldfish-bowl surveillance society.
Can that be done without changing human nature beyond existing parameters? Well, the boundaries of the current parameters are actually pretty wide: they include sealed nuclear subs, monastic communities, military academies with no privacy, and life-logging 24/7 facebook-blogging twitter-addicted web-cam-dorm residents.
*Reads limitations closely*
So super-MRIs and super-computers are fair game? That is fairly mundane tech. Then it only takes some fairly mundane tinkering with micro-neuro-anatomy to make uploads.
Otherwise...
Make humans more able to do stuff. More true to what humans want, for better or for worse. This ranges from more convenient biology to more convenient epistemological and motivational psychology.
Everybody is a Bayesian genius with a perfect body who lives to be 200+ years old.
1. Take apart the earth, use it to build a fleet of smaller habitats that make use of the materials to generate maximum sustainable habitat space.
2. Distribute these habitats into combinations of nature preserve biomes, agricultural systems and urban centers.
3. All habitats should be given sufficient technological infrastructure to power all their life support with some extra. All habitats should have internet capabilities and backup stores of all human knowledge.
4. Redistribute the population of earth into their preferred habitat with the kinds of people they would prefer to be around; poll them if needed. This includes isolating ideologies that cannot abide any other ideologies away from other habitats.
5. Dedicate a small portion of habitats to ferry services that take regular routes between other habitats. Arrange routes so that individuals with ideological incompatibility are separated from those they cannot tolerate by longer routes.
6. Sit back and let the chips fall where they may. Live in a habitat with people of like mind, eventually die but know that humanity has been given a head start to interstellar colonization and the biospheres of earth have been preserved as redundantly as possible without dismantling any worlds beyond earth.
http://lesswrong.com/lw/5dl/is_kiryas_joel_an_unhappy_place/
A possibility worth investigating.
Of course, the idea that there could ever be one utopia is an absurdity. People have different values and preferences and thrive under different circumstances. It is clear that Less Wrongers do not understand this, and therefore they should not be in the Utopia-business in the first place.
Which is stated in the OP and is kind of the point of the post. What would be your utopia?
Does everyone just kill each other because our values are utterly nonreconcilable? Or can we mostly agree that we can do better than that?
If you can’t do “perfect”, go for “best”.
I strongly agree with this part:
but strongly disagree with this part:
EY’s CEV concept seems to explicitly take this into account. That’s not to say I don’t have my own objections to it, but I don’t see this as one of them.
People’s nature will evolve anyway, as everything else does. Having many billions of people alive means our genome acquires new bits every day. A fast biological evolution from now on would just happen anyway.
During this time, I see no utopias as possible. At least, none that would last for long.
Human life, as life on Earth, is boring/pointless without the Singularity, from my point of view.
Sometimes I ask my postmodernist friend, who rejects and is horrified by the techno rapture of any kind, what HIS utopia is.
Better care for nature, a lot of (Slovenian) culture … then I am horrified!
Yes. No SAI, no nanotech, no Galaxy transformation … makes me sick.
Can you expand on this? I’m not sure I see how one would reach this sort of value system. Even if we only, say, double human lifespans, that seems better than nothing. And even if things end up being pretty similar to how they are now, that won’t mean there won’t be interesting things to examine. Whether P = NP, whether the Riemann hypothesis is true, whether there’s any FTL travel, and whether supersymmetry is correct all seem to be interesting questions whether or not there’s a Singularity.
It seems like comments like this are the sort of thing that makes a lot of transhumanists and singularitarians pattern-match to being religious. A major part of many religious outlooks is the certainty that things are meaningless without their specific religion. Some forms of Christianity have made this into an important theological claim, and other religions make similar claims.
Aaarrrgh, The Enemy!
I think the key point is that the exclusion of transhuman and posthuman tech makes the scope of possible futures orders of magnitude less appealing, even at maxima, than that of possible futures that do include such tech.
The jump from there to outright refusal to consider such futures and a rejection of their utility seems a bit extreme, but I would never have made or seen a parallel with religion until someone else mentioned it. IME, the grandparent comment would mostly / most frequently be interpreted (even by random people) as “Hey, wouldn’t it be really really awesome if we had X? The rest / real life seems pretty boring by comparison.”
I think you need to be leaving yourself a way bigger line of retreat here.
Agreed. Hell, a significant element of this article seems to be about making lines of retreat, and then the grandparent says “No.” with more sophisticated verbiage (though he gets points for giving examples and specifying what he was talking about).
Human life could be just as boring and pointless with the singularity. Most anything humans value probably decays into nihilism when you widen or narrow the scope sufficiently.
In this scenario I turn omnicidal. Human lives without hope in general are not worth living.
I worry about anyone whose worldview prefers the immanentization of their particular eschaton, followed immediately by global human extermination.
I’m sorry, what does this mean?
Bringing about the end of the existing world as predicted by their philosophy. Depending on context this might imply anything from a Singularity to a Last Judgment to a communist utopia.
Understandable, maybe even justified, but I still do and think I am right in doing so.
Then I counter by turning philosocidal.
Philosophers who decide my life is not worth living aren’t worth having around.
Edit: On the other hand I could think for a minute and throw out the naive symmetry-following and come up with a real solution that doesn’t involve violence.
“Omnicide” includes me as well. And every animal, blade of grass, every single last living cell.
I turn to Mad Science. Throwing all ethical concerns out the window, I become interested only in achievement for the sake of achievement. I build Wonders and forget myself in them. Starting with the development of combustible lemons.
And if the other humans disagree with what you intend to do with them?
Kill them anyway?
Why? If they assert that despite your concerns they prefer existence to non-existence, why do you persist? Do you think that you can predict from your own preferences what the true preference is of others?
Why should I care about the preferences of others?
I don’t think I can argue why you should in any useful fashion (beyond some sort of prisoner’s dilemma sort of situation) but this isn’t that relevant since given these and other comments by Armok, I suspect that they do care.
Fair enough.
I care about the happiness of others. I care about their growth, them having complex fun, and all the myriad other human values. However, insofar as this overlaps with their values, it is purely incidental. If I encounter a paperclipper, I will give it complex fun and happiness and growth as a person, but no paperclips.
Ok. So which of these values suggests that omnicide is the best option if one doesn’t have a Singularity?
The ones that value pain, death and futility negatively.
And those outweigh complex fun and the other human values?
In the scenario we are talking about, there would be almost zero of those things, outweighed by the suffering by many orders of magnitude.
This sounds to me like one has been almost spoiled by the possibility of such incredibly large amounts of complex fun that more down-to-earth mundane levels look like they are close to zero.
Or maybe you have never suffered like many, many in this world do and cannot imagine it. I’ve had more suffering than there has been fun in all of human history combined, and there are plenty of people who have it vastly worse than me. I wouldn’t be all that surprised if it’s a majority of humans. And I also wouldn’t be surprised if the average insect, obviously not capable of having any fun, produced a similar amount of suffering per hour while being eaten from the inside out...
So how would you know this?
Incidentally, most of this reply isn’t that relevant for another reason: The OP and discussion isn’t about the exact status quo. It doesn’t for example rule out weaker forms of transhumanism which minimize suffering.
Diego, anything that improves the human condition is ‘transhuman.’ Cooking, Jethro Tull’s seed drill, vaccination, education, human rights as a social convention.… We’ll do the best we can within the constraints we face.
That’s misleading and unhelpful. There are many people in favor of transhumanism and people who specifically oppose it but still favor new technologies. So that broad a notion of transhumanism doesn’t capture the intuition people have. Transhumanism seems to focus on the use of technology to specifically increase human intellectual and physical capability, and to greatly extend lifespans. That would seem to be a more useful definition for capturing what people mean.
One perspective is that Transhumanism is nothing but simplified humanism. Eliezer asks: “Doesn’t that make the philosophy trivial...?” and answers in the negative.
But I appreciate the other perspective that answers: yes, this just trivializes the philosophy.
Of the 7 Extropian Principles, another take on Transhumanism, only one is “Intelligent technology.” If you went by the other 6, would it still be a Transhumanist life-view?
Sure, one can see it as the logical outgrowth of simplified humanism, but that’s still a distinct claim (and to some extent a third way of reading it). If one sees transhumanism in that way, then one can argue that people who favor technological and medical improvements but not the full gamut of extended lifespans, etc., are being inconsistent, but that’s a distinct claim.
Or the number of posts per week by a certain overeager LWer from South America.
Would you rather I had created a Proper American White Ivy League Male profile, though fake, and distributed my posts between the real one and the fake one?
Or would you just prefer the posts to be spread over a larger time span to get a chance to give a deep and interesting response to each?
Sorry, I was unclear. I don’t care how often people post or how many nicks they use, as long as the content is good. I don’t think that using a sock puppet would improve your post quality.