I didn’t say that investigation into the possibilities of life lengthening should be prevented or inhibited. One of our most primal urges is to survive at any cost, and human curiosity about what is possible will obviously lead us into these areas of research. However, I think that if these methods proved viable to the extent you suggest (1,000-year life spans), this would ultimately be ruinous for the human race. I think you agree here, as the situation you describe as a likely outcome is by no means desirable:
That power will come into the possession of a minority. It may be a minority numbering in the millions, a globally distributed technology-using class which then becomes the de facto posthuman technocracy of Earth. And it would then be the cultural politics and other internal politics of that technocracy which decides what becomes of the rest of us. They might keep Earth as a nature preserve for old-style humans, policed to ensure that we don’t independently reinvent the technologies of power. They might create a social order which permits natural humans to join the techno-aristocracy, but only after they have shown a readiness to be responsible, and a psychological capacity to be responsible.
The definition of the “relinquish” option argues that the prevention of certain technological investigations is characteristic of a totalitarian society. While I agree with that, I have to say that the society you describe sounds worse. So maybe, if what you describe were really a likely outcome, we should think about the value of restrictive measures now to prevent much bigger infringements of rights in the future. The prospects you describe as inevitable are deeply depressing: an elite group of humans enforcing the complete subjugation of the rest of the planet.
You also dismiss the problems of overpopulation, lack of resources, and colonisation of space, despite the fact that these issues are nowhere near being solved now. If it were possible to colonise space on a massive scale, this again, I feel, would be completely undesirable. Instead of looking for contingency plans, we should confront the problems of global warming and the lack of food and energy sources directly.
Essentially, I think the desire for a prolonged life span is born out of fear of death, eternity, and uncertainty. This is a natural human reaction to death, but in the end, even if we could push death further and further away, we would still have to face it at some point. Most people don’t need an extra 900 years of life. I don’t think we should allow our selfishness to corrupt the natural continuation of the human race.
I find that to be a stunningly selfish and elitist point of view. How do you know, and why are you deciding for them?

How? By “most people” I’m implying no one; I’m not suggesting there are some people who deserve extra life and some who don’t. Sorry if that isn’t clear; where I’m from, that would be implicitly understood.
My point was that you’re thinking that you know enough to decide that for everyone. You’re also bringing in a concept of need (need for what?) when I think the relevant question is whether people want to live longer.
And I’m including myself in that “no one” too, obviously! I don’t know whether I need to make that explicit, but I just realised you said selfish as well as elitist.
To my mind, the selfishness was that you thought you could reasonably make that sort of decision for the whole human race.

Ok, I don’t think I would use the word selfish in that context (arrogant or misguided, maybe), so I wasn’t sure what you meant. Anyway, a lot of people nearing the end of life say that they are happy to accept death; many very old people even wish for it. Obviously people have regrets, but I think you would have these regardless of life span. I’m not disputing the fact that many people do wish for immortality or a massively increased life span.
I brought up the question of need because of the possible negative impact that the invention of immortality could have on the human race and the rest of the world. If it is just a question of desire, then the possible consequences have to be more closely scrutinised than if it actually served a purpose.
I think you agree here, as the situation you describe as a likely outcome is by no means desirable … an elite group of humans enforcing the complete subjugation of the rest of the planet.
It’s not subjugation if all they do is prevent independent outbreaks of destructive technology.
Looking back, I described four forms of posthuman technocracy, which we could call wilderness, meritocracy, exploiter, and cult. (Don’t hesitate to suggest better names if you think of them.) Wilderness and meritocracy are the ones that you quoted, and I can’t view them as bad; in fact, they could be positively utopian. We’re talking about a world which has solved the crisis created by technology, by arriving at a social and cultural form that can employ technology’s enormous potential constructively.
Such a world has to ensure that the crisis isn’t recreated by independent reinvention, and it has to regulate its own access to techno-power. The “policing” is for the first part, and “entry exams for the techno-aristocracy” is for the second part. Do you find the prospect of even that much structure depressing, or is it just that you think the dystopian possibilities would inevitably come true?
You also dismiss the problems of overpopulation, lack of resources, and colonisation of space … Instead of looking for contingency plans, we should confront the problems of global warming and the lack of food and energy sources directly.
We should distinguish between any exacerbation of existing problems due to longevity (your first list), and the problems that already exist and would continue to exist even without extra longevity (your second list). Just so we have names for things, I’ll call the second type of problem a “sustainability problem”; a technological solution to a sustainability problem, a “sustainability technology”; the first type of problem, an “exacerbation problem”; and a technology implicated in an exacerbation problem, a “destabilizing technology”. So we can call, for example, solar cells a sustainability technology, and stem cells a destabilizing technology. (These are precisely the sort of ultra-broad categories that can lead you astray, because under certain circumstances, solar cells might be destabilizing and stem cells might be sustainability-enhancing. But I felt the need for some extra vocabulary.)
So I summarize this part of your position as follows: we should prioritize the solution of sustainability problems, and avoid exacerbation problems by staying away from destabilizing technologies.
There’s a large number of sub-issues we could explore here, but I see the central fact as the failure of relinquishment as anything more than a local tactic that buys time. Someone somewhere will develop the destabilizing technologies. Independent development can’t be prevented or “policed” except by someone already possessing similar powers. So ultimately, someone somewhere must learn how to live with personally having access to destabilizing technologies, even if these powers are locked away most of the time and only brought out in a crisis. The various “posthuman technocracies” that I sketched are speculations about social forms that don’t keep advanced technology completely sealed away, but which nonetheless have some sort of resilience.
I don’t think we should allow our selfishness to corrupt the natural continuation of the human race.
Well, the moral and existential dimension of these discussions is hotly contested. I focused on politics and pragmatism above, but I’ll try to return to the other topics in the next round.
You didn’t say destructive technologies; you said “technologies of power”, implying that the technological and scientific domains in this hypothetical future world would be stifled or non-existent in order to prevent humans from grasping power for themselves and threatening the reign of this “posthuman technocracy”. This sounds like subjugation to me. You also say they would “decide what would become of the rest of us”. This also sounds like complete totalitarian domination of the “old-style human beings”. The ideal world for me would be an egalitarian society in which everyone is afforded the same indispensable rights and liberties, where the power balance is roughly equal from person to person, or where the leadership is transparent in its governing and is not afforded liberties others are not. Judged against that ideal, what you describe does not sound like a utopian society.
I’m not backing the prevention of investigation into these “destabilising technologies”, as you have christened them. I think that the expansion of knowledge in every domain is desirable, though we should work out better ways of managing the consequences of such knowledge in some cases. But I am confused as to why you think that relinquishment now is unacceptable (not that I’m advocating that it is acceptable), yet in the future you would welcome the idea that humans would be policed in order to prevent the development of technologies that could threaten the existence of the elite posthumans, or any independent innovation. Isn’t that analogous to a hypothetical example of present-day humans policing or preventing technologies that could threaten their existence in the future? Yet you reject that strategy as relinquishment. If you think this is an unsustainable method now, then why do you think it will be a viable solution in the future? Do you think that it would be more acceptable because it would be easier to enforce in a more strictly policed and less tolerant society? Do you object to “relinquishment” on an ideological basis or a pragmatic basis?
Also, I’m not sure we need the distinction between the two kinds of problems: global warming, lack of resources (food/energy), and overpopulation are all problems that would continue to exist without longevity and would be exacerbated by it. Colonisation of space is a separate issue; it isn’t a problem as such, except in that it is not currently possible, so some may find it problematic in this respect.
I didn’t just mean political power or power over people. Consider the “power” to fly to another country or to talk with someone on the other side of the world. Science and technology produce a lot of powers like that. The advance of knowledge to the point that you could rejuvenate a person or revive someone from cryonics implies an enormous leap in that sort of power.
Total relinquishment doesn’t work because it requires absolutely everyone else to be persuaded of your view. If just one country opts out and keeps doing R&D, then relinquishment fails. But a society where advanced technology does already exist has some chance of controlling where and how it gets reinvented. Such a society could become overtly hierarchical and have no notion of equal rights.
But even a society with deep egalitarian values would need to find a way to assimilate these powers, rather than just renounce them, to remain in control of its own destiny. Otherwise it risks waking up one day to find that a different set of values is in charge, or just that someone played with matches and burned down the house.
One irony of technological power is that it offers ways for egalitarian values to survive, even in deeply destabilizing circumstances. A theme of the modern world is the fear of homemade WMDs. As knowledge of chemistry, biology, nanotechnology, robotics… advances, it becomes increasingly easy for an isolated lab to cook up a doomsday device. One response to that would be to just strip 99% of humanity of the power and the right to make a technology lab. The posthuman technocrats live in orbit and monitor the Earth to see that no-one’s in a cave reinventing rocketry, and everyone else goes back to hunting and gathering. Luddism and transhumanism reach a compromise.
Hopefully you can see that a luddite world with a small transhuman elite is more stable than a purely luddite world. The hunter-gatherers can’t do much about it if someone restarts the industrial revolution. This is why total relinquishment is impractical.
But what about transhuman egalitarianism? Is that workable? If anyone can make a doomsday lab, won’t it all come crashing down? This is why there’s a “neuro” in “neuro-technocracy”. Advanced technology’s doomsday problem is mostly due to malice (I want to destroy) or carelessness (I’m playing with matches). Malice and carelessness are psychological states. So are benevolence and competent caution. Human society already has a long inventory of ways, old and new, in which it tries to instil benevolence and competence in its members, in which it watches for sociopathy and mental breakdown, and tries to deal with such problems when they arise.
In a society with truly profound neuroscientific knowledge, all of that will be amplified too. It should be possible to make yourself smarter or more moral by something akin to increasing the density of neural connections in the relevant parts of your brain. I don’t mean that neural connectedness is seriously the answer to everything; it’s just a concrete example for the purposes of discussion. The idea is that there are properties of your brain which have an impact on the traits that determine whether you can or can’t be trusted with the keys to advanced technology.
For a culture that has absorbed advanced technology, neuroscience and neurotechnology can become one of the ways that it protects and propagates itself, alongside more traditional methods like education and socialization. This is yet another futurist frontier where there’s an explosion of possibilities which we hardly have the concepts to discuss. We still need to work through the cartoon examples, like spacepeople and cavepeople, in order to get a feel for how it could function, before we try to be more realistic.
So let’s go back to the scenario where we have high technology in orbit and a new stone age on Earth. I set that up as an example of a nonegalitarian but stable scenario. Now let’s modify the details a bit. What if stoners and spacers have a shared culture, and it’s a matter of choice where you reside and how you live? All a stoner has to do is go to the local communication monolith and say, “I want to join the spacers.” As a member of solar-system civilization, they will have access to all those technologies that are forbidden on Earth. So a condition of solar citizenship might be a regular and ongoing neuropsychological examination and tune-up, to ensure that bad or stupid impulses aren’t developing. Down on Earth they can be a little more lax, but solar society is full of dangerously powerful technologies, and so it’s part of the social contract that everyone stays morally and intellectually calibrated, to a degree that is superhuman by present-day standards. And when someone migrates from the stone age to the space age, it’s a condition of entry that they adopt space-age standards of character and behavior, and the personal practices which protect those standards.
That’s the cartoon version of benevolent space-age neurotechnocracy. :-)