You didn’t say destructive technologies; you said “technologies of power”, implying that the technological and scientific domains in this hypothetical future world would be stifled or nonexistent in order to prevent humans from grasping power for themselves and threatening the reign of this “posthuman technocracy”. That sounds like subjugation to me. You also say they would “decide what would become of the rest of us”, which sounds like complete totalitarian domination of the “old-style human beings”. My ideal world would be an egalitarian society in which everyone is afforded the same indispensable rights and liberties, where the power balance is roughly equal from person to person, or where the leadership is transparent in its governing and is not afforded liberties others are not. By that standard, this does not sound like a utopian society.
I’m not backing the prevention of investigation into these “destabilising technologies”, as you have christened them. I think the expansion of knowledge in every domain is desirable, though in some cases we should work out better ways of managing the consequences of that knowledge. But I am confused as to why you think relinquishment now is unacceptable (not that I’m advocating it), yet in the future you would welcome the idea of humans being policed to prevent the development of technologies that could threaten the existence of the elite posthumans, or to prevent any independent innovation. Isn’t that analogous to present-day humans policing or preventing technologies that could threaten their existence in the future? Yet you reject that strategy as relinquishment. If you think this is an unsustainable method now, why do you think it will be a viable solution in the future? Do you think it would be more acceptable because it would be easier to enforce in a more strictly policed and less tolerant society? Do you object to “relinquishment” on an ideological basis or a pragmatic one?
Also, I’m not sure we need the distinction between the two kinds of problems: global warming, lack of resources (food/energy) and overpopulation are all problems that would continue to exist without longevity and would be exacerbated by it. Colonisation of space is a separate issue; it isn’t a problem as such, except in that it is not currently possible, so some may find it problematic in that respect.
I didn’t just mean political power or power over people. Consider the “power” to fly to another country or to talk with someone on the other side of the world. Science and technology produce a lot of powers like that. The advance of knowledge to the point that you could rejuvenate a person or revive someone from cryonics implies an enormous leap in that sort of power.
Total relinquishment doesn’t work because it requires absolutely everyone else to be persuaded of your view. If just one country opts out and keeps doing R&D, then relinquishment fails. But a society where advanced technology does already exist has some chance of controlling where and how it gets reinvented. Such a society could become overtly hierarchical and have no notion of equal rights.
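To put a number on that unanimity requirement, here is a toy back-of-the-envelope model (my own illustration, nothing rigorous): assume N independent actors who each honour a relinquishment pact with probability p. The pact holds only if every single one complies, so its odds are p^N, which collapses as N grows.

```python
# Toy model of why total relinquishment needs unanimity (illustrative only).
# Assumption: n independent actors each honour the pact with probability p;
# relinquishment holds only if nobody at all defects.

def relinquishment_holds(p: float, n: int) -> float:
    """Probability that all n actors comply, i.e. zero defectors."""
    return p ** n

for n in (10, 50, 200):
    print(f"{n:>3} actors, 99% individual compliance: "
          f"{relinquishment_holds(0.99, n):.3f}")
# -> 0.904, 0.605, 0.134: even near-perfect compliance fails at scale,
#    because a single defector (one country's R&D program) breaks the pact.
```

Contrast that with the second case: a society that already possesses the technology doesn’t need unanimity, only the capacity to notice and respond when someone starts reinventing it.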
But even a society with deep egalitarian values would need to find a way to assimilate these powers, rather than just renounce them, if it is to remain in control of its own destiny. Otherwise it risks waking up one day to find that a different set of values is in charge, or simply that someone played with matches and burned down the house.
One irony of technological power is that it offers ways for egalitarian values to survive, even in deeply destabilizing circumstances. A theme of the modern world is the fear of homemade WMDs. As knowledge of chemistry, biology, nanotechnology, robotics… advances, it becomes increasingly easy for an isolated lab to cook up a doomsday device. One response to that would be to just strip 99% of humanity of the power and the right to make a technology lab. The posthuman technocrats live in orbit and monitor the Earth to see that no-one’s in a cave reinventing rocketry, and everyone else goes back to hunting and gathering. Luddism and transhumanism reach a compromise.
Hopefully you can see that a luddite world with a small transhuman elite is more stable than a purely luddite world. The hunter-gatherers can’t do much about it if someone restarts the industrial revolution. This is why total relinquishment is impractical.
But what about transhuman egalitarianism? Is that workable? If anyone can make a doomsday lab, won’t it all come crashing down? This is why there’s a “neuro” in “neuro-technocracy”. Advanced technology’s doomsday problem is mostly due to malice (I want to destroy) or carelessness (I’m playing with matches). Malice and carelessness are psychological states. So are benevolence and competent caution. Human society already has a long inventory of ways, old and new, in which it tries to instil benevolence and competence in its members, in which it watches for sociopathy and mental breakdown, and tries to deal with such problems when they arise.
In a society with truly profound neuroscientific knowledge, all of that will be amplified too. It should be possible to make yourself smarter or more moral by something akin to increasing the density of neural connections in the relevant parts of your brain. I don’t mean that neural connectedness is seriously the answer to everything; it’s just a concrete example for the purposes of discussion. The idea is that there are properties of your brain which affect the traits that determine whether you can or can’t be trusted with the keys to advanced technology.
For a culture that has absorbed advanced technology, neuroscience and neurotechnology can become one of the ways that it protects and propagates itself, alongside more traditional methods like education and socialization. This is yet another futurist frontier where there’s an explosion of possibilities which we hardly have the concepts to discuss. We still need to work through the cartoon examples, like spacepeople and cavepeople, in order to get a feel for how it could function, before we try to be more realistic.
So let’s go back to the scenario where we have high technology in orbit and a new stone age on Earth. I set that up as an example of a nonegalitarian but stable scenario. Now let’s modify the details a bit. What if stoners and spacers have a shared culture, and it’s a matter of choice where you reside and how you live? All a stoner has to do is go to the local communication monolith and say, I want to join the spacers. As a member of solar-system civilization, they will have access to all those technologies that are forbidden on Earth. So a condition of solar citizenship might be a regular and ongoing neuropsychological examination and tune-up, to ensure that bad or stupid impulses aren’t developing. Down on Earth they can be a little more lax, but solar society is full of dangerously powerful technologies, and so it’s part of the social contract that everyone stays morally and intellectually calibrated, to a degree that is superhuman by present-day standards. And when someone migrates from the stone age to the space age, it’s a condition of entry that they adopt space-age standards of character and behavior, and the personal practices which protect those standards.
That’s the cartoon version of benevolent space-age neurotechnocracy. :-)