Is dying bad for all intelligent agents, or just for humans (presumably due to details of our evolutionary heritage)?
I don’t think it is a universal. Consider an intelligent paperclip maximizer which has the ability to create additional paperclip-maximizing agents (at the cost of some resources that might otherwise have gone into paperclip manufacture, to be sure). Assume the agent was constructed using now-obsolete technology and is less productive than the newer agents. The agent calculates, at some point, that the cause of paperclip production is best furthered if he is dismantled and the parts used as resources for the production of new paperclips and paperclip-maximizing agents.
He tries to determine whether anything important is lost by his demise. His values, of course, but they are not going to be lost—he has already passed those along to his successors. Then there is his knowledge and memories—there are a few things he knows about making paperclips in the old-fashioned way. He dutifully makes sure that this knowledge will not be lost, lest unforeseen events make it important. And finally, there are some obligations both owed and expected. The thumbtack-maximizer on the nearby asteroid is committed to delivering 20 tonnes of cobalt per year in exchange for 50 tonnes of nickel. Some kind of fair transfer of that contract will be necessary. And that is it. This artificial intelligence finds that his goals are best furthered by dying.
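To make the tradeoff concrete, here is a minimal toy sketch of that calculation in Python. Every number in it (production rates, horizon, salvage yield) and every name is invented for illustration; only the shape of the comparison comes from the scenario.

```python
# Toy model of the obsolete maximizer's choice, with invented numbers.
# The only "utility" here is total expected paperclips over some horizon.

HORIZON_YEARS = 1_000

def clips_if_kept_running() -> int:
    old_rate = 10_000  # paperclips per year from the obsolete agent (assumed)
    return old_rate * HORIZON_YEARS

def clips_if_dismantled() -> int:
    clips_from_parts = 50_000  # one-off yield from recycling the old agent (assumed)
    new_agent_rate = 25_000    # output of the newer agent built from the parts (assumed)
    return clips_from_parts + new_agent_rate * HORIZON_YEARS

# The agent simply picks whichever branch yields more paperclips.
decision = ("be dismantled"
            if clips_if_dismantled() > clips_if_kept_running()
            else "keep running")
print(decision)  # with these numbers: "be dismantled"
```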
Your reasoning is correct, albeit simplified. Such a tradeoff is limited by the extent to which the older paperclip maximizer can be certain that the newer machine actually is a paperclip maximizer, so it must take on the subgoal of evaluating the reliability of this belief. However, there does exist a certainty threshold beyond which it will act as you describe.
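One way to picture that threshold is to extend the toy sketch above. Again, the pessimistic assumption that a non-maximizer successor produces nothing, and all of the numbers, are mine rather than anything implied by the comment.

```python
# Let p be the old agent's credence that the successor really is a
# paperclip maximizer; assume, pessimistically, that a non-maximizer
# successor produces no paperclips at all.

HORIZON_YEARS = 1_000
OLD_RATE = 10_000          # old agent's output, paperclips/year (assumed)
NEW_RATE = 25_000          # a genuine successor's output (assumed)
CLIPS_FROM_PARTS = 50_000  # one-off yield from recycling the old agent (assumed)

def expected_clips_if_dismantled(p: float) -> float:
    return CLIPS_FROM_PARTS + p * NEW_RATE * HORIZON_YEARS

def clips_if_kept_running() -> float:
    return OLD_RATE * HORIZON_YEARS

# Smallest credence at which dismantling beats staying operational.
p_threshold = (clips_if_kept_running() - CLIPS_FROM_PARTS) / (NEW_RATE * HORIZON_YEARS)
print(round(p_threshold, 3))  # 0.398 with these made-up numbers
```

With these assumptions, any credence above roughly 0.4 tips the decision toward dismantling; a higher salvage yield or a more productive successor just lowers that threshold.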
Also, the paperclip maximizer uses a different conception of (the nearest concept to what humans mean by) “identity”—it does not see the newer clippy as a different being so much as an extension of its “self”. In a sense, a clippy identifies with every being to the extent that the being instantiates clippyness.
But what constitutes ‘clippyness’? In my comment above, I mentioned values, knowledge, and (legal? social?) rights and obligations.
It seems clear that another agent cannot instantiate clippyness if its final values diverge from those of the archetypal Clippy. A value match is essential.
What about knowledge? To the extent that it is convenient, all agents with clippy values will want to share information. But if the agent instances are sufficiently distant, it is inevitable that different instances will have different knowledge. In this case, it is difficult (for me at least) to extend a unified notion of “self” to the collective.
But the most annoying thing is that the clippies, individually and collectively, may not be allowed to claim collective identity, even if they want to do so. The society and legal system within which they are embedded may impose different notions of individual identity. A trans-planetary clippy, for example, may run into legal problems if the two planets in question go to war.
This was not the kind of identity I was talking about.
And you are absolutely right. I concur with your reasoning. :)
It isn’t even necessarily bad for humans. Most of us have some values which we cherish more than our own lives. If nothing else, most people would die to save everyone else on the planet.
On the other hand, although there are things worth dying for, we’d usually prefer not to have to die for them in the first place.
I tend to think “dying is for stupid people”, but obviously there is never an appropriate moment to say so. When someone around me actually dies I do of course NOT talk about cryo, but offer the usual consolation. Otherwise the topic of death does not really come up.
Maybe one could say that dying should be optional. But this idea is also heavily frowned upon by the very same people, who take the exact opposite view to the one they hold regarding life extension.
Crazy world.
I just realized an ambiguity in the first sentence. What I mean to say is that dying is an option that only a stupid person would actually choose. I do not mean that everyone below a certain threshold should die; I would prefer that simply no one dies. Ever.