Given the opening post, I am not sure I understand what you are saying. What about being resurrected with the people described would be an Extrovert Hell? That you don’t have any pre-revival friends?
I’m referencing a prior thread. Pre-revival friends or family are a prerequisite for me not looking at the prospect of revival with dread instead of hope.
With those values, ‘find friends who are signed up for cryonics’ sounds like the obvious plan. (Well, less obvious than the one where you kidnap your friends, cut off their heads and preserve them against their will. But more sane.)
I don’t think most of my friendships would survive kidnapping, decapitation, and non-consensual vitrification, even if my friends survived it.
A friend will help you move. A good friend will help you move a body. A great friend is the body.
That sounded pretty odd until I looked up the parent comment, I gotta tell you.
This is an incredibly good joke.
I bet that online dating and friend-making will work a lot better in the future. Can you elaborate on what is so dreadful about waking up without knowing anyone?
But, but!..

“You know what? This isn’t about your feelings. A human life, with all its joys and all its pains, adding up over the course of decades, is worth far more than your brain’s feelings of comfort or discomfort with a plan. Does computing the expected utility feel too cold-blooded for your taste? Well, that feeling isn’t even a feather in the scales, when a life is at stake. Just shut up and multiply.”
Okay, 1) I dislike the “shut up and multiply” sentiment anyway, since it’s so distinctly consequentialist. I will not shut up, and I will only multiply when everything I’m multiplying is really commensurate, including in a deontic sense. I will walk away from Omelas should I have occasion. And 2) it’s my freakin’ life. I’m not deciding to deny someone else the chance to be ferried to the future on the basis of it sounding lonely.
Is there some other significance to the links and quote that you hoped I’d extract?
The significant claim seems to be that it is often necessary to quell an instinctive reaction in order to best meet your own preferences. There are some reflectively consistent preference systems in which it is better to die than to suffer the distress of a lonely revival, but there are many more in which it is not. I take Vladimir’s suggestion to be “make sure this is what you really want, not just akrasia magnified a thousand times”.
Claims of the shape of Vladimir’s are often intended to enforce a norm upon the recipient. In this case the implied ‘should’ is of the kind “action X may best give Y what they want”, which is at least slightly less objectionable.
I did a reversal test on the preference; if everybody I cared about disappeared from my life all at once and everybody who remained was as alien as the people of the future will likely be, I would probably want to die, no cryonics required.
I bet that online dating and friend-making will work a lot better in the future. There will probably be many people in the future who appreciate your unique knowledge and want to get to know you better.
When you wake up in the future, you will probably immediately meet people from a time not so unlike our own. Going through physical and mental rehab with them could be a good way to form lifelong friendships. You are never going to be the only person from the 20th and 21st century in the future.
Can you talk more about why the future seems so dreadful to you? Stating that all possible futures are worse than death is a strong claim. Even your reversal test only assigns a “probably” to being suicidal; I think the flaw in your reasoning lies there. I don’t think that being “probably” suicidal in the future is sufficient reason not to visit the future.
In our time, we morally justify the forcible hospitalization and medication of suicidal people until they aren’t suicidal anymore. With Friendly AI, this moral justification may remain valid in the future, and once you’re on drugs or other brain enhancements, you’ll probably love life and think your first-life self absolutely insane for preferring death to glorious existence. Again, I think your desire for deep connections with other people is likely to be nearly immediately fixable in the future. This does sound a little dystopian, but I don’t think there exist very many wake-up futures in which your existential misery cannot be fixed.
To me, it seems like in nearly all cases it is worth waiting until the future to decide whether or not it is worth living.
“When you wake up in the future, you will probably immediately meet people from a time not so unlike our own. Going through physical and mental rehab with them could be a good way to form lifelong friendships. You are never going to be the only person from the 20th and 21st century in the future.”
Woman: You’re from 1999? I’m from 2029! Say, remember when we got invaded by the cybernetic ape army?
Fry: Uh… yeah. Those were some crazy times!
Yeah, uh… threatening me with psychoactive medication is not a good way to make me buy a ticket to the future.
Resistance is illogical, you will be upgraded.
I take it you read “Transmetropolitan”? I don’t think that particular scenario is very likely.
I have not read that (*googles*) series of comic books.
I believe that you are not entitled to your choice of values. Preference and priors are not up for grabs.
I cannot make heads nor tails of what you’re trying to convey.
Hmm… At least the content of my position seems to have been rehashed a lot, even if you won’t agree with it.
I believe that your opinion about what your values are has very little influence on what your values actually are, which at the backbone are human-universal values plus a lot of person-specific detail that lies so far below the level of conscious understanding that it isn’t even worth speculating about. Whenever someone states an opinion about their values being extreme, they are seriously wrong about their actual values. Consequently, acting on the misconstrued values is against the person’s own actual values.
I don’t grant nearly as much credence to the idea that there are human-universal values as most people around here seem to. People are a wacky, diverse bunch.
Also, if you have an idea about what my values Really Are that is unconnected to what I tell you about them, I don’t want you anywhere near any decisions about my life. Back! Back! The power of my value of self-determination compels you!
I get my ideas about what people’s values Really Are based on their decisions. How much weight I place on what they tell me about their values varies based on their behaviour and what they say. I don’t make it my business to be anywhere near any decisions about other people’s lives except to the extent that they could impact me and I need to protect my interests.
That assumption (and presumption!) of human-universal values scares me at times. It triggers my “if you actually had the power to act on that belief I would have to kill you” instinct.
Even with that kind of ruthless self-determination in mind, it is true that “acting on the misconstrued values is against the person’s own actual values”. Vladimir’s point is not particularly controversial; whether it applies to you or not is for you to decide, and for Vladimir to speculate on if he happens to be curious.
My decision to tell you about my values counts as a decision, doesn’t it?
Absolutely. And I weigh that information more highly coming from you than from many people, given my observations of apparent self-awareness and maturity somewhat beyond what I expect for your self-reported age. Obviously such judgements also vary based on topic and context.
In general, however, my life has been a lot simpler and more successful since realising that what people say about their values is not always a reliable indicator.
Friendly AI be the judge (I’m working on that). :-)
By the way, this reminds me of Not Taking Over the World (the world is mad and is afraid of getting saved, of course, in the hypothetical scenario where the idea gets taken seriously to begin with!).
Be sure to keep us posted on your progress. It’s always good to know who may need a dose of Sword of Good ahead of time. ;)
I don’t recall hearing that kind of an argument presented here anywhere. Yes, there have been arguments about your values shifting when you happen to achieve power, as well as seemingly altruistic behavior actually working to promote individual fitness. But I don’t think anybody has yet claimed that whenever somebody feels they have extreme values, they are wrong about them.
Furthermore—if the discussion in those referenced posts is the one you’re referring to—I’d be hesitant to claim that the consciously held values are false values. People might actually end up acting on the non-conscious values more than they do on the conscious ones, but that’s no grounds for simply saying “your declared values are false and not worth attention”. If you went down that route, you might as well start saying that since all ethics is rationalization anyway, then any consequentialist arguments that didn’t aim at promoting the maximum fitness of your genes were irrelevant. Not to mention that I would be very, very skeptical of any attempts to claim you knew someone else’s values better than they did.
There have also been posts specifically arguing that those non-conscious values might not actually be your true values.
I’m not arguing for the supremacy of non-conscious values: in many cases, people have a good sense of their actual values and consciously resolve their implications, which is what I see as the topic of Which Parts Are “Me”?. The inborn values are not a fixed form, although they are a fixed seed, and their contradictions need to be resolved.
Genes? The expression of that evil alien elder god? They don’t write a default morality.
The links relevant to my argument:
Human universal (we all share the bulk of our values),
Complexity of value (there is a lot of stuff coded in the inborn values; one can’t explain away huge chunks of this complexity by asserting them not present in one’s particular values),
Fake simplicity (it’s easy to find simple arguments that gloss over a complex phenomenon),
No, Really, I’ve Deceived Myself (it’s not a given that one even appreciates the connection of the belief with the asserted content of that belief)
These obviously don’t form a consistent argument, but may give an idea of where I’m coming from. I’m only declining to believe particularly outrageous claims, where I assume the claims are being made because of error and not because of a connection to reality; where the claims are not outrageous, they might well indicate the particular ways in which the person’s values deviate from the typical.
I suspect this community overemphasizes the extent to which human universals are applicable to individuals (as opposed to cultures), and underemphasizes individual variation. I should probably write a post regarding this at some point.
Well put. My own uncertainty with regard to my values is the main reason I’m reluctant to take “mind hacks” out for casual spins—I’ve been quite surprised in the past by how sophisticated subconscious reactions can be. That said, I don’t think I could bring myself to ignore my consciously-held values to the point of doing something as significant as signing up for cryonics, were that necessary.