I think there is more conflict between those statements than you seem to see, and I agree with the second but not the first.
I think P(me convincing someone of X|we’re talking about X) is vastly greater than P(me convincing someone of X|we’re not talking about X). Is the following rewording clearer?
“...first I have to get them to actually engage me in a conversation about it.”
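To make that concrete, here is a minimal sketch of the arithmetic. The two conditional probabilities are pure assumptions chosen for illustration, not estimates; only the structure of the claim (via the law of total probability) is being shown.

```python
# Toy illustration: why getting them into the conversation dominates.
# Both probabilities below are invented for illustration, not measurements.
P_CONVINCE_GIVEN_TALKING = 0.10       # assumed: modest odds once engaged
P_CONVINCE_GIVEN_NOT_TALKING = 0.001  # assumed: near-zero odds otherwise

def p_convince(p_talking: float) -> float:
    """Overall P(convince), by the law of total probability."""
    return (P_CONVINCE_GIVEN_TALKING * p_talking
            + P_CONVINCE_GIVEN_NOT_TALKING * (1.0 - p_talking))

for p_talking in (0.0, 0.2, 0.8):
    print(f"P(talking about X) = {p_talking:.1f} -> "
          f"P(convince) = {p_convince(p_talking):.4f}")
```

With these made-up numbers, moving P(we’re talking about X) from 0 to 0.8 raises the overall probability from 0.0010 to 0.0802, nearly two orders of magnitude, which is all the reworded sentence is claiming.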
I would assume that if I couldn’t think of another way, I was simply not a skilled enough manipulator, not that there wasn’t another way.
If there were a way to convince an arbitrary person of an arbitrary proposition that I’m capable of discovering, it would be a global instant-victory condition: EY would have figured it out, and SIAI would have income in the millions. The best I can hope for is the ability to convince an arbitrary person of an arbitrary factual position. LW rationality techniques may well be the shortest route once the person is seriously engaged with the issue. Also, there is the question of whether I want to be the sort of person who would do that kind of manipulation, even if I had the capability.
According to EY, rationality is all about winning. Or, to quote a quote within a quote, “The Way of the Ichi school is the spirit of winning, whatever the weapon and whatever its size.”
If your moral dilemma is between your parents dying and your using a bit of manipulation to keep them (potentially) alive and happy forever, and you are in doubt, maybe you should consider reexamining how cryonics fits into your personal utility function.
I thought about it, and you are right: my utility function values my parents’ continued life no matter what their own utility function has to say on the matter.
Maybe I need to reexamine why manipulation seems to be rated so low in my utility function. I can’t think of a single time when I’ve purposely and consciously manipulated someone. Surely, that can’t be optimal if I’m expecting to pull off a large amount of manipulation here. If I’m going to accomplish this at all, it has to be premeditated and skillfully executed. Where can I ethically practice manipulating people?
We manipulate people all the time, our parents (and children) included. Guilt-tripping, appealing to God, to their sense of self-worth, you name it. They have done it to you, and you have done it to them. The difference is that most of the time it is done subconsciously. It might be that doing it intentionally is too repugnant to you for whatever reason. It is also possible that the actual amount of manipulation you would have to do is lower than you think, or that you might have to work over someone you care about less.
For example, suppose, hypothetically, that you convinced their pastor/rabbi/mullah that God wants all his children to use every available technological and medical advance to extend their lives (not such a far-fetched idea), and that cryonics is one of those advances, and he preached it to the congregation one day. Then you could rely on his word when explaining that you, personally, want to use it for yourself. From there it is not that large a leap to include the rest of your family in the picture.
We manipulate people all the time, our parents (and children) included. Guilt-tripping, appealing to God, to their sense of self-worth, you name it.
Yeah, that’s why I said “purposely and consciously”.
One of my parents was guilt-tripped a lot by their mother, and as a consequence deliberately taught my siblings and me to identify and be immune to guilt trips while we were very young (method: attempt to put us on ridiculous guilt trips, then slowly reduce the ridiculousness over time; it was very effective). Maybe this explains why it feels evil to me...
My go-to technique is a sub-type of the rhetorical method of getting a debater to agree to seemingly innocuous statements, and then piecing them together to show that consistency demands that those who agree with the small assertions agree with your conclusion as well.
Instead of leading someone off a rhetorical cliff one step at a time, one constructs an analogous argument on an entirely different, non-emotionally-charged subject. The other person will often make the connection without one ever raising the subject one is actually concerned with.
I’ve found that difficult to do IRL for a few different reasons:
If the target figures out what you’re up to, in my experience they react negatively.
If you introduce the analogy before the thing it’s an analogy of, it comes across as a non sequitur, and people spend more of their time trying to figure out what you’re up to than reasoning about the analogy.
On any topic remotely related to religion, people are extremely reluctant to make testable statements. If they notice the analogy, they figure out what I’m up to and shut down.
I’m totally willing to believe this is a problem with my application and not a problem with the technique, but I’ve not found it very effective.