God, grant me the serenity
To accept the things I cannot change;
Courage to change the things I can;
And wisdom to know the difference.

-- adapted from Reinhold Niebuhr
Is this a piece of traditional deep wisdom that’s actually wise?
I think the local version would be something like, “May my strength as a rationalist give me the ability to discern what I can and cannot change, and the determination to make a desperate effort at the latter when remaining uncertainty allows that this has the highest expected utility.”
(Where leaving out or replacing ‘strength as a rationalist’ makes the quote a whole lot more appealing to me, if to nobody else. Heck, even the jargon term ‘luminosity’ would feel better.)
That was beautiful. :)
God grant me the strength to change the things I can,
The intelligence to know what I can change,
And the rationality to realize that God isn’t the key figure here.
Cute, but you just undermined “strength” :)
What I like about the serenity prayer (at least the way I interpret it) is that it puts the priority on changing things; serenity is just a second-best option for things that are unchangeable.
In that respect, it’s like a transhumanist slogan. With something like life extension, I want to point to the serenity prayer and say: we can change this, which means we need to have the courage to change it. Death at the end of the current lifespan isn’t something we should serenely accept, because we can change it. The serenity prayer calls for courage and action to follow through and make those changes.
Part of the difficulty is that the wisdom to know the difference also requires the wisdom to change your mind. Once people accept that something cannot be changed, then their serenity-producing mechanisms prevent them from reconsidering the evidence and recognizing that maybe it really can (and should) be changed.
If I were going to alter the serenity prayer, that’s one thing I’d add. In Alicorn’s version, that means the strength as a rationalist to distinguish what I can and cannot change, and to update those categorizations as new evidence arises.
Friends, help me build the serenity to accept the things I cannot change; the courage to change the things I can; and the wisdom to continually update which is which based on the best available evidence.
Er, how about the wisdom to know whether a thing should be changed in the 1st place?
A good point… although I would remove the ‘should’ and instead emphasise the coherence and self-awareness to know which things I want.
I think it genuinely wise; it contains three related important concepts: 1) You should try to make the world a better place, 2) You shouldn’t waste your effort in attempting 1 in situations where you will almost certainly fail, 3) in order to succeed at 1 and 2 you need to be able to understand the world around you; a desire to effect change isn’t enough.
The only thing that’s missing from it is something about having the insight to distinguish good changes from bad ones.
You shouldn’t waste your effort in attempting 1 in situations where you will almost certainly fail

Not quite. You want to consider the expected value of the attempt, not the raw probability of success. A 0.1% chance of curing cancer or ‘old age’ is to be preferred over an 80% chance of winning the X-Factor (particularly given that the benefit of the latter accrues only to yourself).
It would definitely be foolish to waste effort attempting something that will certainly fail.
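As a toy calculation, with utility numbers invented purely for illustration (they are mine, not the commenter’s), the expected-value point looks like this:

```python
# Toy expected-value comparison; the utility numbers are made up for illustration.
# Expected utility of an attempt = probability of success * utility if it succeeds.

p_cure, u_cure = 0.001, 10_000_000   # 0.1% shot at curing cancer / 'old age'
p_xfactor, u_xfactor = 0.80, 100     # 80% shot at winning the X-Factor

print(f"EU(cure attempt) = {p_cure * u_cure:,.0f}")        # 10,000
print(f"EU(X-Factor)     = {p_xfactor * u_xfactor:,.0f}")  # 80
```

Despite an 800x lower chance of success, the long shot comes out roughly 125x ahead in expected utility under these made-up numbers, so it is the attempt worth making.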
I agree with your qualifications; I was oversimplifying. And the reason I didn’t say ‘certainly fail’ is that I try to avoid using the word “certain” unless I’m dealing with purely logical systems.
A worthy goal. Usually that will prevent you from making claims that are technically wrong despite being inspired by good thinking. This seems to be a rare case where defaulting to not using an absolute introduces the technical problem.
Just an indication that one should avoid absolutes: even an absolute directive to avoid absolutes ;)
I don’t think that

1) You should try to make the world a better place

is actually implied by the original wording. Clippy could also view

God, grant me the serenity To accept the things I cannot change; Courage to change the things I can; And wisdom to know the difference.

as wise, though in vis case, “the things I cannot change” would be closer to “the resources I am unable to apply to paperclips”. One can’t expect too much specificity from a 25-word quote… I’m taking your point

The only thing that’s missing from it is something about having the insight to distinguish good changes from bad ones.
(which I agree with) as meaning that one should have the insight to distinguish instrumental subgoals that actually will advance one’s ultimate goals from subgoals that don’t accomplish this. (This is separate from differences in ultimate goals.)
That all sounds right to me.
Yes.
Except for the God grant me part, yeah.
I think Mike Vassar said something like “you should not have preferences over the current states of the world, only over your emotional dispositions”. It’s a second-hand quote, but seems like a good way of putting it.
Are you sure you don’t have his comment backwards?
I didn’t expect much karma for this, but WTF with the downvote?
Because the quote seems to be endorsing wireheading, which is pretty universally condemned here, and seems of little relevance anyway.
As for the false suspicion of wireheading: I am not sure about the attitudes here, but isn’t it just a value? I mean, I don’t think I am interested in wireheading, but if someone truly thinks it’s for them, why would we condemn it? I thought the forum was about being rational, not about a specific set of values.
Your point is valid.
Where it does make sense to call another’s choice to wirehead a mistake (rather than just a difference in values) is when that person thinks that wireheading is what they want but they are actually mistaken about their own values or how to achieve them.
It is a little counterintuitive, but even though values are entirely subjective, people are actually not the absolute authority on what their own subjective preferences are. Subjective preferences are objective facts inasmuch as they are represented by the physical state of the universe (particularly the part of the universe that is the person’s head). People’s beliefs about that part of the universe, and the implications thereof, can be (and often are) wrong. This particularly applies to abstract concepts: we aren’t very good at wiring up our abstract beliefs with the rest of our desires.
Absolutely. In a way we owe this understanding to Freud; he popularized the notion that people do not know what they are really pursuing. Of course, he thought they were pursuing sex with their mothers...
Could we instead say “this understanding is predated by Freud’s popularized notion...”? There is no debt if the concept was arrived at independently. And this is a general philosophical point that is not limited to humans specifically, while Freud’s is proto-psychology.
+(-1)
Did Vassar really say something like that? I didn’t think he was, well, silly.
I didn’t either; fortunately, no source has been presented, so I don’t need to believe he said that and can postulate that he actually said the opposite or was engaged in criticizing such a position.
I can confirm he said something like it. However, what he meant by it was that our emotions should be keyed to how we act, not to how the universe is. We should be rewarded for acting to produce the best outcome possible. We don’t control what the universe is, just our actions, so we shouldn’t be made to feel bad (or good) because of something we couldn’t control.

For example, if ten people were going to die but you managed to save five of them, your emotional state shouldn’t be sad, because your emotions should reward the fact that you saved five people. Equivalently, you shouldn’t really be all that happy that a thousand people get something that makes them really happy when your actions reduced the number of people who received it by 500: just because those people are better off, you shouldn’t be emotionally rewarded, because you reduced the number who would be happy. If the best you can make the universe is horrible, you shouldn’t be depressed about it, because that only adds disutility and doesn’t incentivize acting to bring the best situation about. Conversely, if the worst you can do is pretty damn good, you shouldn’t be happy about it, because you shouldn’t incentivize leaving utility on the table.

Basically, it’s an endorsement of virtue ethics for human-type minds.
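A minimal sketch of that scoring idea in code; the scoring rule and all the numbers are my own illustration, not anything Vassar specified:

```python
# Contrast grading emotions by raw outcome with grading them by how the chosen
# action compares to what was actually achievable. Illustrative invention only.

def action_score(outcome: float, best: float, worst: float) -> float:
    """Where the outcome falls within the range the agent could control:
    1.0 means you did the best you could, 0.0 means you did the worst."""
    return (outcome - worst) / (best - worst)

# Ten people were going to die; saving five was the best achievable, and you did it.
# The raw outcome is grim, but the action deserves full emotional credit.
print(action_score(outcome=5, best=5, worst=0))          # 1.0

# 1,500 people could have received the boon; you delivered it to only 1,000,
# and even your worst option reached 900. Good outcome, poor action score.
print(action_score(outcome=1000, best=1500, worst=900))  # ~0.17
```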
Thanks, that is a deeper understanding than I got from it second-hand (though I did not think it meant wireheading). I understood it as a warning against having and reacting to a false sense of control, which I often see: “accepting that there are (many) things you cannot change”.
Equivalently, you shouldn’t really be all that happy that a thousand people get something that makes them really happy when your actions reduced the number of people who received it by 500.

I’ve got no problem with being happy that a thousand people get a bunch of utility (assuming they are people for whom I have altruistic interest). I would not be glad about the fact that I somehow screwed up (or was unlucky) and prevented even more altruistic goodies, but I could be glad (happy) that some action of mine or an external cause resulted in the boon for the 1,000.
I have neither the need nor the desire to rewire my emotions such that I could unload a can of Skinner on my ass.