HPJEV isn’t supposed to be a perfect executor of his own advice and statements. I would say that it is not the concept of heroic responsibility that is at fault, but his own self-serving reasoning, which he applies to justify breaking the rules and doing something cool. In doing so, he fails his heroic responsibility to the more than 100 people (in expectation) whose lives he might have saved by spending his time more effectively (by doing research which results in an earlier friendly magitech singularity, and buying his warm fuzzies separately by learning the spell for transfiguring a bunch of kittens or something), and HPJEV would feel appropriately bad about his choices if he came to that realisation.
you’ll drive yourself crazy if you blame yourself every time you “could have” prevented something that no one should expect you to have prevented.
Depending on what you mean by “blame”, I would either disagree with this statement, or I would say that heroic responsibility would disapprove of you blaming yourself too. By heroic responsibility, you don’t have time to feel sorry for yourself that you failed to prevent something, regardless of how realistically you could have.
It is impossible to fulfill the requirements of heroic responsibility.
Where do you get the idea of “requirements” from? When a shepherd is considered responsible for his flock, is he not responsible for every sheep? And if we learn that wolves will surely eat a dozen over the coming year, does that make him any less responsible for any one of his sheep? IMO no: he should try just as hard to save the third sheep as the fifth, even if that means leaving the third to die when it is wounded so that sheep four through ten don’t get eaten because the flock would have been traveling more slowly.
It is a basic fact of utilitarianism that you can’t score a perfect win. Even discounting the parts of the universe which are legitimately out of your control, you will screw up sometimes as a point of statistical fact. But that does not make the utilons you could not harvest any less valuable than the ones you could have. Heroic responsibility is the emotional equivalent of this fact.
What you need is the serenity to accept the things you cannot change, the courage to change the things you can, and the wisdom to know the difference.
That sounds wise, but is it actually true? Do you actually need that serenity/acceptance part? To keep it heroically themed, I think you’re better off with courage, wisdom, and power.
That sounds wise, but is it actually true? Do you actually need that serenity/acceptance part?
Yes, I do. Most other humans do, too, and it’s a sufficiently difficult and easily neglected skill that it is well worth preserving as ‘wisdom’.
Non-human intelligences will likely not have ‘serenity’ or ‘acceptance’, but they will need some analogous form of the generalised trait of not wasting excessive amounts of computational resources exploring parts of the solution space that have insufficient probability of significant improvement.
In that case, I’m confused about what serenity/acceptance entails, why you seem to believe heroic responsibility to be incongruent with it, and why it doesn’t just fall under “courage” and “wisdom” (as the emotional fortitude to withstand the inevitable imperfection/partial failure and accurate beliefs respectively). Not wasting (computational) resources on efforts with low expected utility is part of your responsibility to maximise utility, and I don’t see a reason to have a difference between things I “can’t change” and things I might be able to change but which are simply suboptimal.
I’m confused about what serenity/acceptance entails
A human psychological experience and tool that can approximately be described by referring to allocating attention and resources efficiently in the face of some adverse and difficult to influence circumstance.
why you seem to believe heroic responsibility to be incongruent with it
I don’t. I suspect you are confusing me with someone else.
Not wasting (computational) resources on efforts with low expected utility is part of your responsibility to maximise utility,
Yes. Yet for some reason, merely seeing an equation and believing it must be maximised is an insufficient guide to optimally managing the human machinery we inhabit. We have to learn other things, including things which can be derived from the equation, in detail and practice them repetitively.
and I don’t see a reason to have a difference between things I “can’t change” and things I might be able to change but which are simply suboptimal.
The Virtue of Narrowness may help you. I have different names for “DDR RAM” and “a replacement battery for my Sony Z2 Android” even though I can see how they both relate to computers.
Right, I thought you were RobinZ. By the context, it sounds like he does consider serenity incongruous with heroic responsibility:
There are no rational limits to heroic responsibility. It is impossible to fulfill the requirements of heroic responsibility. What you need is the serenity to accept the things you cannot change, the courage to change the things you can, and the wisdom to know the difference.
With my (rhetorical) question, I expressed doubt towards his interpretation of the phrase, not (necessarily) all reasonable interpretations of it.
and I don’t see a reason to have a difference between things I “can’t change” and things I might be able to change but which are simply suboptimal.
The Virtue of Narrowness may help you. I have different names for “DDR RAM” and “a replacement battery for my Sony Z2 Android” even though I can see how they both relate to computers.
For me at least, saying something “can’t be changed” roughly means modelling it as P(change)=0. This may be fine as a local heuristic when there are significantly larger expected utilities on the line to work with, but without such a comparison it seems inappropriate, and I would blame it for certain error modes, like ignoring theories because they were labeled impossible at some point.
To approach it another way, I would be fine with just adding adjectives to “extremely ridiculously [...] absurdly unfathomably unlikely” to satisfy the requirements of narrowness, rather than just saying something can’t be done.
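The difference between “P(change)=0” and “extremely unlikely” can be made concrete with a toy expected-utility calculation. This is a minimal sketch with made-up numbers (none of them come from the thread), just to show the error mode being described:

```python
# All numbers here are hypothetical, purely for illustration.
# Rounding a small probability down to zero ("can't be changed")
# can discard expected value that dominates the decision when the
# payoff is large enough.

p_change = 1e-6        # "extremely unlikely", but not literally zero
payoff = 10_000_000    # utility gained if the change succeeds
cost = 1               # utility spent attempting the change

ev_attempt = p_change * payoff - cost  # model keeps the small probability
ev_impossible = 0 * payoff - cost      # model that rounds P(change) to 0

print(ev_attempt)     # 9.0 -> positive: worth attempting
print(ev_impossible)  # -1  -> the P=0 model forbids ever attempting it
```

The two models agree almost everywhere; they only diverge when the stakes are large enough that even a tiny probability carries decision-relevant expected utility, which is exactly the case the “impossible” label silently throws away.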
A human psychological experience and tool that can approximately be described by referring to allocating attention and resources efficiently in the face of some adverse and difficult to influence circumstance.
I would call this “level-headedness”. By my intuition, serenity is a specific calm emotional state, which is not required to make good decisions, though it may help. My dataset luckily isn’t large, but I have been able to get by on “numb” pretty well in the few relevant cases.
Right, I thought you were RobinZ. By the context, it sounds like he does consider serenity incongruous with heroic responsibility:
I agree. I downvoted RobinZ’s comment and ignored it because the confusion about what heroic responsibility means was too fundamental and annoyingly difficult to correct, and the point had incidentally already been argued far more eloquently elsewhere in the thread. In contrast, I fundamentally agree with most of what you have said in this thread, so the disagreement on one conclusion regarding a principle of rationality and psychology is potentially more interesting.
With my (rhetorical) question, I expressed doubt towards his interpretation of the phrase, not (necessarily) all reasonable interpretations of it.
I agree with your rejection of the whole paragraph. My objection seems to be directed at the confusion about heroic (and arguably mundane) responsibility rather than at the serenity/wisdom heuristic.
For me at least, saying something “can’t be changed” roughly means modelling something as P(change)=0.
I can empathize with being uncomfortable with colloquial expressions which deviate from literal meaning. I can also see some value in making a stand against that kind of misuse due to the way such framing can influence our thinking. Overconfident or premature ruling out of possibilities is something humans tend to be biased towards.
I would call this “level-headedness”.
Whatever you call it, it sounds like you have the necessary heuristics in place to avoid the failure modes the wisdom quote is used to prevent (over-responsibility and pointless worry loops).
By my intuition, serenity is a specific calm emotional state, which is not required to make good decisions, though it may help.
The phrasing “The X to” intuitively brings to my mind a relative state rather than an absolute one. That is, while getting to some Zen endpoint state of inner peace or tranquillity is not needed, there are often times when moving towards that state to a sufficient degree will allow much more effective action. I.e. it translates to “whatever minimum amount of acceptance of reality and calmness is needed to allow me to correctly account for opportunity costs and decide according to the bigger picture”.
My dataset luckily isn’t large, but I have been able to get by on “numb” pretty well in the few relevant cases.
That can work. If used too much, it sometimes seems to correlate with developing pesky emotional associations (like ‘Ugh fields’) with related stimuli, but that obviously depends on which emotional and cognitive processes result in the ‘numbness’ and so forth.
I downvoted RobinZ’s comment and ignored it because the confusion about what heroic responsibility means was too fundamental, annoyingly difficult to correct and had incidentally already been argued for far more eloquently elsewhere in the thread.
I would rather you tell me that I am misunderstanding something than downvote silently. My prior probability distribution over reasons for the −1 had “I disagreed with Eliezer Yudkowsky and he has rabid fans” orders of magnitude more likely than “I made a category error reading the fanfic and now we’re talking past each other”, and a few words from you could have reversed that ratio.
I would rather you tell me that I am misunderstanding something than downvote silently.
Thank you for your feedback. I usually ration my explicit disagreement with people on the internet, but your replies prompt me to add “RobinZ” to the list of people worth actively engaging with.
...huh. I’m glad to have been of service, but that’s not really what I was going for. I meant that silent downvoting for the kind of confusion you diagnosed in me is counterproductive generally—“You keep using that word. I do not think it means what you think it means” is not a hypothesis that springs naturally to mind. The same downvote paired with a comment saying:
This is a waste of time. You keep claiming that “heroic responsibility” says this or “heroic responsibility” demands that, but you’re fundamentally mistaken about what heroic responsibility is and you can’t seem to understand anything we say to correct you. I’m downvoting the rest of this conversation.
...would have been more like what I wanted to encourage.
I meant that silent downvoting for the kind of confusion you diagnosed in me is counterproductive generally
I fundamentally disagree. It is better for misleading comments to have lower votes than insightful ones. This helps limit the epistemic damage caused to third parties. Replying to every incorrect claim with detailed arguments is not viable, and not my responsibility, either heroic or conventional—even though my comment history suggests that for a few years I made a valiant effort.
Silent downvoting is often the most time-efficient form of positive influence available, and I endorse it as appropriate, productive, and typically wiser than trying to argue all the time.
I didn’t propose that you should engage in detailed arguments with anyone—not even me. I proposed that you should accompany some downvotes with an explanation akin to the three-sentence example I gave.
Another example of a sufficiently-elaborate downvote explanation: “I downvoted your reply because it mischaracterized my position more egregiously than any responsible person should.” One sentence, long enough, no further argument required.
I retract my previous statement in light of new evidence.
I continue to endorse being selective in whom one spends time arguing with.