In short, I don’t see any “philosophical” points to discuss here, just practical ones. I apologize if I’m being too literal and missing something. Please let me know if I am.
All I got from the idea of heroic responsibility is, “Delegating responsibility to authorities is a heuristic. Heuristics sacrifice accuracy for speed, and will thus sometimes be inaccurate. People tend to apply this heuristic way too much in real life without thinking about whether or not doing so makes sense.”
Concrete questions:
How should a nurse act if she were optimizing for society’s aggregate happiness?
How should a nurse act if she were optimizing for her own happiness?
How should a nurse act if she were optimizing for a blend of the two?
Wrong question:
How should a nurse act if she were optimizing for “heroic responsibility”?
Take the Standard Definitional Dispute, for example, about the tree falling in a deserted forest. Is there any way-the-world-could-be—any state of affairs—that corresponds to the word “sound” really meaning only acoustic vibrations, or really meaning only auditory experiences?
Is there any way-the-world-could-be that corresponds to the phrase “heroic responsibility” really meaning something?
Why does the mind formulate questions of responsibility?
I don’t know, but I have a hypothesis. My hypothesis is social pressure/conditioning. Personally, even though I think it’s a Wrong Question, I nevertheless feel like it’s a real thing (to some extent).
I feel some sort of drive toward “heroic responsibility” and away from “heroic irresponsibility”. Even when I dissolve the question, there’s still a force at play: my emotions telling me, “Bad Adam. I don’t care if you dissolve the question. Heroic Responsibility is Good and Heroic Irresponsibility is Bad.”
Being a well-functioning gear.
There is a downside to “going against the grain”. The machine seems to be somewhat fragile, and a deviant gear is likely to screw stuff up.
There’s also the obvious upside of possibly making the machine more efficient.
Whether or not it’s worth it to go against the grain depends on the cost-benefit.
What’s different between the nurse who should leave in order to take meta-level responsibility, and the nurse who should stay because she’s needed as a gear?
1) What they optimize for, and 2) EV of trying.
The machine does need fixing, but I would argue that from within the machine, as one of its parts, taking heroic responsibility for your own sphere of control isn’t the way to go about fixing the system.
My impression is that that’s true for the majority of people in the majority of situations, because they’re not smart enough for the EV of them going against the grain to be worth it.
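The EV framing here can be made concrete with a toy calculation. This is only a sketch: the function name and every number below are invented for illustration, not taken from the discussion.

```python
def expected_value(p_success, gain_if_success, loss_if_failure):
    """EV of attempting a deviation that succeeds with probability p_success."""
    return p_success * gain_if_success + (1 - p_success) * loss_if_failure

# Hypothetical numbers: a 10% chance of a big improvement to the system,
# versus a modest cost when the deviant gear "screws stuff up".
ev_deviate = expected_value(p_success=0.1, gain_if_success=100.0, loss_if_failure=-5.0)
ev_conform = 0.0  # baseline: keep functioning as a reliable gear

# 0.1 * 100 + 0.9 * (-5) = 5.5, so under these made-up numbers deviating wins;
# shrink p_success (or the upside) and the conclusion flips.
print(ev_deviate, ev_deviate > ev_conform)
```

The point of the sketch is just that the answer is driven by the inputs: for most people in most situations, the plausible p_success is low enough that conforming dominates.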
As for whether or not it’s worth it for you:
1) It depends on what your goals are. Largely the personal happiness vs. altruism question: what is the blend you’re optimizing for?
2) My impression: situations like the one you describe in Something Impossible seem unlikely to be worth deviating in. However, my impression is that if you put your mind to it and took a more intermediate-to-long-term approach to pursuing change, you could do incredible things.