True story: when I first heard the phrase ‘heroic responsibility’, it took me about five seconds and the question, “On TV Tropes, what definition fits this title?” to generate every detail of EY’s definition save one. That detail was that this was supposed to be a good idea. As you point out—and eli-sennesh points out, and the trope that most closely resembles the concept points out—‘heroic responsibility’ assumes that everyone other than the heroes cannot be trusted to do their jobs. And, as you point out, that’s a recipe for everyone getting in everyone else’s way and burning out within a year. And, as you point out, you don’t actually know the doctor’s job better than the doctors do.
Responsibility — ethical obligation — is boundless and universal. All are responsible for all. No one is exempt.
Now, if that were all we had to say or all that we could know, we would likely be paralyzed, overwhelmed by an amorphous, undifferentiated ocean of need. We would be unable to respond effectively, specifically or appropriately to any particular dilemma. And we would come to feel powerless and incapable, thus becoming less likely to even try.
But that’s not all that we can know or all that we have to say.
We are all responsible, but we are not all responsible in the same way. We each and all have roles to play, but we do not all have the same role to play, and we do not each play the same role all the time.
Relationship, proximity, office, ability, means, calling and many other factors all shape our particular individual and differentiated responsibilities in any given case. In every given case. Circumstance and pure chance also play a role, sometimes a very large role, as when you alone are walking by the pond where the drowning stranger calls for help, or when you alone are walking on the road to Jericho when you encounter the stranger who has fallen among thieves.
Different circumstances and different relationships and different proximities entail different responsibilities, but no matter what those differences may be, all are always responsible. Sometimes we may be responsible to act or to give, to lift or to carry directly. Sometimes indirectly. Sometimes our responsibility may be extremely indirect — helping to create the context for the proper functioning of those institutions that, in turn, create the context that allows those most directly and immediately responsible to respond effectively. (Sometimes our indirect responsibility involves giving what we can to the Red Cross or other such organizations to help the victims of a disaster.)
The idea of heroic responsibility suggests that you should make an extraordinary effort to coerce the doctor into re-examining diagnoses whenever you think an error has been made. Bearing in mind that I have no relevant expertise, the idea of subsidiarity suggests to me that you, being in a better position to monitor a patient’s symptoms than the doctor, should have the power to set wheels in motion when those symptoms do not fit the diagnosis … which suggests a number of approaches to the situation, such as asking the doctor, “Can you give me more information on what I should expect to see or not see based on this diagnosis?”
(My first thought regarding your anecdote was that the medical records should automatically include Bayesian probability data on symptoms to help nurses recognize when the diagnosis doesn’t fit, but this article about the misdiagnosis of Ebola suggests revising the system to make it more likely that doctors will see the nurses’ observations that would let them catch a misdiagnosis. You’re in a better position to examine the policy question than I am.)
I have to admit, I haven’t been following the website for a long while—these days, I don’t get a lot of value out of it—so the point I’m relaying from Fred Clark might be what a lot of people already see as the meaning of the concept. But I think that it is valuable to emphasize that responsibility is shared, and sometimes the best thing you can do is help other people do the job. And that’s not what Harry Potter-Evans-Verres does in the fanfic.
As you point out—and eli-sennesh points out, and the trope that most closely resembles the concept points out—‘heroic responsibility’ assumes that everyone other than the heroes cannot be trusted to do their jobs.
This would only be true if the hero had infinite resources and were actually able to redo everyone’s work. In practice, deciding how your resources should be allocated requires a reasonably accurate estimate of how likely everyone is to do their job well. Swimmer963 shouldn’t insist on farming her own wheat for her bread (as she would if she didn’t trust the supply chain), not because she doesn’t have a (heroic) responsibility to make sure she stays alive to help patients, but because that very responsibility means she shouldn’t waste her time and effort on unfounded paranoia to the detriment of everyone.
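The allocation point can be made concrete with a toy expected-value comparison. This is a sketch only; every probability and payoff below is invented for illustration:

```python
# Toy model: should the hero redo someone else's work, or trust them
# and spend the effort elsewhere? All numbers are invented.

def compare(p_other_succeeds, value_of_task, cost_of_redoing, value_elsewhere):
    """Return (expected value of redoing the task yourself,
    expected value of trusting the other person)."""
    redo = value_of_task - cost_of_redoing
    trust = p_other_succeeds * value_of_task + value_elsewhere
    return redo, trust

# Farming your own wheat: the supply chain almost always works,
# and your time is worth far more at the hospital.
redo, trust = compare(p_other_succeeds=0.999, value_of_task=10,
                      cost_of_redoing=9, value_elsewhere=50)
assert trust > redo  # trusting the baker wins easily

# A diagnosis with a real chance of error, where checking is cheap:
redo, trust = compare(p_other_succeeds=0.9, value_of_task=100,
                      cost_of_redoing=2, value_elsewhere=1)
assert redo > trust  # here, double-checking is the responsible use of effort
```

The point is not the particular numbers; it is that heroic responsibility cashes out as this comparison, not as redoing everyone’s work unconditionally.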
The main thing about heroic responsibility is that you don’t say “you should have gotten it right”. Instead you can only say “I was wrong to trust you this much”: it’s your failure, and whether it’s a failure of the person you trusted really doesn’t matter for the ethics of the thing.
My referent for ‘heroic responsibility’ was HPMoR, in which Harry doesn’t trust anyone to do a competent job—not even someone like McGonagall, whose intelligence, rationality, and good intentions he had firsthand knowledge of on literally their second meeting. I don’t know the full context, but unless McGonagall had her brain surgically removed sometime between Chapter 6 and Chapter 75, he could actually tell her everything that he knew that gave him reason to be concerned about the continued good behavior of the bullies in question, and then tell her if those bullies attempted to evade her supervision. And, in the real world, that would be a perfect example of comparative advantage and opportunity cost in action: Harry is a lot better at high-stakes social and magical shenanigans relative to student discipline than McGonagall is, so for her to expend her resources on the latter while he expends his on the former would produce a better overall outcome by simple economics. (Not to mention that Harry should face far worse consequences if he screws up than McGonagall would—even if he has his status as Savior of the Wizarding World to protect him.) (Also, leaving aside whether his plans would actually work.)
I am advocating for people to take the initiative when they can do good without permission. Others in the thread have given good examples of this. But you can’t solve all the problems you touch, and you’ll drive yourself crazy if you blame yourself every time you “could have” prevented something that no one should have expected you to prevent. There are no rational limits to heroic responsibility. It is impossible to fulfill the requirements of heroic responsibility. What you need is the serenity to accept the things you cannot change, the courage to change the things you can, and the wisdom to know the difference.
Did we read the same story? Harry has lots of evidence that McGonagall isn’t in fact trustworthy and in large-part it’s because she doesn’t fully accept heroic responsibility and is too willing to uncritically delegate responsibility to others.
I also vaguely remember your point being addressed in HPMoR. I certainly wouldn’t guess that Harry wouldn’t understand that “there are no rational limits to heroic responsibility”. It certainly matters for doing the most good as a creature that can’t psychologically handle unlimited responsibility.
Full disclosure: I stopped reading HPMoR in the middle of Chapter 53. When I was researching my comment, I looked at the immediate context of the initial definition of “heroic responsibility” and reviewed Harry’s rationality test of McGonagall in Chapter 6.
I would have given Harry a three-step plan: inform McGonagall, monitor situation, escalate if not resolved. Based on McGonagall’s characterization in the part of the story I read, barring some drastic idiot-balling since I quit, she’s willing to take Harry seriously enough to act based on the information he provides; unless the bullies are somehow so devious as to be capable of evading both Harry and McGonagall’s surveillance—and note that, with McGonagall taking point, they wouldn’t know that they need to hide from Harry—this plan would have a reasonable chance of working with much less effort from Harry (and much less probability of misfiring) than any finger-snapping shenanigans. Not to mention that, if Harry read the situation wrong, this would give him a chance to be set straight. Not to mention that, if McGonagall makes a serious effort to crack down on bullying, the effect is likely to persist for far longer than Harry’s term.
On the subject of psychology: really, what made me so emphatic in my denouncing “heroic responsibility” was [edit: my awareness of] the large percentage of adults (~10-18%) subject to anxiety disorders of one kind or another—including me. One of the most difficult problems for such people is how to restrain their instinct to blame themselves—how to avoid blaming themselves for events out of their control. When Harry says, “whatever happens, no matter what, it’s always your fault” to such persons, he is saying, “blame yourself for everything” … and that makes his suggestion completely useless to guide their behavior.
I would have given Harry a three-step plan: inform McGonagall, monitor situation, escalate if not resolved.
Your three-step plan seems much more effective than Harry’s shenanigans and also serves as an excellent example of heroic responsibility. Normal ‘responsibility’ in that situation is to do nothing or at most take step one.
Heroic responsibility doesn’t mean do it yourself through personal power and awesomeness. It means using whatever resources are available to cause the desired thing to occur (unless the cost of doing so is deemed too high relative to the benefit). Institutions, norms and powerful people are valuable resources.
I’m realizing that my attitude towards heroic responsibility is heavily driven by the anxiety-disorder perspective, but telling me that I am responsible for x doesn’t tell me that I am allowed to delegate x to someone else, and—especially in contexts like Harry’s decision (and Swimmer’s decision in the OP)—doesn’t tell me whether “those nominally responsible can’t do x” or “those nominally responsible don’t know that they should do x”. Harry’s idea of heroic responsibility led him to conflate these states of affairs re: McGonagall, and the point of advice is to make people do better, not to win philosophy arguments.
When I came up with the three-step plan I gave to you, I did not do so by asking, “what would be the best way to stop this bullying?” I did so by asking myself, “if McGonagall is the person best placed to stop bullying, but official school action might only drive bullying underground without stopping it, what should I do?” I asked myself this because subsidiarity includes something that heroic responsibility does not: the idea that some people are more responsible—better placed, better trained, better equipped, etc.—than others for any given problem, and that, unless the primary responsibility-holder cannot do the job, those farther away should give support instead of acting on their own.
(Actually, thinking about localism suggested a modification to my Step 1: brief the prefects on the situation in addition to briefing McGonagall. That said, I don’t know if that would be a good idea in this case—again, I stopped reading twenty chapters before.)
I asked myself this because subsidiarity includes something that heroic responsibility does not: the idea that some people are more responsible—better placed, better trained, better equipped, etc.—than others for any given problem, and that, unless the primary responsibility-holder cannot do the job, those farther away should give support instead of acting on their own.
I agree with all of this except the part where you say that heroic responsibility does not include this. As wedrifid noted in the grandparent of this comment, heroic responsibility means using the resources available in order to achieve the desired result. In the context of HPMoR, Harry is responding to this remark by Hermione:
“I would’ve done the responsible thing and told Professor McGonagall and let her take care of it,” Hermione said promptly.
Again, as wedrifid noted above, this is step one and only step one. Taking that step alone, however, is not heroic responsibility. I agree that Harry’s method of dealing with the situation was far from optimal; however, his general point I agree with completely. Here is his response:
“You could call it heroic responsibility, maybe,” Harry Potter said. “Not like the usual sort. It means that whatever happens, no matter what, it’s always your fault. Even if you tell Professor McGonagall, she’s not responsible for what happens, you are. Following the school rules isn’t an excuse, someone else being in charge isn’t an excuse, even trying your best isn’t an excuse. There just aren’t any excuses, you’ve got to get the job done no matter what.”
Notice that nowhere in this definition is the notion of running to an authority figure precluded! Harry himself didn’t consider it because he’s used to occupying the mindset that “adults are useless”. But if we ignore what Harry actually did and just look at what he said, I’m not seeing anything here that disagrees with anything you said. Perhaps I’m missing something. If so, could you elaborate?
Neither Hermione nor Harry dispute that they have a responsibility to protect the victims of bullying. There may be people who would have denied that, but none of them are involved in the conversation. What they are arguing over is what their responsibility requires of them, not the existence of a responsibility. In other words, they are arguing over what to do.
Human beings are not perfect Bayesian calculators. When you present a human being with criteria for success, they do not proceed to optimize perfectly over the universe of all possible strategies. The task “write a poem” is less constrained than the task “write an Elizabethan sonnet”, and in all likelihood the best poem is not an Elizabethan sonnet, but that doesn’t mean that you will get a better poem out of a sixth-grader by asking for any poem than by giving them something to work with. The passage from Zen and the Art of Motorcycle Maintenance Eliezer Yudkowsky quoted back during the Overcoming Bias days, “Original Seeing”, gave an example of this: the student couldn’t think of anything to say in a five-hundred word essay about the United States, Bozeman, or the main street of Bozeman, but produced a five-thousand word essay about the front facade of the Opera House. Therefore, when I evaluate “heroic responsibility”, I do not evaluate it as a proposition which is either true or false, but as a meme which either produces superior or inferior results—I judge it by instrumental, not epistemic, standards.
Looking at the example in the fanfic and the example in the OP, as a means to inspire superior strategic behavior, it sucks. It tells people to work harder, not smarter. It tells people to fix things, but it doesn’t tell them how to fix things—and if you tell a human being (as opposed to a perfect Bayesian calculator) to fix something, it sounds like you’re telling them to fix it themselves because that is what it sounds like from a literary perspective. “You’ve got to get the job done no matter what” is not what the hero says when they want people to vote in the next school board election—it’s what the hero says when they want people to run for the school board in the next election, or to protest for fifteen days straight outside the meeting place of the school board to pressure them into changing their behavior, or something else on that level of commitment. And if you want people to make optimal decisions, you need to give them better guidance than that to allocating their resources.
That’s the part I’m not getting. All Harry is saying is that you should consider yourself responsible for the actions you take, and that delegating that responsibility to someone else isn’t a good idea. Delegating responsibility, however, is not the same as delegating tasks. Delegating a particular task to someone else might well be the correct action in some contexts, but you’re not supposed to use that as an excuse to say, “Because I delegated the task of handling this situation to someone else, I am no longer responsible for the outcome of this situation.” This advice doesn’t tell people how to fix things, true, but that’s not the point—it tells people how to get into the right mindset to fix things. In other words, it’s not object-level advice; it’s meta-level advice, and obviously if you treat it as the former instead of the latter you’re going to come to the conclusion that it sucks.
Sometimes, to solve a problem, you have to work harder. Other times, you have to work smarter. Sometimes, you have to do both. “Heroic responsibility” isn’t saying anything that contradicts that. In the context of the conversation in HPMoR, I do not agree with either Hermione or Harry; both of them are overlooking a lot of things. But those are object-level considerations. Once you look at the bigger picture—the level on which Harry’s advice about heroic responsibility actually applies—I don’t think you’ll find him saying anything that runs counter to what you’re saying. If anything, I’d say he’s actually agreeing with you!
Humans are not perfectly rational agents—far from it. System 1 often takes precedence over System 2. Sometimes, to get people going, you need to re-frame the situation in a way that makes both systems “get it”. The virtue of “heroic responsibility”, i.e. “no matter what happens, you should consider yourself responsible”, seems like a good way to get that across.
That’s an interesting question. I’ll try to answer it here.
“You could call it heroic responsibility, maybe,” Harry Potter said. “Not like the usual sort. It means that whatever happens, no matter what, it’s always your fault. Even if you tell Professor McGonagall, she’s not responsible for what happens, you are. Following the school rules isn’t an excuse, someone else being in charge isn’t an excuse, even trying your best isn’t an excuse. There just aren’t any excuses, you’ve got to get the job done no matter what.”
This seems to imply that no matter what happens, you should hold yourself responsible in the end. If you take a randomly selected person, which of the following two cases do you think will be more likely to cause that person to think really hard about how to solve a problem?
1. They are told to solve the problem.
2. They are told that they must solve the problem, and if they fail for any reason, it’s their fault.
Personally, I would find the second case far more pressing and far more likely to cause me to actually think, rather than just take the minimum number of steps required of me in order to fulfill the “role” of a problem-solver, and I suspect that this would be true of many other people here as well. Certainly I would imagine it’s true of many effective altruists, for instance. It’s possible I’m committing a typical mind fallacy here, but I don’t think so.
On the other hand, you yourself have said that your attitude toward this whole thing is heavily driven by the fact that you have anxiety disorder, and if that’s the case, then I agree that blaming yourself is entirely the wrong way to go about doing things. That being said, the whole point of having something called “heroic responsibility” is to get people to actually put in some effort as opposed to just playing the role of someone who’s perceived as putting in effort. If you are able to do that without resorting to holding yourself responsible for the outcomes of situations, then by all means continue to do so. However, I would be hesitant to label advice intended to motivate and galvanize as “useless”, especially when using evidence taken from a subset of all people (those with anxiety disorders) to make a general claim (the notion of “heroic responsibility” is useless).
I think I see what you’re getting at. If I understand you rightly, what “heroic responsibility” is intended to affect is the behavior of people such as [trigger warning: child abuse, rape] Mike McQueary during the Penn State child sex abuse scandal, who stumbled upon Sandusky in the act, reported it to his superiors (and, possibly, the police), and failed to take further action when nothing significant came of it. [/trigger warning] McQueary followed the ‘proper’ procedure, but he should not have relied upon it being sufficient to do the job. He had sufficient firsthand evidence to justify much more dramatic action than what he did.
Given that, I can see why you object to my “useless”. But when I consider the case above, I think what McQueary was lacking was the same thing that Hermione was lacking in HPMoR: a sense of when the system might fail.
Most of the time, it’s better to trust the system than it is to trust your ability to outthink the system. The system usually has access to much, much more information than you do; the system usually has people with much, much better training than you have; the system usually has resources that are much, much more abundant than you can draw on. In the vast majority of situations I would expect McQueary or Hermione to encounter—defective equipment, scheduling conflicts, truancy, etc.—I think they would do far worse by taking matters into their own hands than by calling upon the system to handle it. In all likelihood, prior to the events in question, their experiences all supported the idea that the system is sound. So what they needed to know was not that they were somehow more responsible to those in the line of fire than they previously realized, but that in these particular cases they should not trust the system. Both of them had access to enough data to draw that conclusion*, but they did not.
If they had, you would not need to tell them that they had a responsibility. Any decent human being would feel that immediately. What they needed was the sense that the circumstances were extraordinary and awareness of the extraordinary actions that they could take. And if you want to do better than chance at sensing extraordinary circumstances when they really are extraordinary and better than chance at planning extraordinary action that is effective, determination is nice, but preparation and education are a whole lot better.
* The reasons differ: McQueary shouldn’t have trusted it because:
One cannot rely on any organization to act against any of its members unless that member is either low-status or has acted against the preferences of its leadership.
In some situations, one’s perceptions—even speculative, gut-feeling, this-feels-not-right perceptions—produce sufficiently reliable Bayesian evidence to overwhelm the combined force of a strong negative prior on whether an event could happen and the absence of supporting evidence from others in the group that said event could happen.
...while Hermione shouldn’t have trusted it because:
Past students like James Potter got away with much because they were well-regarded.
Present employees like Snape got away with much because they were an established part of the system.
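The second of McQueary’s reasons—that direct perception can be Bayesian evidence strong enough to overwhelm a strong negative prior—can be illustrated with a quick odds-form Bayes calculation. The specific numbers are invented for illustration:

```python
# Posterior odds = prior odds * likelihood ratio (Bayes' rule, odds form).
# Illustrative numbers only.

def posterior(prior, likelihood_ratio):
    """Update a prior probability by a likelihood ratio, via odds."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# A very strong negative prior: say 1 in 10,000.
prior = 1e-4

# Direct eyewitness perception might carry a likelihood ratio of
# thousands to one; here, 10,000:1.
p = posterior(prior, likelihood_ratio=10_000)
print(round(p, 2))  # 0.5: even a 1-in-10,000 prior gets dragged to a coin flip
```

With evidence that strong, “it could never happen here” stops being a defensible posterior, which is exactly why the gut-level perception deserved action.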
Again, you’re right about the advice being poor – in the way you mention – but I also think it’s great advice if you consider its target to be the idea that the consequences are irrelevant if you’ve done the ‘right’ thing. If you’ve done the ‘right’ thing but the consequences are still bad, then you should probably reconsider what you’re doing. When aiming at this target, ‘heroic responsibility’ is just the additional responsibility of considering whether the ‘right’ thing to do is really right (i.e. will really work).
...
And now that I’m thinking about this heroic responsibility idea again, I feel a little more strongly how it’s a trap – it is. Nothing can save you from potential devastation at the loss of something or someone important to you. Simply shouldering responsibility for everything you care about won’t actually help. It’s definitely a practical necessity that groups of people carefully divide and delegate important responsibilities. But even that’s not enough! Nothing’s enough. So we can’t and shouldn’t be content with the responsibilities we’re expected to meet.
I subscribe to the idea that virtue ethics is how humans should generally implement good (ha) consequentialist ethics. But we can’t escape the fact that no amount of Virtue is a complete and perfect means of achieving all our desired ends! We’re responsible for which virtues we hold as much as we are of learning and practicing them.
You are analyzing “heroic responsibility” as a philosophical construct. I am analyzing it as [an ideological mantra]. Considering the story, there’s no reason for Harry to have meant it as the former, given that it is entirely redundant with the pre-existing philosophical construct of consequentialism, and every reason for him to have meant it as the latter, given that it explains why he must act differently than Hermione proposes.
[Note: the phrase “an ideological mantra” appears here because I’m not sure what phrase should appear here. Let me know if what I mean requires elaboration.]
I think you might be over-analyzing the story, which is fine actually, as I’m enjoying doing the same.
I have no evidence that Eliezer considered it so, but I just think Harry was explaining consequentialism to Hermione, without introducing it as a term.
I’m unsure if it’s connected in any obvious way, but to me the quoted conversation between Harry and Hermione is reminiscent of other conversations between the two characters about heroism generally. In that context, it’s obviously a poor ‘ideological mantra’ as it was targeted towards Hermione. Given what I remember of the story, it worked pretty well for her.
I confess, it would make sense to me if Harry was unfamiliar with metaethics and his speech about “heroic responsibility” was an example of him reinventing the idea. If that is the case, it would explain why his presentation is as sloppy as it is.
I’m realizing that my attitude towards heroic responsibility is heavily driven by the anxiety-disorder perspective,
Surprisingly, so is mine, yet we’ve arrived at entirely different philosophical conclusions. Perfectionistic, intelligent idealists with visceral aversions to injustice walk a fine line when it comes to managing anxiety and the potential for either burnout or helpless existential despair. To remain sane and effectively harness my passion and energy I had to learn a few critical lessons:
Over-responsibility is not ‘responsible’. It is right there next to ‘completely negligent’ inside the class ‘irresponsible’.
Trusting that if you do what the proximate social institution suggests you ‘should’ do then it will take care of problems is absurd. Those cursed with either weaker than normal hypocrisy skills or otherwise lacking the privilege to maintain a sheltered existence will quickly become distressed from constant disappointment.
For all that the local social institutions fall drastically short of ideals—and even fall short of what we are supposed to pretend to believe of them—they are still what happens to be present in the universe that is and so are a relevant source of power. Finding ways to get what you want (for yourself or others) by using the system is a highly useful skill.
You do not (necessarily) need to fix the system in order to fix a problem that is important to you. You also don’t (necessarily) need to subvert it.
‘Hermione’ style ‘responsibility’ would be a recipe for insanity if I chose to keep it. I had to abandon it at about the same age she is in the story. It is based on premises that just don’t hold in this universe.
but telling me that I am responsible for x doesn’t tell me that I am allowed to delegate x to someone else
‘Responsibility’ of the kind you can tell others they have is almost always fundamentally different in kind to the ‘responsibility’ word as used in ‘heroic responsibility’. It’s a difference that results in frequent accidental equivocation and miscommunication across inferential distances. This is one rather large problem with ‘heroic responsibility’ as a jargon term. Those who have something to learn about the concept are unlikely to, because ‘responsibility’ comes riddled with normative social power connotations.
, and—especially in contexts like Harry’s decision (and Swimmer’s decision in the OP)—doesn’t tell me whether “those nominally responsible can’t do x” or “those nominally responsible don’t know that they should do x”.
That’s technically true. Heroic responsibility is completely orthogonal to either of those concerns.
I asked myself this because subsidiarity includes something that heroic responsibility does not: the idea that some people are more responsible—better placed, better trained, better equipped, etc.—than others for any given problem, and that, unless the primary responsibility-holder cannot do the job, those farther away should give support instead of acting on their own.
Expected value maximisation isn’t for everyone. Without supplementing it with an awfully well developed epistemology people will sometimes be worse off than with just following whichever list of ‘shoulds’ they have been prescribed.
I may have addressed the bulk of what you’re getting at in another comment; the short form of my reply is, “In the cases which ‘heroic responsibility’ is supposed to address, inaction rarely comes because an individual does not feel responsible, but because they don’t know when the system may fail and don’t know what to do when it might.”
I may have addressed the bulk of what you’re getting at in another comment; the short form of my reply is, “In the cases which ‘heroic responsibility’ is supposed to address, inaction rarely comes because an individual does not feel responsible, but because they don’t know when the system may fail and don’t know what to do when it might.”
Short form reply: That seems false. Perhaps you have a different notion of precisely what heroic responsibility is supposed to address?
As I read HPMoR (and I’ve read all of it), a lot of the reason why Harry specifically distrusts the relevant authority figures is that they are routinely surprised by the various horrible events that happen and seem unwilling to accept responsibility for anything they don’t already expect. McGonagall definitely improves on this point in the story, though.
In the story, the advice Harry gives Hermione seems appropriate. Your example would be much better for anyone inclined to anxiety about satisfying arbitrary constraints (i.e. being responsible for arbitrary outcomes) – and probably for anyone, period, if for no other reason than it’s easier to edit an existing idea than generate an entirely new one.
@wedrifid’s correct that your plan is better than Harry’s in the story, but I think Harry’s point – and it’s one I agree with – is that even having a plan, and following it, doesn’t absolve one – if only in one’s own eyes – of the responsibility of coming up with a better plan, or improvising, or delegating some or all of the plan, if that’s what’s needed to stop kids from being bullied or an evil villain from destroying the world (or whatever).
Another way to consider the conversation in the story is that Hermione initially represents virtue ethics:
“I would’ve done the responsible thing and told Professor McGonagall and let her take care of it,” Hermione said promptly.
Harry counters with a rendition of consequentialist ethics.
If I believed you to be a virtue ethicist, I might say that you must be mindful of your audience when dispensing advice. If I believed you to be a deontologist, I might say that you should tailor your advice to the needs of the listener. Believing you to be a consequentialist, I will say that advice is only good if it produces better outcomes than the alternatives.
Of course, you know this. So why do you argue that Harry’s speech about heroic responsibility is good advice?
No, I haven’t answered my own question. In what way was Harry’s monologue about consequentialist ethics superior to telling Hermione why McGonagall couldn’t be counted upon?
HPJEV isn’t supposed to be a perfect executor of his own advice and statements. I would say that it’s not the concept of heroic responsibility that is at fault, but his own self-serving reasoning, which he applies to justify breaking the rules and doing something cool. In doing so, he fails his heroic responsibility to the more than 100 expected people whose lives he might have saved by spending his time more effectively (by doing research which results in an earlier friendly magitech singularity, and buying his warm fuzzies separately by learning the spell for transfiguring a bunch of kittens or something), and HPJEV would feel appropriately bad about his choices if he came to that realisation.
you’ll drive yourself crazy if you blame yourself every time you “could have” prevented something that no one should expect you to have prevented.
Depending on what you mean by “blame”, I would either disagree with this statement, or I would say that heroic responsibility would disapprove of you blaming yourself too. By heroic responsibility, you don’t have time to feel sorry for yourself that you failed to prevent something, regardless of how realistically you could have.
It is impossible to fulfill the requirements of heroic responsibility.
Where do you get the idea of “requirements” from? When a shepherd is considered responsible for his flock, is he not responsible for every sheep? And if we learn that wolves will surely eat a dozen over the coming year, does that make him any less responsible for any one of his sheep? In my opinion, no: he should try just as hard to save the third sheep as the fifth, even if that means leaving the third to die when it is wounded, so that sheep four through ten don’t get eaten because the flock was traveling more slowly.
It is a basic fact of utilitarianism that you can’t score a perfect win. Even discounting the parts of the universe that are legitimately out of your control, you will screw up sometimes as a point of statistical fact. But that does not make the utilons you could not harvest any less valuable than the ones you could have harvested. Heroic responsibility is the emotional equivalent of this fact.
What you need is the serenity to accept the things you cannot change, the courage to change the things you can, and the wisdom to know the difference.
That sounds wise, but is it actually true? Do you actually need that serenity/acceptance part? To keep it heroically themed, I think you’re better off with courage, wisdom, and power.
That sounds wise, but is it actually true? Do you actually need that serenity/acceptance part?
Yes, I do. Most other humans do, too, and it’s a sufficiently difficult and easily neglected skill that it is well worth preserving as ‘wisdom’.
Non-human intelligences will not likely have ‘serenity’ or ‘acceptance’ but will need some similar form of the generalised trait of not wasting excessive amounts of computational resources exploring parts of solution space that have insufficient probability of significant improvement.
In that case, I’m confused about what serenity/acceptance entails, why you seem to believe heroic responsibility to be incongruent with it, and why it doesn’t just fall under “courage” and “wisdom” (as the emotional fortitude to withstand the inevitable imperfection/partial failure and accurate beliefs respectively). Not wasting (computational) resources on efforts with low expected utility is part of your responsibility to maximise utility, and I don’t see a reason to have a difference between things I “can’t change” and things I might be able to change but which are simply suboptimal.
I’m confused about what serenity/acceptance entails
A human psychological experience and tool that can be approximately described as allocating attention and resources efficiently in the face of some adverse and difficult-to-influence circumstance.
why you seem to believe heroic responsibility to be incongruent with it
I don’t. I suspect you are confusing me with someone else.
Not wasting (computational) resources on efforts with low expected utility is part of your responsibility to maximise utility,
Yes. Yet for some reason merely seeing an equation and believing it must be maximised is an insufficient guide to optimally managing the human machinery we inhabit. We have to learn other things (including things which can be derived from the equation) in detail and practice them repetitively.
and I don’t see a reason to have a difference between things I “can’t change” and things I might be able to change but which are simply suboptimal.
The Virtue of Narrowness may help you. I have different names for “DDR RAM” and “a replacement battery for my Sony Z2 Android” even though I can see how they both relate to computers.
Right, I thought you were RobinZ. By the context, it sounds like he does consider serenity incongruous with heroic responsibility:
There are no rational limits to heroic responsibility. It is impossible to fulfill the requirements of heroic responsibility. What you need is the serenity to accept the things you cannot change, the courage to change the things you can, and the wisdom to know the difference.
With my (rhetorical) question, I expressed doubt towards his interpretation of the phrase, not (necessarily) all reasonable interpretations of it.
and I don’t see a reason to have a difference between things I “can’t change” and things I might be able to change but which are simply suboptimal.
The Virtue of Narrowness may help you. I have different names for “DDR RAM” and “a replacement battery for my Sony Z2 Android” even though I can see how they both relate to computers.
For me at least, saying something “can’t be changed” roughly means modelling something as P(change)=0. This may be fine as a local heuristic when there are significantly larger expected utilities on the line to work with, but without a subject of comparison it seems inappropriate, and I would blame it for certain error modes, like ignoring theories because they have been labeled impossible at some point.
To approach it another way, I would be fine with just adding adjectives to “extremely ridiculously [...] absurdly unfathomably unlikely” to satisfy the requirements of narrowness, rather than just saying something can’t be done.
A human psychological experience and tool that can be approximately described as allocating attention and resources efficiently in the face of some adverse and difficult-to-influence circumstance.
I would call this “level-headedness”. By my intuition, serenity is a specific calm emotional state, which is not required to make good decisions, though it may help. My dataset luckily isn’t large, but I have been able to get by on “numb” pretty well in the few relevant cases.
Right, I thought you were RobinZ. By the context, it sounds like he does consider serenity incongruous with heroic responsibility:
I agree. I downvoted RobinZ’s comment and ignored it because the confusion about what heroic responsibility means was too fundamental, annoyingly difficult to correct and had incidentally already been argued for far more eloquently elsewhere in the thread. In contrast I fundamentally agree with most of what you have said on this thread so the disagreement on one conclusion regarding a principle of rationality and psychology is more potentially interesting.
With my (rhetorical) question, I expressed doubt towards his interpretation of the phrase, not (necessarily) all reasonable interpretations of it.
I agree with your rejection of the whole paragraph. My objection seems to be directed at the confusion about heroic (and arguably mundane) responsibility rather than at the serenity/wisdom heuristic.
For me at least, saying something “can’t be changed” roughly means modelling something as P(change)=0.
I can empathize with being uncomfortable with colloquial expressions which deviate from literal meaning. I can also see some value in making a stand against that kind of misuse due to the way such framing can influence our thinking. Overconfident or premature ruling out of possibilities is something humans tend to be biased towards.
I would call this “level-headedness”.
Whatever you call it it sounds like you have the necessary heuristics in place to avoid the failure modes the wisdom quote is used to prevent. (Avoiding over-responsibility and avoiding pointless worry loops).
By my intuition, serenity is a specific calm emotional state, which is not required to make good decisions, though it may help.
The phrasing “the X to” intuitively brings to my mind a relative state rather than an absolute one. That is, while getting to some Zen endpoint of inner peace or tranquillity is not needed, there are often times when moving towards that state to a sufficient degree will allow much more effective action. It translates to “whatever minimum amount of acceptance of reality and calmness is needed to allow me to correctly account for opportunity costs and decide according to the bigger picture”.
My dataset luckily isn’t large, but I have been able to get by on “numb” pretty well in the few relevant cases.
That can work. If used too much it sometimes seems to correlate with developing pesky emotional associations (like ‘ugh fields’) with related stimuli, but that obviously depends on which emotional cognitive processes result in the ‘numbness’ and so forth.
I downvoted RobinZ’s comment and ignored it because the confusion about what heroic responsibility means was too fundamental, annoyingly difficult to correct and had incidentally already been argued for far more eloquently elsewhere in the thread.
I would rather you tell me that I am misunderstanding something than downvote silently. My prior probability distribution over reasons for the −1 had “I disagreed with Eliezer Yudkowsky and he has rabid fans” orders of magnitude more likely than “I made a category error reading the fanfic and now we’re talking past each other”, and a few words from you could have reversed that ratio.
I would rather you tell me that I am misunderstanding something than downvote silently.
Thank you for your feedback. I usually ration my explicit disagreement with people on the internet, but your replies prompt me to add “RobinZ” to the list of people worth actively engaging with.
...huh. I’m glad to have been of service, but that’s not really what I was going for. I meant that silent downvoting for the kind of confusion you diagnosed in me is counterproductive generally—“You keep using that word. I do not think it means what you think it means” is not a hypothesis that springs naturally to mind. The same downvote paired with a comment saying:
This is a waste of time. You keep claiming that “heroic responsibility” says this or “heroic responsibility” demands that, but you’re fundamentally mistaken about what heroic responsibility is and you can’t seem to understand anything we say to correct you. I’m downvoting the rest of this conversation.
...would have been more like what I wanted to encourage.
I meant that silent downvoting for the kind of confusion you diagnosed in me is counterproductive generally
I fundamentally disagree. It is better for misleading comments to have lower votes than insightful ones; this helps limit the epistemic damage caused to third parties. Replying to every incorrect claim with detailed arguments is not viable, and it is not my responsibility, either heroic or conventional, even though my comment history suggests that for a few years I made a valiant effort.
Silent downvoting is often the most time-efficient form of positive influence available, and I endorse it as appropriate, productive, and typically wiser than trying to argue all the time.
I didn’t propose that you should engage in detailed arguments with anyone—not even me. I proposed that you should accompany some downvotes with an explanation akin to the three-sentence example I gave.
Another example of a sufficiently-elaborate downvote explanation: “I downvoted your reply because it mischaracterized my position more egregiously than any responsible person should.” One sentence, long enough, no further argument required.
the medical records should automatically include Bayesian probability data on symptoms to help nurses recognize when the diagnosis doesn’t fit
Medical expert systems are getting pretty good; I don’t see why you wouldn’t just jump straight to an auto-updated list of the most likely diagnoses (generated by a narrow AI), given the current list of symptoms and test results.
Most patient cases are so easy and common that filling forms for an AI would greatly slow the system down. An AI could be useful when the diagnosis isn’t clear, however. A sufficiently smart AI could pick up the relevant data from the notes, but usually the picture that the diagnostician has in their mind is much more complete than any notes they make.
Note that I’m looking at this from a perspective where implementing theoretically smart systems has usually done nothing but increased my workload.
Most patient cases are so easy and common that filling forms for an AI would greatly slow the system down.
I am assuming you’re not filling out any forms specially for the AI, just that the record-keeping system is computerized and the AI has access to it. In trivial cases the AI won’t have much data (e.g. no fever, normal blood pressure, complains of a runny nose and cough, that’s it) and its diagnoses will be low-credence, but that’s fine; you as a doctor won’t need its assistance in those cases.
The AI would need to understand natural language to be of any use, or else it will miss most of the relevant data. I suppose Watson is pretty close to that, and I have read that it’s being tested in some hospitals. I wonder how this is implemented. I suspect doctors carry a lot more data in their heads than is readily apparent, and much of this data will never make it into their notes and thus into the computerized records.
Taking a history, evaluating its reliability and using the senses to observe the patients are something machines can’t do for quite some time. On top of this, I know roughly hundreds of patients now that I will see time and again, and this helps immensely when judging their most acute presentations. By this I don’t mean I know them as lists of symptoms: I know their personalities too, and how this affects how they tell their stories and how seriously they take their symptoms, from minor complaints to major problems. I could never take the approach of jumping from hospital to hospital now that I’ve experienced this first hand.
The AI would need to understand natural language to be of any use, or else it will miss most of the relevant data. I suppose Watson is pretty close to that, and I have read that it’s being tested in some hospitals. I wonder how this is implemented. I suspect doctors carry a lot more data in their heads than is readily apparent, and much of this data will never make it into their notes and thus into the computerized records.
This is the reason Watson is a game-changer, despite expert prediction systems (using linear regression!) performing at the level of expert humans for ~50 years. Doctors may carry a lot of information in their heads, but I’ve yet to meet a person who can mentally invert matrices of non-trivial size, which helps quite a bit with determining the underlying structure of the data and how best to use it.
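For concreteness, here is a minimal sketch of the kind of linear prediction rule those decades-old systems used; the measurements, weights, and threshold below are invented for illustration, not clinical values. The matrix inversion happens once, offline, when the weights are fit; applying the rule afterwards is just a weighted sum:

```python
# Sketch of a classic linear "expert prediction system": a handful of
# routine measurements combined by fixed, precomputed regression weights.
# All weights and the threshold are invented for illustration only.

WEIGHTS = {
    "age": 0.02,
    "resting_pulse": 0.01,
    "systolic_bp": 0.015,
    "temperature_c": 0.30,
}
INTERCEPT = -14.0  # fit once, offline, via least squares

def risk_score(measurements):
    """Weighted sum of measurements; positive means 'flag for review'."""
    return INTERCEPT + sum(WEIGHTS[k] * v for k, v in measurements.items())

patient = {"age": 45, "resting_pulse": 72, "systolic_bp": 120,
           "temperature_c": 39.0}
score = risk_score(patient)
flagged = score > 0.0
```

The comment’s point survives in the sketch: fitting the weights requires solving the normal equations (the matrix inversion no human does mentally), but once fit, the rule is entirely mechanical, which is exactly how it can quietly match expert judgment.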
Taking a history, evaluating its reliability and using the senses to observe the patients are something machines can’t do for quite some time.
I think machines have several comparative advantages here. An AI with basic conversational functions can take a history, and is better at evaluating some parts of the reliability and worse at others. It can compare with ‘other physicians’ more easily, or check public records, but probably can’t determine whether or not it’s a coherent narrative as easily (“What is Toronto?”). A webcam can measure pulse rate just by looking, and so I suspect it’ll be about as good at detecting deflection and lying as the average doctor. (I don’t remember seeing doctors as being particularly good at lie-detection, but it’s been a while since I’ve read any of the lie-detection literature.)
I could never take the approach of jumping from a hospital to hospital now that I’ve experienced this first hand.
Note that if the AI is sufficiently broadly used (here I’m imagining, say, the NHS in the UK using just one) then everyone will always have access to a doctor that’s known them as long as they’ve been in the system.
despite expert prediction systems (using linear regression!) performing at the level of expert humans for ~50 years.
Is this because using them is incredibly slow or something else?
A webcam can measure pulse rate just by looking, and so I suspect it’ll be about as good at detecting deflection and lying as the average doctor. (I don’t remember seeing doctors as being particularly good at lie-detection, but it’s been a while since I’ve read any of the lie-detection literature.)
Lies make no sense medically, or make too much sense. Once I’ve spotted a few lies, many of them fit a stereotypical pattern many patients use even if there aren’t any other clues. I don’t need to rely on body language much.
People also misremember things, or have a helpful relative misremember things for them, or home care providers feed them their clueless preliminary diagnoses. People who don’t remember fill in the gap with something they think is plausible. Some people are also psychotic, or don’t even remember what year it is or why they came in the first place. Some people treat every little ache like it’s the end of the world, and some don’t seem to care if their leg’s missing.
I think even an independent AI could make up for many of its faults simply by being more accurate at interpreting the records and current test results.
I hope that when an AI can do my job I don’t need a job anymore :)
Is this because using them is incredibly slow or something else?
My understanding is that the ~4 measurements the system would use as inputs were typically measured by the doctor, and by the time the doctor had collected the data they had simultaneously come up with their own diagnosis. Typing the observations into the computer to get the same level of accuracy (or a few extra percentage points) rarely seemed worth it, and turning the doctor from a diagnostician into a tech was, to put it lightly, not popular with doctors. :P
There are other arguments which would take a long time to go into. One is “but what about X?”, where the linear regression wouldn’t take into account some other variable that the human could take into account, and so the human would want an override option. But, as one might expect, the only way for the regression to outperform the human is for the regression to be right more often than not when the two of them disagree, and humans are unfortunately not very good at determining whether or not the case in front of them is a special case where an override will increase accuracy or a normal case where an override will decrease accuracy. Here’s probably the best place to start if interested in reading more.
A rather limited subset of natural language would suffice, I think; it’s a surmountable problem.
I suspect doctors carry a lot more data in their heads than is readily apparent … I roughly know hundreds of patients now that I will see time and again and this helps immensely when judging their most acute presentations.
All true, which is why I think a well-designed diagnostic AI will work in partnership with a doctor instead of replacing him.
I agree with you, but I fear that makes for a boring conversation :)
The language is already relatively standardized and I suppose you could standardize it more to make it easier for the AI. I suspect any attempt to mold the system for an AI would meet heavy resistance however.
Largely for the same reasons that weather forecasting still involves human meteorologists and the draft in baseball still includes human scouts: a system that integrates both human and automated reasoning produces better outcomes, because human beings can see patterns a lot better than computers can.
I am not saying this narrow AI should be given direct control of IV drips :-/
I am saying that a doctor, when looking at a patient’s chart, should be able to see what the expert system considers to be the most likely diagnoses and then the doctor can accept one, or ignore them all, or order more tests, or do whatever she wants.
A system which automates almost all diagnoses would do that.
No, I don’t think so because even if you rely on an automated diagnosis you still have to treat the patient.
Even assuming that the machine would not be modified to give treatment recommendations, that wouldn’t change the effect I’m concerned about. If the doctor is accustomed to the machine giving the correct diagnosis for every patient, they’ll stop remembering how to diagnose disease and instead remember how to use the machine. It’s called “transactive memory”.
I’m not arguing against a machine with a button on it that says, “Search for conditions matching recorded symptoms”. I’m not arguing against a machine that has automated alerts about certain low-probability risks: if there were a box that noted the conjunction of “from Liberia” and “temperature spiking to 103 Fahrenheit” in Thomas Eric Duncan during his first hospital visit, there’d probably only be one confirmed case of Ebola in the US instead of three, and Duncan might be alive today. But no automated system can be perfectly reliable, and I want doctors who are accustomed to doing the job themselves on the case whenever the system spits out, “No diagnosis found”.
You are using the wrong yardstick. Ain’t no thing is perfectly reliable. What matters is whether an automated system will be more reliable than the alternative—human doctors.
Commercial aviation has a pretty good safety record while relying on autopilots. Are you quite sure that without the autopilot the safety record would be better?
whenever the system spits out, “No diagnosis found”.
And why do you think a doctor will do better in this case?
I was going to say “doctors don’t have the option of not picking the diagnosis”, but that’s actually not true; they just don’t have the option of not picking a treatment. I’ve had plenty of patients who were “symptom X, not yet diagnosed”, where the treatment is basically supportive: “don’t let them die and try to notice if they get worse, while we figure this out.” I suspect that often it never gets figured out; the patient gets better and they go home. (Less so in the ICU, because the stakes are higher and there’s more of an attitude of “do ALL the tests!”)
they just don’t have the option of not picking a treatment.
They do, they call the problem “psychosomatic” and send you to therapy or give you some echinacea “to support your immune system” or prescribe “something homeopathic” or whatever… And in very rare cases especially honest doctors may even admit that they do not have any idea what to do.
Because the cases where the doctor is stumped are not uniformly the cases where the computer is stumped. The computer might be stumped because a programmer made a typo three weeks ago entering the list of symptoms for diphtheria, because a nurse recorded the patient’s hiccups as coughs, because the patient is a professional athlete whose resting pulse should be three standard deviations slower than the mean … a doctor won’t be perfectly reliable either, but like a professional scout who can say, “His college batting average is .400 because there aren’t many good curveball pitchers in the league this year”, a doctor can detect low-prior confounding factors a lot faster than a computer can.
Well, let’s imagine a system which actually is—and that might be a stretch—intelligently designed.
This means it doesn’t say “I diagnose this patient with X”. It says “Here is a list of conditions along with their probabilities”. It also doesn’t say “No diagnosis found”—it says “Here’s a list of conditions along with their probabilities, it’s just that the top 20 conditions all have probabilities between 2% and 6%”.
It also says things like “The best way to make the diagnosis more specific would be to run test A, then test B, and if it came back in this particular range, then test C”.
A doctor might ask it “What about disease Y?” and the expert system will answer “Its probability is such-and-such; it’s not zero because of symptoms Q and P, but it’s not high because test A came back negative and test B showed results in this range. If you want to get more certain with respect to disease Y, use test C.”
And there would probably be a button which says “Explain”; pressing it will show precisely what leads to the probability of disease X being what it is, and the doctor should be able to poke around and ask things like “What happens if we change these coughs to hiccups?”
An intelligently designed expert system often does not replace the specialist—it supports her, allows her to interact with it, ask questions, refine queries, etc.
If you have a patient with multiple nonspecific symptoms who takes a dozen different medications every day, a doctor cannot properly evaluate all the probabilities and interactions in her head. But an expert system can. It works best as a teammate of a human, not as something which just hands her an answer.
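As a toy illustration of the interface described above (a ranked list of conditions with probabilities rather than a single verdict), here is a naive-Bayes fragment; the conditions, priors, and likelihoods are all invented for the example, not medical data:

```python
# Toy naive-Bayes diagnostic ranking: returns conditions with normalized
# posterior probabilities, highest first. All priors and likelihoods are
# invented numbers for illustration.

PRIORS = {"flu": 0.05, "cold": 0.20, "strep": 0.02}
LIKELIHOODS = {  # P(symptom present | condition)
    "flu":   {"fever": 0.90, "cough": 0.80, "sore_throat": 0.50},
    "cold":  {"fever": 0.10, "cough": 0.60, "sore_throat": 0.40},
    "strep": {"fever": 0.70, "cough": 0.20, "sore_throat": 0.95},
}

def ranked_conditions(observed):
    """observed maps symptom -> bool; returns [(condition, probability)]."""
    scores = {}
    for cond, prior in PRIORS.items():
        p = prior
        for symptom, present in observed.items():
            like = LIKELIHOODS[cond][symptom]
            p *= like if present else (1.0 - like)
        scores[cond] = p
    total = sum(scores.values())
    return sorted(((c, p / total) for c, p in scores.items()),
                  key=lambda cp: cp[1], reverse=True)

ranking = ranked_conditions({"fever": True, "cough": True,
                             "sore_throat": False})
```

The “Explain” button the comment asks for is cheap in this design: each symptom contributes a single multiplicative factor per condition, and those factors can be shown to the doctor directly.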
Well, let’s imagine a system which actually is—and that might be a stretch—intelligently designed.
Us? I’m a mechanical engineer. I haven’t even read The Checklist Manifesto. I am manifestly unqualified either to design a user interface or to design a system for automated diagnosis of disease—and, as decades of professional failure have shown, neither of these is a task to be lightly ventured upon by dilettantes. The possible errors are simply too numerous and subtle for me to be assured of avoiding them. Case in point: prior to reading that article about Air France Flight 447, it never occurred to me that automation had allowed some pilots to completely forget how to fly a plane.
The details of automation are much less important to me than the ability of people like Swimmer963 to be a part of the decision-making process. Their position grants them a much better view of what’s going on with one particular patient than a doctor who reads a chart once a day or a computer programmer who writes software intended to read billions of charts over its operational lifespan. The system they are incorporated in should take advantage of that.
In my opinion, what we should be advocating is the concept of ‘subsidiarity’ that Fred Clark blogs about on Slacktivist:
The idea of heroic responsibility suggests that you should make an extraordinary effort to coerce the doctor into re-examining diagnoses whenever you think an error has been made. Bearing in mind that I have no relevant expertise, the idea of subsidiarity suggests to me that you, being in a better position to monitor a patient’s symptoms than the doctor, should have the power to set wheels in motion when those symptoms do not fit the diagnosis … which suggests a number of approaches to the situation, such as asking the doctor, “Can you give me more information on what I should expect to see or not see based on this diagnosis?”
(My first thought regarding your anecdote was that the medical records should automatically include Bayesian probability data on symptoms to help nurses recognize when the diagnosis doesn’t fit, but this article about the misdiagnosis of Ebola suggests revising the system to make it more likely for doctors to see the nurses’ observations that would let them catch a misdiagnosis. You’re in a better position to examine the policy question than I am.)
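That parenthetical suggestion could look something like the following hypothetical sketch: the chart carries P(symptom | recorded diagnosis) figures, and a nurse-facing flag fires when the observed pattern is too improbable under the recorded diagnosis. Every name and number here is invented:

```python
# Hypothetical "does the diagnosis fit?" check for a nursing chart. The
# chart stores P(symptom | recorded diagnosis); if the observed symptom
# pattern is too improbable under that diagnosis, flag it so the nurse
# can set wheels in motion. All probabilities are invented.

SYMPTOM_GIVEN_DIAGNOSIS = {
    "appendicitis": {"abdominal_pain": 0.95, "fever": 0.60, "rash": 0.01},
}

def diagnosis_fits(diagnosis, observed, threshold=0.05):
    """observed maps symptom -> bool; False means 'flag for review'."""
    p = 1.0
    for symptom, present in observed.items():
        likelihood = SYMPTOM_GIVEN_DIAGNOSIS[diagnosis][symptom]
        p *= likelihood if present else (1.0 - likelihood)
    return p >= threshold

# Charted as appendicitis, but no abdominal pain and an unexplained rash:
fits = diagnosis_fits("appendicitis",
                      {"abdominal_pain": False, "fever": True, "rash": True})
```

The design choice matches the subsidiarity point in the surrounding comment: the system doesn’t overrule the doctor, it merely gives the person closest to the patient a principled way to ask for a second look.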
I have to admit, I haven’t been following the website for a long while (these days, I don’t get a lot of value out of it), so what I present as Fred Clark’s position might be what a lot of people already see as the meaning of the concept. But I think it is valuable to emphasize that responsibility is shared, and that sometimes the best thing you can do is help other people do the job. And that’s not what Harry Potter-Evans-Verres does in the fanfic.
This would only be true if the hero had infinite resources and were actually able to redo everyone’s work. In practice, deciding how your resources should be allocated requires a reasonably accurate estimate of how likely everyone else is to do their job well. Swimmer963 shouldn’t insist on farming her own wheat for her bread (as she would if she didn’t trust the supply chain), not because she doesn’t have a (heroic) responsibility to make sure she stays alive to help patients, but because that very responsibility means she shouldn’t waste her time and effort on unfounded paranoia, to the detriment of everyone.
The main thing about heroic responsibility is that you don’t say “you should have gotten it right”. Instead you can only say “I was wrong to trust you this much”: it’s your failure, and whether it’s a failure of the person you trusted really doesn’t matter for the ethics of the thing.
My referent for ‘heroic responsibility’ was HPMoR, in which Harry doesn’t trust anyone to do a competent job—not even someone like McGonagall, whose intelligence, rationality, and good intentions he had firsthand knowledge of on literally their second meeting. I don’t know the full context, but unless McGonagall had her brain surgically removed sometime between Chapter 6 and Chapter 75, he could actually tell her everything that he knew that gave him reason to be concerned about the continued good behavior of the bullies in question, and then tell her if those bullies attempted to evade her supervision. And, in the real world, that would be a perfect example of comparative advantage and opportunity cost in action: Harry is a lot better at high-stakes social and magical shenanigans relative to student discipline than McGonagall is, so for her to expend her resources on the latter while he expends his on the former would produce a better overall outcome by simple economics. (Not to mention that Harry should face far worse consequences if he screws up than McGonagall would—even if he has his status as Savior of the Wizarding World to protect him.) (Also, leaving aside whether his plans would actually work.)
I am advocating for people to take the initiative when they can do good without permission. Others in the thread have given good examples of this. But you can’t solve all the problems you touch, and you’ll drive yourself crazy if you blame yourself every time you “could have” prevented something that no one should expect you to have prevented. There are no rational limits to heroic responsibility. It is impossible to fulfill the requirements of heroic responsibility. What you need is the serenity to accept the things you cannot change, the courage to change the things you can, and the wisdom to know the difference.
Did we read the same story? Harry has lots of evidence that McGonagall isn’t in fact trustworthy, and in large part it’s because she doesn’t fully accept heroic responsibility and is too willing to uncritically delegate responsibility to others.
I also vaguely remember your point being addressed in HPMoR. I certainly wouldn’t guess that Harry fails to understand that “there are no rational limits to heroic responsibility”. That limit certainly matters for doing the most good as a creature that can’t psychologically handle unlimited responsibility.
Full disclosure: I stopped reading HPMoR in the middle of Chapter 53. When I was researching my comment, I looked at the immediate context of the initial definition of “heroic responsibility” and reviewed Harry’s rationality test of McGonagall in Chapter 6.
I would have given Harry a three-step plan: inform McGonagall, monitor situation, escalate if not resolved. Based on McGonagall’s characterization in the part of the story I read, barring some drastic idiot-balling since I quit, she’s willing to take Harry seriously enough to act based on the information he provides; unless the bullies are somehow so devious as to be capable of evading both Harry and McGonagall’s surveillance—and note that, with McGonagall taking point, they wouldn’t know that they need to hide from Harry—this plan would have a reasonable chance of working with much less effort from Harry (and much less probability of misfiring) than any finger-snapping shenanigans. Not to mention that, if Harry read the situation wrong, this would give him a chance to be set straight. Not to mention that, if McGonagall makes a serious effort to crack down on bullying, the effect is likely to persist for far longer than Harry’s term.
On the subject of psychology: really, what made me so emphatic in my denouncing “heroic responsibility” was [edit: my awareness of] the large percentage of adults (~10-18%) subject to anxiety disorders of one kind or another—including me. One of the most difficult problems for such people is how to restrain their instinct to blame themselves—how to avoid blaming themselves for events out of their control. When Harry says, “whatever happens, no matter what, it’s always your fault” to such persons, he is saying, “blame yourself for everything” … and that makes his suggestion completely useless to guide their behavior.
Your three-step plan seems much more effective than Harry’s shenanigans and also serves as an excellent example of heroic responsibility. Normal ‘responsibility’ in that situation is to do nothing, or at most take step one.
Heroic responsibility doesn’t mean do it yourself through personal power and awesomeness. It means using whatever resources are available to cause the desired thing to occur (unless the cost of doing so is deemed too high relative to the benefit). Institutions, norms and powerful people are valuable resources.
I’m realizing that my attitude towards heroic responsibility is heavily driven by the anxiety-disorder perspective, but telling me that I am responsible for x doesn’t tell me that I am allowed to delegate x to someone else, and—especially in contexts like Harry’s decision (and Swimmer’s decision in the OP)—doesn’t tell me whether “those nominally responsible can’t do x” or “those nominally responsible don’t know that they should do x”. Harry’s idea of heroic responsibility led him to conflate these states of affairs re: McGonagall, and the point of advice is to make people do better, not to win philosophy arguments.
When I came up with the three-point plan I gave to you, I did not do so by asking, “what would be the best way to stop this bullying?” I did so by asking myself, “if McGonagall is the person best placed to stop bullying, but official school action might only drive bullying underground without stopping it, what should I do?” I asked myself this because subsidiarity includes something that heroic responsibility does not: the idea that some people are more responsible—better placed, better trained, better equipped, etc.—than others for any given problem, and that, unless the primary responsibility-holder cannot do the job, those farther away should give support instead of acting on their own.
(Actually, thinking about localism suggested a modification to my Step 1: brief the prefects on the situation in addition to briefing McGonagall. That said, I don’t know if that would be a good idea in this case—again, I stopped reading twenty chapters before.)
I agree with all of this except the part where you say that heroic responsibility does not include this. As wedrifid noted in the grandparent of this comment, heroic responsibility means using the resources available in order to achieve the desired result. In the context of HPMoR, Harry is responding to this remark by Hermione:
Again, as wedrifid noted above, this is step one and only step one. Taking that step alone, however, is not heroic responsibility. I agree that Harry’s method of dealing with the situation was far from optimal; however, his general point I agree with completely. Here is his response:
Notice that nowhere in this definition is the notion of running to an authority figure precluded! Harry himself didn’t consider it because he’s used to occupying the mindset that “adults are useless”. But if we ignore what Harry actually did and just look at what he said, I’m not seeing anything here that disagrees with anything you said. Perhaps I’m missing something. If so, could you elaborate?
Neither Hermione nor Harry dispute that they have a responsibility to protect the victims of bullying. There may be people who would have denied that, but none of them are involved in the conversation. What they are arguing over is what their responsibility requires of them, not the existence of a responsibility. In other words, they are arguing over what to do.
Human beings are not perfect Bayesian calculators. When you present a human being with criteria for success, they do not proceed to optimize perfectly over the universe of all possible strategies. The task “write a poem” is less constrained than the task “write an Elizabethan sonnet”, and in all likelihood the best poem is not an Elizabethan sonnet, but that doesn’t mean that you will get a better poem out of a sixth-grader by asking for any poem than by giving them something to work with. The passage from Zen and the Art of Motorcycle Maintenance Eliezer Yudkowsky quoted back during the Overcoming Bias days, “Original Seeing”, gave an example of this: the student couldn’t think of anything to say in a five-hundred word essay about the United States, Bozeman, or the main street of Bozeman, but produced a five-thousand word essay about the front facade of the Opera House. Therefore, when I evaluate “heroic responsibility”, I do not evaluate it as a proposition which is either true or false, but as a meme which produces either superior or inferior results—I judge it by instrumental, not epistemic, standards.
Looking at the example in the fanfic and the example in the OP, as a means to inspire superior strategic behavior, it sucks. It tells people to work harder, not smarter. It tells people to fix things, but it doesn’t tell them how to fix things—and if you tell a human being (as opposed to a perfect Bayesian calculator) to fix something, it sounds like you’re telling them to fix it themselves, because that is what it sounds like from a literary perspective. “You’ve got to get the job done no matter what” is not what the hero says when they want people to vote in the next school board election—it’s what the hero says when they want people to run for the school board in the next election, or to protest for fifteen days straight outside the meeting place of the school board to pressure them into changing their behavior, or something else on that level of commitment. And if you want people to make optimal decisions, you need to give them better guidance than that for allocating their resources.
That’s the part I’m not getting. All Harry is saying is that you should consider yourself responsible for the actions you take, and that delegating that responsibility to someone else isn’t a good idea. Delegating responsibility, however, is not the same as delegating tasks. Delegating a particular task to someone else might well be the correct action in some contexts, but you’re not supposed to use that as an excuse to say, “Because I delegated the task of handling this situation to someone else, I am no longer responsible for the outcome of this situation.” This advice doesn’t tell people how to fix things, true, but that’s not the point—it tells people how to get into the right mindset to fix things. In other words, it’s not object-level advice; it’s meta-level advice, and obviously if you treat it as the former instead of the latter you’re going to come to the conclusion that it sucks.
Sometimes, to solve a problem, you have to work harder. Other times, you have to work smarter. Sometimes, you have to do both. “Heroic responsibility” isn’t saying anything that contradicts that. In the context of the conversation in HPMoR, I do not agree with either Hermione or Harry; both of them are overlooking a lot of things. But those are object-level considerations. Once you look at the bigger picture—the level on which Harry’s advice about heroic responsibility actually applies—I don’t think you’ll find him saying anything that runs counter to what you’re saying. If anything, I’d say he’s actually agreeing with you!
Humans are not perfectly rational agents—far from it. System 1 often takes precedence over System 2. Sometimes, to get people going, you need to re-frame the situation in a way that makes both systems “get it”. The virtue of “heroic responsibility”, i.e. “no matter what happens, you should consider yourself responsible”, seems like a good way to get that across.
s/work harder, not smarter/get more work done, not how to get more work done/
Why do you believe this to be true?
That’s an interesting question. I’ll try to answer it here.
This seems to imply that no matter what happens, you should hold yourself responsible in the end. If you take a randomly selected person, which of the following two cases do you think will be more likely to cause that person to think really hard about how to solve a problem?
They are told to solve the problem.
They are told that they must solve the problem, and if they fail for any reason, it’s their fault.
Personally, I would find the second case far more pressing and far more likely to cause me to actually think, rather than just take the minimum number of steps required of me in order to fulfill the “role” of a problem-solver, and I suspect that this would be true of many other people here as well. Certainly I would imagine it’s true of many effective altruists, for instance. It’s possible I’m committing a typical mind fallacy here, but I don’t think so.
On the other hand, you yourself have said that your attitude toward this whole thing is heavily driven by the fact that you have an anxiety disorder, and if that’s the case, then I agree that blaming yourself is entirely the wrong way to go about doing things. That being said, the whole point of having something called “heroic responsibility” is to get people to actually put in some effort as opposed to just playing the role of someone who’s perceived as putting in effort. If you are able to do that without resorting to holding yourself responsible for the outcomes of situations, then by all means continue to do so. However, I would be hesitant to label advice intended to motivate and galvanize as “useless”, especially when using evidence taken from a subset of all people (those with anxiety disorders) to make a general claim (that the notion of “heroic responsibility” is useless).
I think I see what you’re getting at. If I understand you rightly, what “heroic responsibility” is intended to affect is the behavior of people such as [trigger warning: child abuse, rape] Mike McQueary during the Penn State child sex abuse scandal, who stumbled upon Sandusky in the act, reported it to his superiors (and, possibly, the police), and failed to take further action when nothing significant came of it. [/trigger warning] McQueary followed the ‘proper’ procedure, but he should not have relied upon it being sufficient to do the job. He had sufficient firsthand evidence to justify much more dramatic action than what he did.
Given that, I can see why you object to my “useless”. But when I consider the case above, I think what McQueary was lacking was the same thing that Hermione was lacking in HPMoR: a sense of when the system might fail.
Most of the time, it’s better to trust the system than it is to trust your ability to outthink the system. The system usually has access to much, much more information than you do; the system usually has people with much, much better training than you have; the system usually has resources that are much, much more abundant than you can draw on. In the vast majority of situations I would expect McQueary or Hermione to encounter—defective equipment, scheduling conflicts, truancy, etc.—I think they would do far worse by taking matters into their own hands than by calling upon the system to handle it. In all likelihood, prior to the events in question, their experiences all supported the idea that the system is sound. So what they needed to know was not that they were somehow more responsible to those in the line of fire than they previously realized, but that in these particular cases they should not trust the system. Both of them had access to enough data to draw that conclusion*, but they did not.
If they had, you would not need to tell them that they had a responsibility. Any decent human being would feel that immediately. What they needed was the sense that the circumstances were extraordinary and awareness of the extraordinary actions that they could take. And if you want to do better than chance at sensing extraordinary circumstances when they really are extraordinary and better than chance at planning extraordinary action that is effective, determination is nice, but preparation and education are a whole lot better.
* The reasons differ: McQueary shouldn’t have trusted it because:
One cannot rely on any organization to act against any of its members unless that member is either low-status or has acted against the preferences of its leadership.
In some situations, one’s perceptions—even speculative, gut-feeling, this-feels-not-right perceptions—produce sufficiently reliable Bayesian evidence to overwhelm the combined force of a strong negative prior on whether an event could happen and the absence of supporting evidence from others in the group that said event could happen.
...while Hermione shouldn’t have trusted it because:
Past students like James Potter got away with much because they were well-regarded.
Present employees like Snape got away with much because they were an established part of the system.
All right, cool. I think that dissolves most of our disagreement.
Glad to hear it. :)
Again, you’re right about the advice being poor – in the way you mention – but I also think it’s great advice if you consider its target: the idea that the consequences are irrelevant if you’ve done the ‘right’ thing. If you’ve done the ‘right’ thing but the consequences are still bad, then you should probably reconsider what you’re doing. When aiming at this target, ‘heroic responsibility’ is just the additional responsibility of considering whether the ‘right’ thing to do is really right (i.e. will really work).
...
And now that I’m thinking about this heroic responsibility idea again, I feel a little more strongly how it’s a trap – it is. Nothing can save you from potential devastation at the loss of something or someone important to you. Simply shouldering responsibility for everything you care about won’t actually help. It’s definitely a practical necessity that groups of people carefully divide and delegate important responsibilities. But even that’s not enough! Nothing’s enough. So we can’t and shouldn’t be content with the responsibilities we’re expected to meet.
I subscribe to the idea that virtue ethics is how humans should generally implement good (ha) consequentialist ethics. But we can’t escape the fact that no amount of Virtue is a complete and perfect means of achieving all our desired ends! We’re responsible for which virtues we hold as much as we are for learning and practicing them.
You are analyzing “heroic responsibility” as a philosophical construct. I am analyzing it as [an ideological mantra]. Considering the story, there’s no reason for Harry to have meant it as the former, given that it is entirely redundant with the pre-existing philosophical construct of consequentialism, and every reason for him to have meant it as the latter, given that it explains why he must act differently than Hermione proposes.
[Note: the phrase “an ideological mantra” appears here because I’m not sure what phrase should appear here. Let me know if what I mean requires elaboration.]
I think you might be over-analyzing the story; which is fine actually, as I’m enjoying doing the same.
I have no evidence that Eliezer considered it so, but I just think Harry was explaining consequentialism to Hermione, without introducing it as a term.
I’m unsure if it’s connected in any obvious way, but to me the quoted conversation between Harry and Hermione is reminiscent of other conversations between the two characters about heroism generally. In that context, it’s obviously a poor ‘ideological mantra’ as it was targeted towards Hermione. Given what I remember of the story, it worked pretty well for her.
I confess, it would make sense to me if Harry was unfamiliar with metaethics and his speech about “heroic responsibility” was an example of him reinventing the idea. If that is the case, it would explain why his presentation is as sloppy as it is.
Surprisingly, so is mine, yet we’ve arrived at entirely different philosophical conclusions. Perfectionistic, intelligent idealists with visceral aversions to injustice walk a fine line when it comes to managing anxiety and the potential for either burnout or helpless existential despair. To remain sane and effectively harness my passion and energy I had to learn a few critical lessons:
Over-responsibility is not ‘responsible’. It is right there next to ‘completely negligent’ inside the class ‘irresponsible’.
Trusting that if you do what the proximate social institution suggests you ‘should’ do then it will take care of problems is absurd. Those cursed with either weaker than normal hypocrisy skills or otherwise lacking the privilege to maintain a sheltered existence will quickly become distressed from constant disappointment.
For all that the local social institutions fall drastically short of ideals—and even fall short of what we are supposed to pretend to believe of them—they are still what happens to be present in the universe that is and so are a relevant source of power. Finding ways to get what you want (for yourself or others) by using the system is a highly useful skill.
You do not (necessarily) need to fix the system in order to fix a problem that is important to you. You also don’t (necessarily) need to subvert it.
‘Hermione’ style ‘responsibility’ would be a recipe for insanity if I chose to keep it. I had to abandon it at about the same age she is in the story. It is based on premises that just don’t hold in this universe.
‘Responsibility’ of the kind you can tell others they have is almost always fundamentally different in kind from the ‘responsibility’ word as used in ‘heroic responsibility’. It’s a difference that results in frequent accidental equivocation and miscommunication across inferential distances. This is one rather large problem with ‘heroic responsibility’ as a jargon term. Those who have something to learn about the concept are unlikely to, because ‘responsibility’ comes riddled with normative social power connotations.
That’s technically true. Heroic responsibility is completely orthogonal to either of those concerns.
Expected value maximisation isn’t for everyone. Without supplementing it with an awfully well developed epistemology people will sometimes be worse off than with just following whichever list of ‘shoulds’ they have been prescribed.
I may have addressed the bulk of what you’re getting at in another comment; the short form of my reply is, “In the cases which ‘heroic responsibility’ is supposed to address, inaction rarely comes because an individual does not feel responsible, but because they don’t know when the system may fail and don’t know what to do when it might.”
Short form reply: That seems false. Perhaps you have a different notion of precisely what heroic responsibility is supposed to address?
Is the long form also unclear? If so, could you elaborate on why it doesn’t make sense?
Your mention of anxiety (disorders) reminds me of Yvain’s general point that lots of advice is really terrible for at least some people.
As I read HPMoR (and I’ve read all of it), a lot of the reason why Harry specifically distrusts the relevant authority figures is that they are routinely surprised by the various horrible events that happen and seem unwilling to accept responsibility for anything they don’t already expect. McGonagall definitely improves on this point in the story, though.
In the story, the advice Harry gives Hermione seems appropriate. Your example would be much better for anyone inclined to anxiety about satisfying arbitrary constraints (i.e. being responsible for arbitrary outcomes) – and probably for anyone, period, if for no other reason than it’s easier to edit an existing idea than generate an entirely new one.
@wedrifid’s correct that your plan is better than Harry’s in the story, but I think Harry’s point – and it’s one I agree with – is that even having a plan, and following it, doesn’t absolve one – to oneself, if no one else – of coming up with a better plan, or improvising, or delegating some or all of it, if that’s what’s needed to stop kids from being bullied or an evil villain from destroying the world (or whatever).
Another way to consider the conversation in the story is that Hermione initially represents virtue ethics:
Harry counters with a rendition of consequentialist ethics.
If I believed you to be a virtue ethicist, I might say that you must be mindful of your audience when dispensing advice. If I believed you to be a deontologist, I might say that you should tailor your advice to the needs of the listener. Believing you to be a consequentialist, I will say that advice is only good if it produces better outcomes than the alternatives.
Of course, you know this. So why do you argue that Harry’s speech about heroic responsibility is good advice?
It seems like you’ve already answered your own question!
No, I haven’t answered my own question. In what way was Harry’s monologue about consequentialist ethics superior to telling Hermione why McGonagall couldn’t be counted upon?
HPJEV isn’t supposed to be a perfect executor of his own advice and statements. I would say that it’s not the concept of heroic responsibility is at fault, but his own self-serving reasoning which he applies to justify breaking the rules and doing something cool. In doing so, he fails his heroic responsibility to the over 100 expected people whose lives he might have saved by spending his time more effectively (by doing research which results in an earlier friendly magitech singularity, and buying his warm fuzzies separately by learning the spell for transfiguring a bunch of kittens or something), and HPJEV would feel appropriately bad about his choices if he came to that realisation.
Depending on what you mean by “blame”, I would either disagree with this statement, or I would say that heroic responsibility would disapprove of you blaming yourself too. By heroic responsibility, you don’t have time to feel sorry for yourself that you failed to prevent something, regardless of how realistically you could have.
Where do you get the idea of “requirements” from? When a shepherd is considered responsible for his flock, is he not responsible for every sheep? And if we learn that wolves will surely eat a dozen over the coming year, does that make him any less responsible for any one of his sheep? IMO no: he should try just as hard to save the third sheep as the fifth, even if that means leaving the third to die when it’s wounded so that 4-10 don’t get eaten because they would have been traveling more slowly.
It is a basic fact of utilitarianism that you can’t score a perfect win. Even discounting the universe which is legitimately out of your control, you will screw up sometimes as point of statistical fact. But that does not make the utilons you could not harvest any less valuable than the ones you could have. Heroic responsibility is the emotional equivalent of this fact.
That sounds wise, but is it actually true? Do you actually need that serenity/acceptance part? To keep it heroically themed, I think you’re better off with courage, wisdom, and power.
Yes, I do. Most other humans do too, and it’s a sufficiently difficult and easily neglected skill that it is well worth preserving as ‘wisdom’.
Non-human intelligences will not likely have ‘serenity’ or ‘acceptance’ but will need some similar form of the generalised trait of not wasting excessive amounts of computational resources exploring parts of solution space that have insufficient probability of significant improvement.
In that case, I’m confused about what serenity/acceptance entails, why you seem to believe heroic responsibility to be incongruent with it, and why it doesn’t just fall under “courage” and “wisdom” (as the emotional fortitude to withstand the inevitable imperfection/partial failure and accurate beliefs respectively). Not wasting (computational) resources on efforts with low expected utility is part of your responsibility to maximise utility, and I don’t see a reason to have a difference between things I “can’t change” and things I might be able to change but which are simply suboptimal.
A human psychological experience and tool that can approximately be described by referring to allocating attention and resources efficiently in the face of some adverse and difficult to influence circumstance.
I don’t. I suspect you are confusing me with someone else.
Yes. Yet for some reason merely seeing an equation and believing it must be maximised is an insufficient guide to optimally managing the human machinery we inhabit. We have to learn other things—including things which can be derived from the equation—in detail, and practice them repetitively.
The Virtue of Narrowness may help you. I have different names for “DDR Ram” and “A replacement battery for my Sony Z2 android” even though I can see how they both relate to computers.
Right, I thought you were RobinZ. By the context, it sounds like he does consider serenity incongruous with heroic responsibility:
With my (rhetorical) question, I expressed doubt towards his interpretation of the phrase, not (necessarily) all reasonable interpretations of it.
For me at least, saying something “can’t be changed” roughly means modelling something as P(change)=0. This may be fine as a local heuristic when there are significantly larger expected utilities on the line to work with, but without a subject of comparison it seems inappropriate, and I would blame it for certain error modes, like ignoring theories because they have been labeled impossible at some point.
To approach it another way, I would be fine with just adding adjectives to “extremely ridiculously [...] absurdly unfathomably unlikely” to satisfy the requirements of narrowness, rather than just saying something can’t be done.
I would call this “level-headedness”. By my intuition, serenity is a specific calm emotional state, which is not required to make good decisions, though it may help. My dataset luckily isn’t large, but I have been able to get by on “numb” pretty well in the few relevant cases.
I agree. I downvoted RobinZ’s comment and ignored it because the confusion about what heroic responsibility means was too fundamental, annoyingly difficult to correct and had incidentally already been argued for far more eloquently elsewhere in the thread. In contrast I fundamentally agree with most of what you have said on this thread so the disagreement on one conclusion regarding a principle of rationality and psychology is more potentially interesting.
I agree with your rejection of the whole paragraph. My objection seems to be directed at the confusion about heroic (and arguably mundane) responsibility rather than the serenity wisdom heuristic.
I can empathize with being uncomfortable with colloquial expressions which deviate from literal meaning. I can also see some value in making a stand against that kind of misuse due to the way such framing can influence our thinking. Overconfident or premature ruling out of possibilities is something humans tend to be biased towards.
Whatever you call it, it sounds like you have the necessary heuristics in place to avoid the failure modes the wisdom quote is used to prevent (avoiding over-responsibility and avoiding pointless worry loops).
The phrasing “The X to” intuitively brings to my mind a relative state rather than an absolute one. That is, getting to some Zen endpoint of inner peace or tranquillity is not needed, but there are often times when moving towards that state to a sufficient degree will allow much more effective action. I.e. it translates to “whatever minimum amount of acceptance of reality and calmness is needed to allow me to correctly account for opportunity costs and decide according to the bigger picture”.
That can work. If used too much it sometimes seems to correlate with developing pesky emotional associations (like ‘Ugh fields’) with the related stimulus, but that obviously depends on which emotional cognitive processes produce the ‘numbness’ and so forth.
I would rather you tell me that I am misunderstanding something than downvote silently. My prior probability distribution over reasons for the −1 had “I disagreed with Eliezer Yudkowsky and he has rabid fans” orders of magnitude more likely than “I made a category error reading the fanfic and now we’re talking past each other”, and a few words from you could have reversed that ratio.
Thank you for your feedback. I usually ration my explicit disagreement with people on the internet, but your replies prompt me to add “RobinZ” to the list of people worth actively engaging with.
...huh. I’m glad to have been of service, but that’s not really what I was going for. I meant that silent downvoting for the kind of confusion you diagnosed in me is counterproductive generally—“You keep using that word. I do not think it means what you think it means” is not a hypothesis that springs naturally to mind. The same downvote paired with a comment saying:
...would have been more like what I wanted to encourage.
I fundamentally disagree. It is better for misleading comments to have lower votes than insightful ones. This helps limit the epistemic damage caused to third parties. Replying to every incorrect claim with detailed arguments is not viable, and not my responsibility either heroic or conventional—even though my comment history suggests that for a few years I made a valiant effort.
Silent downvoting is often the most time-efficient form of positive influence available, and I endorse it as appropriate, productive, and typically wiser than trying to argue all the time.
I didn’t propose that you should engage in detailed arguments with anyone—not even me. I proposed that you should accompany some downvotes with an explanation akin to the three-sentence example I gave.
Another example of a sufficiently-elaborate downvote explanation: “I downvoted your reply because it mischaracterized my position more egregiously than any responsible person should.” One sentence, long enough, no further argument required.
I retract my previous statement based on new evidence acquired.
I continue to endorse being selective in whom one spends time arguing with.
Medical expert systems are getting pretty good, I don’t see why you wouldn’t just jump straight to an auto-updated list of most likely diagnoses (generated by a narrow AI) given the current list of symptoms and test results.
Most patient cases are so easy and common that filling out forms for an AI would greatly slow the system down. AI could be useful when the diagnosis isn’t clear, however. A sufficiently smart AI could pick up the relevant data from the notes, but usually the picture that the diagnostician has in their mind is much more complete than any notes they make.
Note that I’m looking at this from a perspective where implementing theoretically smart systems has usually done nothing but increase my workload.
I am assuming you’re not filling out any forms specially for the AI—just that the record-keeping system is computerized and the AI has access to it. In trivial cases the AI won’t have much data (e.g. no fever, normal blood pressure, complains of a runny nose and cough, that’s it) and its diagnoses will be low-credence, but that’s fine; you as a doctor won’t need its assistance in those cases.
The AI would need to know natural language to be of any use, or else it will miss most of the relevant data. I suppose Watson is pretty close to that, and I have read that it’s being tested in some hospitals. I wonder how this is implemented. I suspect doctors carry a lot more data in their heads than is readily apparent, and much of this data will never make it into their notes and thus into the computerized records.
Taking a history, evaluating its reliability and using the senses to observe the patient are things machines won’t be able to do for quite some time. On top of this, I now roughly know hundreds of patients whom I will see time and again, and this helps immensely when judging their most acute presentations. By this I don’t mean I know them as lists of symptoms; I know their personalities too, and how this affects how they tell their stories and how seriously they take their symptoms, from minor complaints to major problems. I could never take the approach of jumping from hospital to hospital now that I’ve experienced this first-hand.
This is the reason Watson is a game-changer, despite expert prediction systems (using linear regression!) performing at the level of expert humans for ~50 years. Doctors may carry a lot of information in their heads, but I’ve yet to meet a person that’s able to mentally invert matrices of non-trivial size, which helps quite a bit with determining the underlying structure of the data and how best to use it.
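To make the “mentally invert matrices” point concrete, here is a toy sketch of the kind of linear-regression predictor described above. All the data is synthetic and the weights are invented purely for illustration; real diagnostic models are fit on actual patient outcomes.

```python
import random

random.seed(0)

# Fully synthetic toy data: 200 "patients", 4 measurements each,
# and a hypothetical set of true weights linking them to an outcome score.
true_w = [1.5, -2.0, 0.5, 0.0]
X = [[random.gauss(0, 1) for _ in range(4)] for _ in range(200)]
y = [sum(w * x for w, x in zip(true_w, row)) + random.gauss(0, 0.1)
     for row in X]

def solve(A, b):
    """Solve A w = b by Gauss-Jordan elimination -- the 'matrix
    inversion' no diagnostician can do in their head."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Least squares via the normal equations: (X^T X) w = X^T y.
n = len(true_w)
XtX = [[sum(row[i] * row[j] for row in X) for j in range(n)] for i in range(n)]
Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(n)]
w_hat = solve(XtX, Xty)
print([round(w, 2) for w in w_hat])  # close to true_w
```

The fitted weights recover the underlying structure of the data almost exactly, which is exactly the kind of pattern extraction humans cannot do mentally at scale.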
I think machines have several comparative advantages here. An AI with basic conversational functions can take a history, and is better at evaluating some parts of the reliability and worse at others. It can compare with ‘other physicians’ more easily, or check public records, but probably can’t determine whether or not it’s a coherent narrative as easily (“What is Toronto?”). A webcam can measure pulse rate just by looking, and so I suspect it’ll be about as good at detecting deflection and lying as the average doctor. (I don’t remember seeing doctors as being particularly good at lie-detection, but it’s been a while since I’ve read any of the lie-detection literature.)
Note that if the AI is sufficiently broadly used (here I’m imagining, say, the NHS in the UK using just one) then everyone will always have access to a doctor that’s known them as long as they’ve been in the system.
Is this because using them is incredibly slow or something else?
Lies make no sense medically, or make too much sense. Once I’ve spotted a few lies, many of them fit a stereotypical pattern many patients use even if there aren’t any other clues. I don’t need to rely on body language much.
People also misremember things, or have a helpful relative misremember things for them, or have a home care provider feed in their own clueless preliminary diagnosis. People who don’t remember fill in the gap with something they think is plausible. Some people are psychotic, or don’t even remember what year it is or why they came in in the first place. Some people treat every little ache like it’s the end of the world, and some don’t seem to care if their leg’s missing.
I think even an independent AI could make up for many of its faults simply by being more accurate at interpreting the records and current test results.
I hope that when an AI can do my job I don’t need a job anymore :)
My understanding is that the ~4 measurements the system would use as inputs were typically measured by the doctor, and by the time the doctor had collected the data they had simultaneously come up with their own diagnosis. Typing the observations into the computer to get the same level of accuracy (or a few extra percentage points) rarely seemed worth it, and turning the doctor from a diagnostician to a tech was, to put it lightly, not popular with doctors. :P
There are other arguments which would take a long time to go into. One is “but what about X?”, where the linear regression wouldn’t take into account some other variable that the human could take into account, and so the human would want an override option. But, as one might expect, the only way for the regression to outperform the human is for the regression to be right more often than not when the two of them disagree, and humans are unfortunately not very good at determining whether or not the case in front of them is a special case where an override will increase accuracy or a normal case where an override will decrease accuracy. Here’s probably the best place to start if interested in reading more.
A rather limited subset of the natural language, I think it’s a surmountable problem.
All true, which is why I think a well-designed diagnostic AI will work in partnership with a doctor instead of replacing him.
I agree with you, but I fear that makes for a boring conversation :)
The language is already relatively standardized and I suppose you could standardize it more to make it easier for the AI. I suspect any attempt to mold the system for an AI would meet heavy resistance however.
Largely for the same reasons that weather forecasting still involves human meteorologists and the draft in baseball still includes human scouts: a system that integrates both human and automated reasoning produces better outcomes, because human beings can see patterns a lot better than computers can.
Also, we would be well-advised to avoid repeating the mistake made by the commercial-aviation industry, which seems to have fostered such extreme dependence on the automated system that many ‘pilots’ don’t know how to fly a plane. A system which automates almost all diagnoses would do that.
I am not saying this narrow AI should be given direct control of IV drips :-/
I am saying that a doctor, when looking at a patient’s chart, should be able to see what the expert system considers to be the most likely diagnoses and then the doctor can accept one, or ignore them all, or order more tests, or do whatever she wants.
No, I don’t think so, because even if you rely on an automated diagnosis you still have to treat the patient.
Even assuming that the machine would not be modified to give treatment recommendations, that wouldn’t change the effect I’m concerned about. If the doctor is accustomed to the machine giving the correct diagnosis for every patient, they’ll stop remembering how to diagnose disease and instead remember how to use the machine. It’s called “transactive memory”.
I’m not arguing against a machine with a button on it that says, “Search for conditions matching recorded symptoms”. I’m not arguing against a machine that has automated alerts about certain low-probability risks—if there was a box that noted the conjunction of “from Liberia” and “temperature spiking to 103 Fahrenheit” in Thomas Eric Duncan during his first hospital visit, there’d probably only be one confirmed case of ebola in the US instead of three, and Duncan might be alive today. But no automated system can be perfectly reliable, and I want doctors who are accustomed to doing the job themselves on the case whenever the system spits out, “No diagnosis found”.
You are using the wrong yardstick. Ain’t no thing is perfectly reliable. What matters is whether an automated system will be more reliable than the alternative—human doctors.
Commercial aviation has a pretty good safety record while relying on autopilots. Are you quite sure that without the autopilot the safety record would be better?
And why do you think a doctor will do better in this case?
I was going to say “doctors don’t have the option of not picking the diagnosis”, but that’s actually not true; they just don’t have the option of not picking a treatment. I’ve had plenty of patients who were “symptom X not yet diagnosed”, where the treatment is basically supportive: “don’t let them die and try to notice if they get worse, while we figure this out.” I suspect that often it never gets figured out; the patient gets better and they go home. (Less so in the ICU, because it’s higher stakes and there’s more of an attitude of “do ALL the tests!”)
They do, they call the problem “psychosomatic” and send you to therapy or give you some echinacea “to support your immune system” or prescribe “something homeopathic” or whatever… And in very rare cases especially honest doctors may even admit that they do not have any idea what to do.
Because the cases where the doctor is stumped are not uniformly the cases where the computer is stumped. The computer might be stumped because a programmer made a typo three weeks ago entering the list of symptoms for diphtheria, because a nurse recorded the patient’s hiccups as coughs, because the patient is a professional athlete whose resting pulse should be three standard deviations slower than the mean … a doctor won’t be perfectly reliable either, but like a professional scout who can say, “His college batting average is .400 because there aren’t many good curveball pitchers in the league this year”, a doctor can detect low-prior confounding factors a lot faster than a computer can.
Well, let’s imagine a system which actually is—and that might be a stretch—intelligently designed.
This means it doesn’t say “I diagnose this patient with X”. It says “Here is a list of conditions along with their probabilities”. It also doesn’t say “No diagnosis found”—it says “Here’s a list of conditions along with their probabilities, it’s just that the top 20 conditions all have probabilities between 2% and 6%”.
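A minimal sketch of that ranked-probability output, using a toy naive-Bayes model. Every condition name, prior, and likelihood below is invented for illustration; a real system would estimate these from clinical data.

```python
# Hypothetical priors and symptom likelihoods -- invented for illustration.
priors = {"flu": 0.05, "common cold": 0.30, "strep throat": 0.02}
likelihood = {  # P(symptom present | condition)
    "flu":          {"fever": 0.9, "cough": 0.8, "sore throat": 0.5},
    "common cold":  {"fever": 0.1, "cough": 0.6, "sore throat": 0.4},
    "strep throat": {"fever": 0.6, "cough": 0.2, "sore throat": 0.95},
}

def ranked_diagnoses(symptoms):
    """Return (condition, probability) pairs, highest probability first."""
    scores = {}
    for cond, prior in priors.items():
        p = prior
        for s, present in symptoms.items():
            ps = likelihood[cond][s]
            p *= ps if present else (1 - ps)
        scores[cond] = p
    total = sum(scores.values())
    return sorted(((c, p / total) for c, p in scores.items()),
                  key=lambda cp: -cp[1])

for cond, p in ranked_diagnoses({"fever": True, "cough": True,
                                 "sore throat": False}):
    print(f"{cond}: {p:.1%}")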
It also says things like “The best way to make the diagnosis more specific would be to run test A, then test B, and if it came back in this particular range, then test C”.
A doctor might ask it “What about disease Y?” and the expert system will answer “Its probability is such-and-such; it’s not zero because of symptoms Q and P, but it’s not high because test A came back negative and test B showed results in this range. If you want to get more certain with respect to disease Y, use test C.”
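Ranking candidate tests like this amounts to picking the test with the greatest expected information gain, i.e. the lowest expected posterior entropy. A sketch with invented posteriors and test characteristics (none of these numbers are real clinical figures):

```python
import math

# Hypothetical current posterior over conditions, and P(test positive | condition).
# All numbers are invented for illustration.
posterior = {"flu": 0.55, "common cold": 0.40, "strep throat": 0.05}
tests = {
    "rapid strep":   {"flu": 0.02, "common cold": 0.02, "strep throat": 0.95},
    "influenza PCR": {"flu": 0.90, "common cold": 0.05, "strep throat": 0.05},
}

def entropy(dist):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def expected_entropy_after(test):
    """Expected posterior entropy after observing the test result."""
    total = 0.0
    for outcome in (True, False):
        joint = {c: posterior[c] * (tests[test][c] if outcome
                                    else 1 - tests[test][c])
                 for c in posterior}
        p_outcome = sum(joint.values())
        if p_outcome > 0:
            total += p_outcome * entropy(
                {c: v / p_outcome for c, v in joint.items()})
    return total

# The best next test leaves the least expected uncertainty.
best = min(tests, key=expected_entropy_after)
print(best)
```

With these made-up numbers the flu test wins, because the rapid strep test mostly confirms what the posterior already makes unlikely; the same machinery extends to chaining tests A, then B, then C.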
And there would probably be a button which says “Explain”; pressing it will show precisely what leads to the probability of disease X being what it is, and the doctor should be able to poke around it and ask things like “What happens if we change these coughs to hiccups?”
An intelligently designed expert system often does not replace the specialist—it supports her, allows her to interact with it, ask questions, refine queries, etc.
If you have a patient with multiple nonspecific symptoms who takes a dozen different medications every day, a doctor cannot properly evaluate all the probabilities and interactions in her head. But an expert system can. It works best as a teammate of a human, not as something which just tells her the answer.
Us? I’m a mechanical engineer. I haven’t even read The Checklist Manifesto. I am manifestly unqualified either to design a user interface or to design a system for automated diagnosis of disease—and, as decades of professional failure have shown, neither of these is a task to be lightly ventured upon by dilettantes. The possible errors are simply too numerous and subtle for me to be assured of avoiding them. Case in point: prior to reading that article about Air France Flight 447, it never occurred to me that automation had allowed some pilots to completely forget how to fly a plane.
The details of automation are much less important to me than the ability of people like Swimmer963 to be a part of the decision-making process. Their position grants them a much better view of what’s going on with one particular patient than a doctor who reads a chart once a day or a computer programmer who writes software intended to read billions of charts over its operational lifespan. The system they are incorporated in should take advantage of that.