I don’t find that “truth” either obvious or true.
Would you say that “The obvious truth is that mind-design space contains every combination of intelligence and rationality”? How about “The obvious truth is that mind-design space contains every combination of intelligence and effectiveness”?
One of my fundamental contentions is that empathy is a requirement for intelligence beyond a certain point because the consequences of lacking it are too severe to overcome.
Two questions:
1) The consequences for whom?
2) How much empathy do you have for, oh, say, an E. coli bacterium?
Connecting these two questions is left as an exercise for the reader. ;-)
1) the AGI
2) zero
I can’t play positive-sum games with an E. coli. The AGI is missing out on tremendous opportunities if it bypasses positive-sum games of potentially infinite length and utility for a short-term finite gain. Trading the long term away like that is heavy time-discounting, and in nature there is a very high correlation (to the point that many call it causation) between increasing intelligence and decreased time-discounting.
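To make the discounting point concrete, here is a minimal sketch; the payoffs and discount factors below are made-up numbers chosen only for illustration, not anything taken from the thread:

```python
# Toy comparison (all numbers hypothetical): present value of an open-ended
# stream of cooperation payoffs versus a one-time gain from cashing the
# partner in, under a per-period discount factor gamma in [0, 1).

def discounted_stream_value(per_period_gain: float, gamma: float) -> float:
    """Present value of an infinite stream: g + g*gamma + g*gamma**2 + ... = g / (1 - gamma)."""
    assert 0.0 <= gamma < 1.0
    return per_period_gain / (1.0 - gamma)

one_time_gain = 100.0    # hypothetical short-term payoff from defecting
per_period_gain = 1.0    # hypothetical payoff per round of continued trade

for gamma in (0.5, 0.9, 0.99, 0.999):
    cooperate = discounted_stream_value(per_period_gain, gamma)
    print(f"gamma={gamma}: cooperate forever ~ {cooperate:.0f} vs defect once = {one_time_gain:.0f}")
```

The less the agent discounts the future (gamma closer to 1), the more the open-ended stream dominates any one-time payoff; heavy discounting is what makes the short-term grab look attractive.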
Please give an example of why the AGI should co-operate with something that cannot do anything the AGI itself cannot.
Right. E. coli don’t offer us anything we can’t do for ourselves, that we can’t just whip up a batch of E. coli for on demand.
If I’m a god, what would I need a human for? If I need humans, I can just make some. Better still, I could replace them with something more efficient that doesn’t complain or rebel.
The fundamental flaw in your reasoning here is that you keep trying to construct paths through probability space that could support your hypothesis, when those paths would only count for anything if you had presented some evidence for singling out that hypothesis in the first place!
It’s like you’re a murder investigator opening up the phonebook to a random place and saying, “well, we haven’t ruled out the possibility that this guy did it”, and when people quite reasonably point out that there is no connection between that random guy and the murder, you reply, “yeah, but I just called this guy, and he has no alibi.” (That is, you’re ignoring the fact that a huge number of people in that phonebook will also have no alibi, so your “evidence” isn’t actually increasing the expected probability that that guy did it.)
And that’s why you’re getting so many downvotes: in LW terms, you are failing basic reasoning.
But that is not a shameful thing: any normal human being fails basic reasoning, by default, in exactly the same way. Our brains simply aren’t built to do reasoning: they’re built to argue, by finding the most persuasive evidence that supports our pre-existing beliefs and hypotheses, rather than trying to find out what is true.
When I first got here, I argued for some of my pet hypotheses in the exact same way, although I was righteously certain that I was not doing such a thing. It took a long time before I really “got” Bayesian reasoning sufficiently to understand what I was doing wrong, and before that, I couldn’t have said here what you were doing wrong either.
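For readers who want the phonebook point in numbers, here is a minimal Bayes sketch; the prior and likelihoods are invented purely for illustration:

```python
# A minimal sketch of the phonebook analogy above: "no alibi" barely moves the
# posterior when almost everyone, guilty or not, has no alibi.

def posterior(prior: float, p_evidence_given_h: float, p_evidence_given_not_h: float) -> float:
    """Bayes' rule for a binary hypothesis."""
    num = p_evidence_given_h * prior
    return num / (num + p_evidence_given_not_h * (1.0 - prior))

prior = 1.0 / 100_000          # a random name out of a large phonebook
p_no_alibi_if_guilty = 0.95    # hypothetical
p_no_alibi_if_innocent = 0.90  # hypothetical: most innocent people also lack an alibi

print(posterior(prior, p_no_alibi_if_guilty, p_no_alibi_if_innocent))
# ~1.06e-05: the likelihood ratio is close to 1, so the "evidence" barely
# distinguishes this suspect from anyone else in the book.
```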
If the overall cost (including time, gaining requisite knowledge, etc.) of co-operation is lower than the cost of the AGI doing it itself, the AGI should co-operate. No?
How expensive is making humans vs. their utility? Is there something markedly more efficient that won’t complain or rebel if you treat it poorly? How efficient/useful could a human be if you treated it well?
There are also useful pseudo-moral arguments of the type of pre-committing to a benevolent strategy so that others (bigger than you) will be benevolent to you.
Agreed. So your argument is that I’m not adequately presenting evidence for singling out that hypothesis. That’s a useful criticism. Thanks!
I disagree. I believe that I am failing to successfully communicate my reasoning. I understand your arguments perfectly well (and appreciate them), and I would agree with them if that were what I was trying to do. Since they are not what I’m trying to do—although they apparently are what I AM doing—I’m assuming (yes, ASS-U-ME) that I’m failing elsewhere and am currently placing the blame on my communication skills.
Are you willing to accept that premise and see if you can draw any helpful conclusions or give any helpful advice?
And, once again, thank you for already taking the time to give such a detailed thoughtful response.
Yes. The nanobots that you could build out of my dismantled raw materials. There is something humbling in realising that my complete submission and wholehearted support is worth less to a non-friendly AI than my spleen.
Oh, worth much, much less than your spleen. It might be a fun exercise to take the numbers from Seth Lloyd and figure out how many molecules (optimistically, the volume of a cell or two) your brain is worth.
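Here is one way that back-of-envelope exercise might go. Both inputs are assumptions on my part (a commonly cited ~5.4e50 operations/second per kilogram from Lloyd’s ultimate-laptop bound, and a generous ~1e16 operations/second for a human brain), so treat the output as an order-of-magnitude sketch rather than a result from the thread:

```python
# Back-of-envelope sketch: how much ideal computronium matches a brain's
# computation rate, under the assumed figures noted above.

LLOYD_OPS_PER_KG = 5.4e50     # assumed: Lloyd's ultimate-laptop bound per kg of matter
BRAIN_OPS_PER_SEC = 1e16      # assumed: generous estimate of brain computation
WATER_MOLECULE_KG = 3.0e-26   # approximate mass of one water molecule

equivalent_mass_kg = BRAIN_OPS_PER_SEC / LLOYD_OPS_PER_KG
equivalent_molecules = equivalent_mass_kg / WATER_MOLECULE_KG

print(f"{equivalent_mass_kg:.2e} kg of ideal computronium ~ {equivalent_molecules:.2e} molecules")
# Under these assumptions the answer comes out far below even "a cell or two",
# which is the (grim) point being made about raw-material value.
```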
Utility for what purpose? If we’re talking about, say, a paperclip maximizer, then the utility of human beings to it will be measured in paperclip production.
A human won’t be as efficient, for the production of paperclips, as specialized paperclip-production machines will be.
Yes, but you’re unlikely to be happy with it: read the sequences, or at least the parts of them that deal with reasoning, the use of words, and inferential distances. (For now at least, you can skip the quantum mechanics, AI, and Fun Theory parts.)
At minimum, this will help you understand LW’s standards for basic reasoning, and how much higher a bar they are than what constitutes “reasoning” pretty much anywhere else.
If you’re reasoning as well as you say, then the material will be a breeze, and you’ll be able to make your arguments in terms that the rest of us can understand. Or, if you’re not, then you’ll probably learn that along the way.
A sufficiently clever AI should understand Comparative Advantage
Comparative advantage explains how to make use of inefficient agents, so that ignoring them is a worse option. But if you can convert them into something else, you are no longer comparing the gain from trading with them against simply ignoring them; you are comparing the gain from trading with them to the gain from converting them. And if they can be cheaply converted into something much more efficient than they are, converting them is the winning move. This move is largely not available to present society, hence its absence is a reasonable assumption for now, but one that breaks when you consider an indifferent smart AGI.
The law of comparative advantage relies on some implicit assumptions that are not likely to hold between a superintelligence and humans:
The transaction costs must be small enough not to negate the gains from trade. A superintelligence may require more resources to issue a trade request to slow-thinking humans and to receive the result, possibly letting processes idle while it waits, than to just do the task itself.
Your trading partner must not have the option of building a more desirable trading partner out of your component parts. A superintelligence could get more productivity out of atoms arranged as an extension of itself than out of atoms arranged as humans. (ETA: See Nesov’s comment.)
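To see the two comparisons side by side, here is a toy sketch; all production numbers and payoffs are hypothetical, chosen only to illustrate the structure of the argument:

```python
# Comparative advantage makes trade beat *ignoring* a slower partner, but it
# says nothing about trade beating *converting* the partner into better hardware.

ai_output = {"thinking": 1000.0, "widgets": 100.0}   # units per hour the AI could produce
human_output = {"thinking": 1.0, "widgets": 10.0}    # the human is worse at everything

# The AI's opportunity cost of one widget is 10 units of thinking; the human's
# is only 0.1. So trading widgets for thinking can leave both better off than
# autarky, even though the AI is absolutely better at both tasks.
ai_widget_cost = ai_output["thinking"] / ai_output["widgets"]          # 10.0
human_widget_cost = human_output["thinking"] / human_output["widgets"] # 0.1
print(ai_widget_cost, human_widget_cost)

# But the relevant comparison for an indifferent AGI is different:
gain_from_trade_per_hour = 5.0          # hypothetical surplus from trading with the human
gain_from_conversion_per_hour = 500.0   # hypothetical output of the same atoms as extra hardware
print("trade" if gain_from_trade_per_hour > gain_from_conversion_per_hour else "convert")
```

Comparative advantage only answers the trade-versus-ignore question; it is silent on trade versus conversion, which is the point made in the comment above.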
And a sufficiently clever human should realize that clever humans can and do routinely increase the efficiencies of their industry enough to shift the comparative advantage.
It really doesn’t take that much human-level intelligence to change how things are done—all it takes is a lack of attachment to the current ways.
And that’s perhaps the biggest “natural resource” an AI has: the lack of status quo bias.
I don’t understand what you are arguing for. That people become better off doing something different doesn’t necessarily imply that they become obsolete, or even that they can’t continue doing the less-efficient thing.
I’m not sure I understand what “shift the comparative advantage” could mean, and I have no idea why this is supposed to be a response to my point.
Maybe I didn’t make my point clearly enough. My contention is that even if an AI is better at absolutely everything than a human being, it could still be better off trading with human beings for certain goods, for the simple reason that it can’t do everything, and in such a scenario both human beings and the AI would get gains from trade.
As Nesov points out, if the AI has the option of, say, converting human beings into computational substrate and using them to simulate new versions of itself, then this ceases to be relevant.
Human psychopaths are a counterexample to this claim, and they seem to be doing alright in spite of active efforts by the rest of humanity to detect and eliminate them.
‘Detect and eliminate’ or ‘detect and affiliate with the most effective ones’. One or the other. ;)
There are no efforts by the rest of humanity to detect and eliminate the sort of psychopaths who understand it’s in their own interests to cooperate with society.
The sort of psychopaths who fail to understand that, and act accordingly, typically end up doing very badly.
Why all the focus on psychopaths? It could be said that certain forms of autism are equally empathy-blinded, and yet people along that portion of the spectrum are often hugely helpful to the human race, and get along just fine with the more neurotypical.
No. There are two bad assumptions in your counterexample.
They are:
1) Human psychopaths are above the certain point of intelligence that I was talking about.
2) Human psychopaths are sufficiently long-lived for the consequences to be severe enough.
Hmmmm. #2 says that I probably didn’t make clear enough the importance of the length of interaction.
You also appear to be assuming that my argument is that the AGI fears detection of its unfriendly behavior and whatever consequences humanity can apply. Humanity CANNOT apply sufficient negative consequences to a sufficiently powerful AGI. The severe consequences are all opportunity costs, which means that the AGI is thereby sub-optimal and thereby less intelligent than it could be.
What sort of opportunity costs?
The AI can simulate humans if it needs them, for a lower energy cost than keeping the human race alive.
So, why should it keep the human race alive?
The underlying disorders of what is commonly referred to as psychopathy are indeed detectable. I also find it comforting that they are in fact disorders and that being evil in this fashion is not an attribute of an otherwise high-functioning mind. Psychopaths can be high-functioning in some areas, but a short interaction with them almost always makes it clear that something is wrong.
Homosexuality was also a disorder once. Defining something as a sickness or disorder is a matter of politics as much as anything else.
Cat burning was also a form of entertainment once. Defining something as fun or entertainment is a matter of politics as much as anything else. The same goes for friendliness. I fear that once we pinpoint it, it’ll be outdated.
What do you mean by psychopathy?
At least one sort of no-empathy person is unusually good at manipulating most people.
Everybody who is known to be a psychopath is a bad psychopath, by definition; a skilled psychopath is one who will not let people figure out that he’s a psychopath.
Of course, this means that the existence of a sufficiently skilled psychopath is, in everyday practice, unprovable and unfalsifiable (at least to the degree that we cannot tell the difference between a good actor and someone genuinely feeling empathy; I suppose you might figure out something by measuring people’s brain activity while they watch a torture scene).
Even then it is far from definitive. Experienced doctors, for example, lose much of the ability to feel certain kinds of physical empathy—their brains will look closer to a good actor’s brain than to that of a naive individual exposed to the same stimulus. That’s just practical adaptation, and good for patient and practitioner alike.
Considering the number of horror stories I’ve heard about doctors who just don’t pay attention, I’m not sure you’re right that doctors acting their empathy is good for patients.
Cite? I’m curious about where and when that study was done.
Don’t know. Never saw it first hand—I heard it from a doctor.
Thanks for your reply, but I think I’m going to push for some community norms for sourcing information from studies, ranging from “read the whole thing carefully” to “heard about it from someone”.
Only on lesswrong—we look down our noses at people who take the word of medical specialists.
That doctor almost certainly wasn’t speaking out of his specialist knowledge.
You don’t have enough information to arrive at that level of certainty. He was not, for example, a general practitioner, and I was not a client of his. I was actually working with him in medical education at the time. Come to think of it, bizarrely enough and by pure happenstance, that does put the subject into the realm of his specialist knowledge.
I don’t present that as a reason to be persuaded—I actually think not taking official status, particularly medicine-related official status, seriously is a good thing. It is just a reply to your presumption.
While I don’t expect you to take my (or his) word for anything, I also wouldn’t expect you to need to. This is exactly the finding I would expect based on general knowledge of human behavior. When people are constantly exposed to emotionally laden stimuli, they tend to become desensitized to them. There are whole schools of cognitive therapy based on this fact. If someone has taken on the role of a torturer, then their emotional response to witnessing torture will be drastically altered. Either it will undergo extinction or the individual will be crippled with PTSD. This can be expected to apply even more when they fully identify with their role due to, for example, the hazing processes involved in joining military and paramilitary organisations.
Part of what seemed iffy was the claim that it was good for both the patients and the practitioner, when it was correlated (from what you said) with experience, with no mention of quality of care.
When someone says their source is “a doctor”, what are the odds that it’s a researcher specializing in that particular area? Especially when the information is something which could as easily be a fluffy popular report as something clearly related to a specialty?
Also, I had a prior from Bernard Siegal which is also intuitively plausible—that doctors who are emotionally numb around their patients are more likely to burn out. This was likely to have been based on anecdote, but not a crazy hypothesis.
I believe you have a sign error in your last paragraph. Doctors who do not emotionally numb themselves are the ones considered at risk of burning out. I have a friend from one of my T-groups who is a physician at M. D. Anderson Cancer Center, and she is now working in intensive care, where the patients are really messed up and people die all the time. She believes genuine loving care for her patients is her duty and makes her a better physician; she was trained to be emotionally numb, and she felt it was an epiphany to rebel against this after a couple of years in her current assignment.
I have not asked her if her attitude is obvious to her supervisors. My guess is that it probably is not; I do not think she is secretive about it (although she probably does not go around evangelizing to the other doctors much) but I would think that the other doctors are too preoccupied to observe it.
In the book Consciousness and Healing, Larry Dossey, M.D., also explicitly discusses how the behavioral norm among professional physicians is to minimize emotional involvement, and he argues that this is a bad practice. (That book is not an example of good rational thinking from cover to cover.)
Desensitization to powerful negative emotional reactions is not the same thing as not caring and not building a personal relationship.
Most of our default emotional reactions when we are in close contact with others who have physical or emotional injuries aren’t exactly optimal for the purpose of providing assistance. Particularly when what the doctor needs to do will cause more pain.
I’ll add that at particularly high levels of competence it makes very little difference whether you are a psychopath who has mastered the deception of others or a hypocrite (a normal person) who has mastered the deception of yourself.
That is probably because you don’t share a definition of intelligence with most of those here.
Perhaps look through http://www.vetta.org/definitions-of-intelligence/ - and see if you can find your position.
Nope. I agree with the vast majority of the vetta definitions.
But let’s go with Marcus Hutter—“There are strong arguments that AIXI is the most intelligent unbiased agent possible in the sense that AIXI behaves optimally in any computable environment.”
Now, which is more optimal—opting to play a positive-sum game of potentially infinite length and utility with cooperating humans OR passing up the game forever for a modest short-term gain?
Assume, for the purposes of argument, that the AGI does not have an immediate pressing need for the gain (since we could then go into a recursion of how pressing is the need—and yes, if the need is pressing enough, the intelligent thing to do unless the agent’s goal is to preserve humanity is to take the short-term gain and wipe out humanity—but how would a super-intelligent AGI have gotten itself into that situation?). This should answer all of the questions about “Well, what if the AGI had a short-term preference and humans weren’t it”.
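Extending the earlier discounting sketch, the “pressing need” being set aside here can be modeled as a survival probability; again, every number below is hypothetical:

```python
# The trade-off being argued over, with the "pressing need" modeled as the
# probability of surviving to collect the long-term stream if the short-term
# gain is forgone.

def value_of_cooperating(per_period_gain, gamma, p_survive_without_gain):
    # Forgoing the short-term gain risks not being around to collect the stream.
    return p_survive_without_gain * per_period_gain / (1.0 - gamma)

def value_of_defecting(short_term_gain):
    return short_term_gain

gamma = 0.999
per_period_gain = 1.0
short_term_gain = 100.0

for p in (1.0, 0.5, 0.05):
    coop = value_of_cooperating(per_period_gain, gamma, p)
    print(f"p_survive={p}: cooperate ~ {coop:.0f} vs defect = {value_of_defecting(short_term_gain):.0f}")
# With no pressing need (p=1.0) the long game dominates under these numbers;
# as the need becomes pressing enough (p small), taking the short-term gain wins,
# which is exactly the recursion the comment above is trying to set aside.
```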
I am jumping in here from Recent Comments, so perhaps I am missing context—but how is AIXI interacting with humanity an infinite positive-sum gain for it?
It doesn’t seem like AIXI could even expect zero-sum gains from humanity: we are using up a lot of what could be computronium.
That definition doesn’t explicitly mention goals. Many of the definitions do explicitly mention goals. What the definitions usually don’t mention is what those goals are—and that permits super-villains, along the lines of General Zod.
If (as it appears) you want to argue that evolution is likely to produce super-saints—rather than super-villains—then that’s a bit of a different topic. If you wanted to argue that, then “requirement” was probably the wrong way of putting it.
Now if you had suggested that intelligence cannot evolve beyond a certain point unless accompanied by empathy … that would be another matter. I could easily be convinced that a social animal requires empathy almost as much as it requires eyesight, and that non-social animals cannot become very intelligent because they would never develop language.
But I see no reason to think that an evolved intelligence would have empathy for entities with whom it had no social interactions during its evolutionary history. And no a priori reason to expect any kind of empathy at all in an engineered intelligence.
Which brings up an interesting thought. Perhaps human-level AI already exists. But we don’t realize it because we have no empathy for AIs.
MIT’s Leonardo? Engineered super-cuteness!
The most likely location for an “unobserved” machine intelligence is probably the NSA’s basement.
However, it seems challenging to believe that a machine intelligence would need to stay hidden for very long.
Well, it does contain all those points, but some weird points are weighted much less heavily.