My guess is that the sorts of interventions that would cause someone to empathise with slaves are primarily epistemic, actually. To try to give a simple example: teaching someone how to accurately use their mirror neurons when they see the fear in the eyes of a slave is teaching a skill that I expect would cause them to change their behaviour toward slaves.
There seems to be a disagreement about definitions here. Said counts a pit as epistemic if a platonic reasoner could receive information that takes him out of it; you count a pit as epistemic if a human could receive information that takes him out of it.
It does seem true that for a fully rational agent with infinite computing power, moral concerns are indeed completely separate from epistemic concerns. However, for most non-trivial reasoners who are not fully rational or do not have infinite computing power, this is not the case.
I think it’s often valuable to talk about various problems in rationality from the perspective of a perfectly rational agent with infinite computing power, but in this case it seems important to distinguish between such agents, humans, and other potential bounded agents (e.g. any AI we design will not have its moral and epistemic concerns completely separated, which is actually a pretty big problem in AI alignment).
Why do you think an AI we design won’t have such separation? If physics allowed us to run arbitrary amounts of computation, someone might have built AIXI, which has such separation.
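For concreteness, here is a rough sketch of the standard AIXI action-selection rule (roughly following Hutter’s definition; horizon and conditioning details are glossed over), which makes the claimed separation visible:

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big( r_k + \cdots + r_m \big) \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

The final sum, a Solomonoff-style mixture over all programs $q$ that reproduce the interaction history on a universal machine $U$, is the purely epistemic component, while the reward sum $r_k + \cdots + r_m$ carries all of the agent’s values; the two factor apart cleanly, which is the sense in which AIXI’s epistemic and evaluative concerns are separated.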
What does this mean? (I know what “mirror neurons” are, but I don’t have any idea what you could mean by the quoted phrase.)
I mean that empathy is a teachable skill, and that it can be thought of as an informational update, yet one that apparently changes your ‘moral’ behaviour.
Citation, please? I’m not sure I’m familiar with this.
it can be thought of as an informational update
Could you expand on this? Do you mean this in any but the most literal, technical sense? I am not sure how to view any gain of empathy (whether learned or otherwise) as an epistemic update.
I think I was a bit unclear, and that somehow you managed to not at all get what I meant. I have a draft post written on this point, I’ll publish it some time in the future, and tap out for now.
If “empathy” means “ability to understand the feelings of others” or “ability to predict what others will do”, then it seems straightforward that empathy is learnable. And learnability and teachability seem basically the same to me. Examples indicating that empathy is learnable:
As you get to know someone, things go more smoothly when you’re with them
Socializing is easier when you’ve been doing a lot of it (at least, I think so)
Managers are regularly trained for their job
Those definitions of “empathy” are, however, totally inconsistent with Ben’s mention of mirror neurons; so I doubt that this is what he had in mind.
(Your argument is actually problematic for several other reasons, but the aforesaid inconsistency makes your points inapplicable, so it’s not necessary to spend the time to demonstrate the other problems.)