We can recognize ideas from the past that look like epistemic pits, but if you were to go back in time and try to argue against those ideas, you would be dismissed as incorrect. If you brought proof that the future society held your ideas rather than your interlocutors', that would be taken as evidence of the increasing degeneracy of man rather than as evidence that your ideas were more correct than theirs. So what value does the concept of an epistemic pit bring?
I can name one epistemic pit that humanity fell into: slavery. At one point, treating other human beings as property to be traded was considered normal, proper, and right. After a very long time, and more than a few armed conflicts, western, liberal, capitalist societies updated away from this norm. However, if I were to go back to 1600 and try to argue that slavery was immoral, I would be seen as holding an incorrect viewpoint. So, given that, how can one recognize that one is in an epistemic pit in the moment?
Slavery is not, and cannot be, an epistemic pit, because the error there is a moral one, not an epistemic one. Our values differ from those of people who viewed slavery as acceptable. That is very different from an “epistemic pit”.
My guess is that the sorts of interventions that would cause someone to empathise with slaves are primarily epistemic, actually. To give a simple example: teaching someone how to accurately use their mirror neurons when they see the fear in the eyes of a slave imparts a skill that I expect would cause them to change their behaviour toward slaves.
There seems to be a disagreement about definitions here. Said considers a pit epistemic if a platonic reasoner could receive information that takes him out of it; you consider a pit epistemic if a human could receive information that takes him out of it.
It does seem true that for a fully rational agent with infinite computing power, moral concerns are indeed completely separate from epistemic concerns. However, for most non-trivial reasoners who are not fully rational or do not have infinite computing power, this is not the case.
I think it’s often valuable to talk about various problems in rationality from the perspective of a perfectly rational agent with infinite computing power, but in this case it seems important to distinguish between such idealized agents, humans, and other potential bounded agents (e.g. any AI we design will not have its moral and epistemic concerns completely separated, which is actually a pretty big problem in AI alignment).
Why do you think an AI we design won’t have such separation? If physics allowed us to run arbitrary amounts of computation, someone might have built AIXI, which has such a separation.
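For reference, here is a sketch of the separation being pointed at, using the standard AIXI expectimax expression (following Hutter’s formulation; the planning horizon $m$, the universal Turing machine $U$, and the program length $\ell(q)$ are the usual ingredients of that definition, not anything introduced in this thread):

$$
a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \bigl[\, r_k + \cdots + r_m \,\bigr] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
$$

The bracketed reward sum plays the role of the agent’s values, while the final Solomonoff-style mixture over programs $q$ is purely epistemic; the two enter the formula as separate factors, which is the sense in which AIXI has the separation in question.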
What does this mean? (I know what “mirror neurons” are, but I don’t have any idea what you could mean by the quoted phrase.)
I mean that empathy is a teachable skill, and that learning it can be thought of as an informational update, yet one that apparently changes your ‘moral’ behaviour.
Citation, please? I’m not sure I’m familiar with this.
Could you expand on this? Do you mean this in any but the most literal, technical sense? I am not sure how to view any gain of empathy (whether learned or otherwise) as an epistemic update.
I think I was a bit unclear, and that somehow you didn’t get what I meant at all. I have a draft post written on this point; I’ll publish it some time in the future, and tap out for now.
If “empathy” means “ability to understand the feelings of others” or “ability to predict what others will do”, then it seems straightforward that empathy is learnable. And learnability and teachability seem basically the same to me. Examples indicating that empathy is learnable:
As you get to know someone, things go more smoothly when you’re with them
Socializing is easier when you’ve been doing a lot of it (at least, I think so)
Managers are regularly trained for their job
Those definitions of “empathy” are, however, totally inconsistent with Ben’s mention of mirror neurons; so I doubt that this is what he had in mind.
(Your argument is actually problematic for several other reasons, but the aforesaid inconsistency makes your points inapplicable, so it’s not necessary to spend the time to demonstrate the other problems.)
Fair enough. I was conflating factual correctness and moral correctness. I guess a better example would be something like religious beliefs (e.g. the Earth is 6000 years old, evolution is a lie, etc.).