In practice, humanity as a whole has not obviously fallen into any permanent epistemic pits, but I think this is because no single ideology has clearly dominated the world.
If humanity had fallen into an epistemic pit, how would anyone know? Maybe we’re in an epistemic pit right now. After all, one of the characteristics of an epistemic pit is that it is a conclusion that takes huge or infinite amounts of time to update away from. How does one distinguish that from a conclusion that is correct?
Well, wouldn’t it be great if we had sound metaphilosophical principles that help us distinguish epistemic pits from correct conclusions! :P
I actually think humanity is in a bunch of epistemic pits that we mostly aren’t even aware of. For example, if you share my view that Buddhist enlightenment carries significant (albeit hard-to-articulate) epistemic content, then basically all of humanity over basically all of time has been in the epistemic pit of non-enlightenment.
If we figure out the metaphilosophy of how to robustly avoid epistemic pits, and build that into an aligned AGI, then in some sense none of our current epistemic pits are that bad, since that AGI would help us climb out in relatively short order. But if we don’t figure it out, we’ll plausibly stay in our epistemic pits for unacceptably long periods of time.
We can recognize ideas from the past that look like epistemic pits, but if you were to go back in time and try to argue against those ideas, you would be dismissed as incorrect. If you brought proof that the future society held your ideas instead of your interlocutors’, that would be taken as evidence of the increasing degeneracy of man rather than as evidence that your ideas were more correct than theirs. So what value does the concept of an epistemic pit bring?
I can name one epistemic pit that humanity fell into: slavery. At one point, treating other human beings as property to be traded was considered normal, proper, and right. After a very long time, and more than a few armed conflicts, western, liberal, capitalist societies updated away from this norm. However, if I were to go back to 1600 and try to argue that slavery was immoral, I would be seen as holding an incorrect viewpoint. So, given that, how can one recognize that one is in an epistemic pit in the moment?
Slavery is not, and cannot be, an epistemic pit, because the error there is a moral one, not an epistemic one. Our values differ from those of people who viewed slavery as acceptable. That is very different from an “epistemic pit”.
My guess is that the sorts of interventions that would cause someone to empathise with slaves are primarily epistemic, actually. To give a simple example, teaching someone how to accurately use their mirror neurons when they see the fear in the eyes of a slave is a skill that I expect would cause them to change their behaviour toward slaves.
There seems to be a disagreement on definitions here. Said thinks a pit epistemic if a platonic reasoner could receive information that takes him out of it. You think a pit epistemic if a human could receive information that takes him out of it.
It does seem true that for a fully rational agent with infinite computing power, moral concerns are indeed completely separate from epistemic concerns. However, for most non-trivial reasoners who are not fully rational or do not have infinite computing power, this is not the case.
I think it’s often valuable to talk about various problems in rationality from the perspective of a perfectly rational agent with infinite computing power, but in this case it seems important to distinguish between such idealized agents, humans, and other potential bounded agents (e.g. any AI we design will not have its moral and epistemic concerns completely separated, which is actually a pretty big problem in AI alignment).
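To make that distinction concrete, here is a minimal sketch (my own illustration, not something from the discussion above, with all names hypothetical) of what the idealized separation looks like: a toy Bayesian expected-utility agent whose beliefs and utility function are held in separate components, so that evidence only ever moves the beliefs.

```python
from typing import Any, Callable, Dict, Iterable


class IdealizedAgent:
    """Toy Bayesian expected-utility maximizer.

    Beliefs (a posterior over hypotheses) and values (a utility function)
    live in separate components: evidence only ever changes the posterior,
    never the utility function.
    """

    def __init__(
        self,
        prior: Dict[Any, float],                  # P(hypothesis)
        likelihood: Callable[[Any, Any], float],  # P(observation | hypothesis)
        utility: Callable[[Any, Any], float],     # utility(action, hypothesis)
    ) -> None:
        self.posterior = dict(prior)    # epistemic state
        self.likelihood = likelihood    # epistemic model
        self.utility = utility          # values: fixed, never updated

    def update(self, observation: Any) -> None:
        """Bayesian update: an observation changes beliefs, not values."""
        unnormalized = {
            h: p * self.likelihood(observation, h)
            for h, p in self.posterior.items()
        }
        total = sum(unnormalized.values())
        self.posterior = {h: w / total for h, w in unnormalized.items()}

    def choose(self, actions: Iterable[Any]) -> Any:
        """Pick the action with the highest posterior expected utility."""
        return max(
            actions,
            key=lambda a: sum(
                p * self.utility(a, h) for h, p in self.posterior.items()
            ),
        )
```

For an agent factored this way, epistemic change (update) and value change are distinct operations by construction; the claim above is that humans, and any bounded AI we actually build, do not factor so cleanly.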
Why do you think an AI we design won’t have such separation? If physics allowed us to run arbitrary amounts of computation, someone might have built AIXI, which has such separation.
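For reference, the separation in AIXI is visible in its defining expectimax expression, which (roughly, glossing over details of the horizon and the underlying chronological Turing machine) looks like:

$$
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[\, r_k + \cdots + r_m \,\big] \sum_{q \,:\; U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
$$

Here the bracketed reward sum $[r_k + \cdots + r_m]$ carries all of the agent’s values, while the mixture $\sum_q 2^{-\ell(q)}$ over programs $q$ for the universal machine $U$ carries all of its epistemics; the two factors can be varied independently of one another.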
What does this mean? (I know what “mirror neurons” are, but I don’t have any idea what you could mean by the quoted phrase.)
I mean that empathy is a teachable skill, and that it can be thought of as an informational update, yet one that apparently changes your ‘moral’ behaviour.
Citation, please? I’m not sure I’m familiar with this.
Could you expand on the claim that empathy “can be thought of as an informational update”? Do you mean this in any but the most literal, technical sense? I am not sure how to view any gain of empathy (whether learned or otherwise) as an epistemic update.
I think I was a bit unclear, and that somehow you didn’t get what I meant at all. I have a draft post written on this point; I’ll publish it some time in the future, and tap out for now.
If “empathy” means “ability to understand the feelings of others” or “ability to predict what others will do”, then it seems straightforward that empathy is learnable. And learnability and teachability seem basically the same to me. Examples indicating that empathy is learnable:
- As you get to know someone, things go more smoothly when you’re with them
- Socializing is easier when you’ve been doing a lot of it (at least, I think so)
- Managers are regularly trained for their job
Those definitions of “empathy” are, however, totally inconsistent with Ben’s mention of mirror neurons; so I doubt that this is what he had in mind.
(Your argument is actually problematic for several other reasons, but the aforesaid inconsistency makes your points inapplicable, so it’s not necessary to spend the time to demonstrate the other problems.)
Fair enough. I was conflating factual correctness and moral correctness. I guess a better example would be something like religious beliefs (e.g. that the earth is 6000 years old, that evolution is a lie, etc.).