There seems to be a disagreement about definitions here. Said thinks a pit is epistemic if a Platonic reasoner could receive information that takes him out of it. You think a pit is epistemic if a human could receive information that takes him out of it.
It does seem true that for a fully rational agent with infinite computing power, moral concerns are completely separate from epistemic concerns. However, for most non-trivial reasoners, who are not fully rational or who lack infinite computing power, this is not the case.
I think it’s often valuable to talk about various problems in rationality from the perspective of a perfectly rational agent with infinite computing power, but in this case it seems important to distinguish between such idealized agents, humans, and other potential bounded agents (e.g., any AI we design will not have its moral and epistemic concerns completely separated, which is actually a pretty big problem in AI alignment).
Why do you think an AI we design won’t have such separation? If physics allowed us to run arbitrary amounts of computation, someone might have built AIXI, which has such separation.
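For reference, here is one standard statement of AIXI's action rule (following Hutter's formulation, as I understand it), which makes that separation explicit:

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \left[ r_k + \cdots + r_m \right] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

Here $U$ is a universal Turing machine, $\ell(q)$ is the length of environment program $q$, and $m$ is the horizon. The epistemic component is the Solomonoff mixture $\sum_q 2^{-\ell(q)}$ over programs consistent with the interaction history, while the goal component is just the reward sum $r_k + \cdots + r_m$. The two factor cleanly: you could swap in a different reward stream without touching the world-model at all.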