If epistemic rationalists can’t speak of a “better approximation,” then how can an epistemic rationalist exist in a universe with finite computational resources?
Pure epistemic rationalists with no utility function? Well, they can’t, really. That’s part of the problem with the Oracle AI scenario.
They can speak of a “closer approximation” instead. (But that still needs a metric.)
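For concreteness, here is one standard way to cash out "needs a metric": if beliefs are probability distributions, the Kullback-Leibler divergence from the true distribution gives an ordering on approximations. This is only an illustration, not anything the discussion above commits to; the distributions below are made up for the example.

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions.

    Measures how far the belief distribution q sits from the truth p:
    zero iff q equals p, and larger the worse the approximation.
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical truth and two candidate belief distributions (illustrative only).
truth = [0.5, 0.3, 0.2]
belief_a = [0.4, 0.4, 0.2]
belief_b = [0.1, 0.1, 0.8]

# Under this metric, belief_a counts as the "closer approximation".
print(kl_divergence(truth, belief_a))  # ~0.025
print(kl_divergence(truth, belief_b))  # ~0.86
```

The point stands either way: some such choice of metric has to be made before "closer" means anything, and KL divergence is just one candidate among several (total variation distance, Brier score, and so on).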