I’m writing a book about epistemology. It’s about The Problem of the Criterion, why it’s important, and what it has to tell us about how we approach knowing the truth.
I’ve also written a lot about AI safety. Some of the more interesting stuff can be found at the site of my currently-dormant AI safety org, PAISRI.
What would it mean for this advice not to generalize? Like, what cases are you thinking of where what someone needs to do to be more present isn’t some version of resolving automatic predictions of bad outcomes?
I ask because this feels like a place where disagreeing with the broad form of the claim suggests you disagree with the model of what it means to be present, rather than with the operationalization of the theory, which is the part that might not generalize.