People’s stated moral beliefs are often gradient estimates rather than object-level point estimates. This makes sense if statements of those beliefs function as pulls on the group epistemology, and not if they are guides for individual action. Saying “humans are a blight on the planet” then means something closer to “we should be more environmentalist on the margin” than to “all things considered, humans should be removed.”
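A minimal sketch of the distinction, assuming for illustration that a group’s position can be caricatured as a single scalar; every name here (group_position, STEP_SIZE, the two functions) is hypothetical, not anything from the original:

```python
# Toy model: the same statement read as a gradient pull on a shared
# parameter vs. as a point estimate of where that parameter should be.
# All names are illustrative assumptions.

group_position = 0.3  # how environmentalist the group currently is, in [0, 1]
STEP_SIZE = 0.1       # how hard one statement pulls on the group epistemology

def apply_as_gradient(position: float, direction: float) -> float:
    """Read the statement as a marginal pull in some direction."""
    return min(1.0, max(0.0, position + STEP_SIZE * direction))

def apply_as_point_estimate(position: float, target: float) -> float:
    """Read the same statement literally, as an all-things-considered target."""
    return target

# "Humans are a blight on the planet", under each reading:
print(apply_as_gradient(group_position, direction=1.0))     # 0.4: a bit more environmentalist
print(apply_as_point_estimate(group_position, target=1.0))  # 1.0: the literal, extreme reading
```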
You can probably imagine how this can be disorienting, and there’s a meta issue: the point-estimate view can see what it’s doing in a way that the gradient view might not be able to.
(meta-meta note: I think going meta often comes off as snarky even when it isn’t intended to, which might contribute to Why Our Kind Can’t Get Along)
People’s metabeliefs are downstream of which knowledge representation they are using and what that representation tells them about (a toy sketch of this picture follows the list):
Which things are variant and which are invariant
Of the variant things, how sensitive they are (huh, actually I guess you can just say the invariants have zero sensitivity; I hadn’t had that thought before)
What sorts of things count as evidence that a parameter or metadata about a parameter should change
What sorts of representations are reasonable (where the base representation is hard to question), i.e. whether metaphorical reasoning is appropriate (hard to think about) and which metaphors capture causal structure better
Normativity and confidence have their own heuristics that make them sticky on parts of the representation and help direct attention while traversing it
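A hedged sketch of the list above, assuming for illustration that a representation can be caricatured as a bag of parameters carrying per-parameter sensitivity, admissible evidence types, and stickiness metadata; every name here is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Parameter:
    value: float
    sensitivity: float = 1.0      # 0.0 encodes an invariant: zero sensitivity
    evidence_types: set = field(default_factory=set)  # what counts as evidence for change
    normative_weight: float = 0.0  # where normativity is sticky on the representation
    confidence: float = 0.5        # helps direct attention while traversing

def update(param: Parameter, evidence_type: str, push: float) -> None:
    """Only admissible evidence moves a parameter, scaled by its sensitivity."""
    if evidence_type in param.evidence_types:
        param.value += param.sensitivity * push

axiom = Parameter(value=1.0, sensitivity=0.0, evidence_types={"argument"})
policy = Parameter(value=0.3, sensitivity=0.5, evidence_types={"observation", "argument"})

update(axiom, "argument", push=1.0)   # no-op: invariants have zero sensitivity
update(policy, "argument", push=1.0)  # moves by 0.5 * 1.0
print(axiom.value, policy.value)      # 1.0 0.8
```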
What about guides for changes to individual/personal action?