What would a “qualia-first-calibration” app look like?
Or, maybe: “metadata-first calibration”
The thing with putting probabilities on things is that often, the probabilities are made up. And the final probability throws away a lot of information about where it actually came from.
I’m experimenting with primarily focusing on “what are all the little-metadata-flags associated with this prediction?”. I think some of this is about “feelings you have” and some of it is about “what do you actually know about this topic?”
The sort of app I’m imagining would help me identify whatever indicators are most useful to me. Ideally it has a bunch of users, and types of indicators that have been useful to lots of users can be promoted as things to think about when you make predictions.
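As a concrete (and entirely hypothetical) sketch of that “promote indicators that helped lots of users” idea: assuming each resolved prediction is stored as (tags, probability, outcome), you could surface tags whose predictions scored better than average under a Brier score. All names here are mine, not any real app’s API, and this is one crude scoring rule among many:

```python
from collections import defaultdict

def brier(prob, outcome):
    # Squared error of a probability against a 0/1 outcome; lower is better.
    return (prob - float(outcome)) ** 2

def promote_tags(records, min_uses=5):
    """Rank tags by how much better-than-average their predictions scored.

    records: list of (tags, probability, outcome) tuples for *resolved*
    predictions, pooled across users. A tag scoring well is correlation,
    not proof that thinking about it helps.
    """
    overall = sum(brier(p, o) for _, p, o in records) / len(records)
    per_tag = defaultdict(list)
    for tags, p, o in records:
        for tag in tags:
            per_tag[tag].append(brier(p, o))
    gaps = {tag: overall - sum(s) / len(s)
            for tag, s in per_tag.items() if len(s) >= min_uses}
    return sorted(gaps, key=gaps.get, reverse=True)
```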
Braindump of possible prompts:
– is there a “reference class” you can compare it to?
– for each probability bucket, how do you feel? (including ‘confident’/‘unconfident’, as well as things like ‘anxious’, ‘sad’, etc.)
– what overall feelings do you have looking at the question?
– what felt senses do you experience as you mull over the question? (“my back tingles”, “I feel the Color Red”)
...
My first thought here is to have various tags you can re-use, but another option is to just do a totally unstructured text-dump and somehow do factor analysis on word patterns later?
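If the unstructured route won, the later analysis could look something like this. A minimal sketch using scikit-learn; strictly this is latent semantic analysis (truncated SVD on tf-idf), a common cheap stand-in for factor analysis on word patterns, and the sample notes are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

# Hypothetical free-text dumps attached to past predictions.
notes = [
    "back tingles, anxious, mostly copying someone else's number",
    "did some research, feels like past elections, confident",
    "anxious, no reference class, total guess",
    "seen things like this before, calm, did some research",
]

vec = TfidfVectorizer()
X = vec.fit_transform(notes)

# Reduce the word-count matrix to a few latent factors.
svd = TruncatedSVD(n_components=2, random_state=0)
svd.fit(X)

# Show which words load most heavily on each factor.
terms = vec.get_feature_names_out()
for i, comp in enumerate(svd.components_):
    top = comp.argsort()[::-1][:5]
    print(f"factor {i}: " + ", ".join(terms[j] for j in top))
```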
Some metadata flags I associate with predictions (sketched as a data structure after this list):
– what kinds of evidence went into this prediction? (‘did some research’, ‘have seen things like this before’, ‘mostly trusting/copying someone else’s prediction’)
– if I’m taking other people’s predictions into account, there’s a metadata-flag for ‘what would my prediction be if I didn’t consider other people’s predictions?’
– is this a domain in which I’m well calibrated?
– is my prediction likely to change a lot, or have I already seen most of the evidence that I expect to for a while?
– how important is this?
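To make the tag-based version concrete, here’s a minimal sketch of what a single prediction record might store, with roughly one field per flag above. The field names are my invention, not a proposed schema:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class PredictionRecord:
    question: str
    probability: float                        # the headline number
    evidence_kinds: list[str] = field(default_factory=list)
        # e.g. 'did some research', 'seen things like this before',
        # 'mostly trusting/copying someone else'
    solo_probability: float | None = None     # my number ignoring everyone else's
    well_calibrated_domain: bool | None = None
    expect_major_updates: bool | None = None  # or have I seen most of the evidence?
    importance: int | None = None             # e.g. 0-5
    feelings: list[str] = field(default_factory=list)
        # e.g. 'anxious', 'confident', 'my back tingles'

# Usage:
rec = PredictionRecord(
    question="Will it rain tomorrow?",
    probability=0.7,
    evidence_kinds=["did some research"],
    feelings=["anxious"],
)
```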