First, I think of people/situation pairs rather than people. Specific situations influence things so much that one loses a lot by trying to think of people more abstractly; there is the danger of the fundamental attribution error.
Some people/situations are wrong more often than others are. Some people/situations lie more to others than others do. Some people/situations lie more to themselves than others do.
Some are more concerned with false positives, others with false negatives.
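To make that asymmetry concrete, here is a standard decision-theoretic sketch (the costs are invented for illustration): the evidence threshold at which acting becomes the lower-expected-loss choice depends only on the ratio of the two error costs.

```python
# Hedged sketch: how differing concern for false positives vs. false
# negatives shifts the evidence threshold at which one should act.
# All numbers are illustrative assumptions.

def act_threshold(cost_fp, cost_fn):
    """Probability of 'yes' above which acting is the lower-loss choice.
    Acting when wrong costs cost_fp; failing to act when one should
    have acted costs cost_fn."""
    return cost_fp / (cost_fp + cost_fn)

# Someone who fears false positives (costs 9 vs. 1) demands p > 0.9
# before acting; someone who fears false negatives acts at p > 0.1.
print(act_threshold(9, 1))  # 0.9
print(act_threshold(1, 9))  # 0.1
```

Two people/situations can agree on every probability and still disagree on what to do, purely because their error costs differ.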
I also tend to think of people as components of decision making processes, as well as composed of analogous decision making processes. Science takes advantage of this through the peer review process, which pits credulous humans against each other in attempts to prove each other's ideas wrong, and it ultimately produces a body of knowledge each piece of which is unlikely to be false. It is still the best input for anyone who instead cares about something slightly different, such as what is most likely to be true when false positives and false negatives would be similarly dangerous.
This is the source of my respect for Scott Adams (creator of Dilbert), which I've noticed is surprisingly prevalent, if irregular, among intelligent people I respect who have no particular reason to connect with anything having to do with office work or cubicles. It's something that people either "get" or "don't get," like the orange joke. The man is an incomplete thinker, and many hundreds of millions of people are better decision makers than he is, but as a member of a decision making group few could better come up with creative, topical, unique approaches to problems. Pair him with an intelligent, moderately critical mind and one would have a problem solving group better than a pairing of two moderately intelligent and creative people.
Some people/situations produce more signal than others, others a better signal/noise ratio, some only advise when they are confident in their advice, some advise whenever they think it would have marginal gain, etc.
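A toy simulation of that last distinction (all numbers invented, and it assumes an adviser's felt confidence tracks their real accuracy): an adviser who only speaks when confident produces fewer messages but a better per-message hit rate.

```python
import random

random.seed(0)

def simulate(threshold, trials=100_000):
    """Adviser speaks only when their confidence exceeds threshold.
    Returns (number of times they advised, fraction of advice correct)."""
    spoke = correct = 0
    for _ in range(trials):
        confidence = random.random()   # adviser's felt confidence
        if confidence < threshold:
            continue                   # stays silent this round
        spoke += 1
        # assumption: felt confidence equals the real chance of being right
        if random.random() < confidence:
            correct += 1
    return spoke, correct / spoke

talkative = simulate(0.0)   # advises on everything: more signal, noisier
selective = simulate(0.8)   # advises only when confident: less, but cleaner
```

The talkative adviser contributes far more raw signal, the selective one a far better signal/noise ratio; which you want depends on how you will aggregate their outputs.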
If you have an important decision to make, ask how to make the decision, not who should make it. Set up a person/situation network, even if the only person to trust is yourself. (I have seen some research suggesting that some kinds of decisions are better made on a full bladder than an empty one, and vice versa.) There is no "you"; there is only a you/situation (e.g., bladder) pair. Nothing corresponds to a you apart from any bladder situation, empty, full, or intermediate! Likewise for decisions that differ depending on whether or not your facial muscles are in the shape of a smile, etc.
Also, for every aspect of "trust," beliefs are properly probabilistic: there are separate chances that the person has good intentions, understands how you interpreted their words and actions, knows the right answer, knows that they know the right answer, etc.
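A minimal sketch of what that decomposition looks like; the factor names come from the list above, but every number is an invented placeholder:

```python
# Hedged sketch: treating "trust" as a product of probabilistic
# components rather than a single yes/no judgment. This assumes the
# components are independent, which is itself an approximation.

trust_factors = {
    "good intentions": 0.95,
    "understood how you interpreted them": 0.8,
    "knows the right answer": 0.7,
    "knows that they know it": 0.9,
}

p_useful = 1.0
for factor, p in trust_factors.items():
    p_useful *= p

print(round(p_useful, 3))  # ≈ 0.479
```

Even when each component looks reassuring on its own, the product can fall below a coin flip, which is why collapsing all of this into one binary "do I trust them?" judgment throws away so much.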
If you have a specific question you want advice on, asking about it in the most abstract terms to avoid political associations is a great first move. Yet the abstract question is an imprecise summary and function of specific possible worlds. I think successive rephrasing from more to less abstract might work well, as one could then select from among variously abstract pieces of advice at different levels of political contamination and idiosyncratic specificity. Going in the other direction wouldn't work as well, since the political content revealed early would taint later responses.