A friend of mine, the most consistently rational person I know, once told me that his main criterion for whether a piece of information is useful is whether it allows him to forget multiple other pieces of information, because they become derivable from his corpus of information given this new fact.
I have a vague feeling that there should be a useful test of rationality based on this: some sort of information-modeling test whereby one is given a complex set of interrelated but random data and a randomly generated data-expression language. Scoring is based on how close to optimal one gets at writing a generator for the given data in the given language.
Unfortunately, I think this is something one could explicitly train for, and someone with knowledge of data compression theory would probably be at an advantage.
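To make the idea concrete, here is a minimal sketch of what such a scoring harness might look like. It uses Python itself in place of a randomly generated data-expression language, and zlib-compressed length as a crude proxy for true description length; the function names and the exact scoring ratio are my own illustration, not part of the proposal above.

```python
import zlib

def description_length(text: str) -> int:
    # Compressed byte length as a crude stand-in for true description length
    # (Kolmogorov complexity is uncomputable, so zlib is our cheap proxy).
    return len(zlib.compress(text.encode("utf-8")))

def run_generator(source: str) -> str:
    # Execute a candidate generator; it must define a generate() -> str function.
    # exec() is tolerable for a toy like this, never for untrusted submissions.
    namespace: dict = {}
    exec(source, namespace)
    return namespace["generate"]()

def score(source: str, target: str) -> float:
    # 0.0 if the generator fails to reproduce the data exactly; otherwise the
    # ratio of the "no insight" baseline (compressing the raw data verbatim)
    # to the generator's own description length. Scores above 1.0 mean the
    # candidate found structure that plain compression missed.
    if run_generator(source) != target:
        return 0.0
    return description_length(target) / description_length(source)

if __name__ == "__main__":
    # Toy "interrelated" data: the first thousand squares, comma-separated.
    target = ",".join(str(n * n) for n in range(1000))
    candidate = (
        "def generate():\n"
        "    return ','.join(str(n * n) for n in range(1000))\n"
    )
    print(f"score = {score(candidate, target):.2f}")
```

One design choice worth noting: scoring against the compressed size of the raw data, rather than its literal length, keeps trivially repetitive datasets from inflating the score.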
compression != rationality, methinks
Yes, “not equals”, but compression is necessary for reality-mapping, which is one of the key components of rationality as defined at the beginning of this post. There’s a great quote on this:

“We can take this huge universe, and put it inside a very tiny head—you fold it.”
The thing is, humans do the compression thing very naturally. The heuristics-and-biases researchers’ innovation was showing that we suffer from specific, systematic failures: overconfidence, confirmation bias, tribal politics, rationalizing after we’ve written the bottom line, and so on.
EDITED
That’s, um, hardly my own innovation...
I’m only now replying to this, since I’ve only just figured out what it was that I was groping for in the above.
The important thing is not compression but the integration of new knowledge so that it affects future cognition and future behaviour. The ability to change one’s methodologies and approaches based on new knowledge would seem to be key to rationality. The more subtle the influence (e.g., a new bit of math changing how you approach buying meat at the supermarket), the better the evidence for deep integration of the new knowledge.