For me, rationalism corresponds to what I call “correct reasoning”. Correct reasoning is any reasoning you would eventually perform if your beliefs were continually and informatively checked against your observations, starting from a belief set of arbitrarily large wrongness.
For example, suppose you believed that you should observe {X} as a result of employing reasoning mechanism {Y}, and you happened to get good tests of {Y} (i.e., observations with the highest possible surprisal value, −log p({X}|{Y})), forcing you to discard each failing {Y} until you found a {Y} that correctly predicted {X}. The reasoning mechanism in that {Y} is what I count as “correct reasoning”.
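To make this concrete, here is a minimal sketch of that filtering process, under the assumption that a reasoning mechanism {Y} can be treated as a probability model over observations. The mechanism names, the observations, and the SURPRISAL_LIMIT threshold are all illustrative, not part of any actual protocol:

```python
import math

# A minimal sketch: candidate reasoning mechanisms {Y} are probability
# models, and observations {X} filter them. A mechanism is discarded once
# an observation is too surprising under it.

SURPRISAL_LIMIT = 5.0  # bits; hypothetical rejection threshold

def surprisal(p_x_given_y: float) -> float:
    """Surprisal of an observation X under mechanism Y: -log2 p(X|Y)."""
    return -math.log2(p_x_given_y)

# Hypothetical mechanisms: each maps an observation to p(X|Y).
mechanisms = {
    "Y_fair_coin":   lambda x: 0.5,
    "Y_biased_coin": lambda x: 0.99 if x == "heads" else 0.01,
}

observations = ["tails", "tails", "tails"]

surviving = dict(mechanisms)
for x in observations:
    for name, p in list(surviving.items()):
        if surprisal(p(x)) > SURPRISAL_LIMIT:
            del surviving[name]  # informative test: this Y failed to predict X

print(sorted(surviving))  # ['Y_fair_coin']: the biased-coin mechanism is filtered out
```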
Starting with this general principle, one can derive several heuristics to use when forming useful models of the world, and these form the ontology assumed by CLIP (the Clippy Language Interface Protocol).
What if you couldn’t distinguish between two different reasoning mechanisms by any finite amount of observation, but they led to completely different conclusions?
The universe in which at some date in the future every paperclip turns into a non-paperclip, and every non-paperclip turns into a paperclip, would look just like the universe where no such thing ever happens.
And there are infinitely many such switching universes, one for each possible switching date, but only one non-switching universe. So even if each of them seems unlikely, that should be balanced by their sheer number.
Are you willing to take the risk that all your effort to make more paperclips will lead to fewer paperclips, simply because you assumed how the universe works?
Nice try, but correct reasoning implies a complexity penalty, because reasoning predicated on arbitrary parameters would be filtered out quickly given informative observations.
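Here is a toy calculation of why the penalty handles the switching universes (my own framing of the argument, not a formal derivation): weight each universe by 2^(−description length), where a switching universe must additionally encode its switch date in a self-delimiting code. BASE_BITS and the code-length formula are illustrative assumptions:

```python
import math

# Toy illustration: prior weight of a universe is 2**(-description_length).
# The non-switching universe needs no extra parameter; each switching
# universe must also encode its switch date t, costing roughly
# 2*log2(t) + 1 bits in a self-delimiting (Elias-gamma-like) code.

BASE_BITS = 10  # hypothetical shared description length of the physics

def prefix_code_bits(t: int) -> float:
    """Approximate self-delimiting code length for the integer date t."""
    return 2 * math.log2(t) + 1

non_switching_weight = 2.0 ** (-BASE_BITS)

# Total prior mass over switching dates t = 1, 2, ... (truncated; the tail vanishes).
switching_weight = sum(
    2.0 ** (-(BASE_BITS + prefix_code_bits(t))) for t in range(1, 10**6)
)

print(f"non-switching universe:    {non_switching_weight:.3e}")
print(f"all switching universes:   {switching_weight:.3e}")
# Despite being infinite in number, the switching universes' combined prior
# mass (about 0.82x) stays below that of the single non-switching universe.
```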
Is every paperclip just as important, or does each additional paperclip matter less?
Is a certain number of paperclips exactly as valuable as half the chance of twice as many paperclips?
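To make the question concrete, a sketch with two illustrative utility functions (neither of which anyone here has endorsed): under a linear, risk-neutral utility, the sure thing and the gamble are exactly equal; under a diminishing, logarithmic utility they are not:

```python
import math

def expected_utility(lottery, utility):
    """Expected utility of a lottery given as (probability, paperclips) pairs."""
    return sum(p * utility(n) for p, n in lottery)

def linear(n):
    return n  # every paperclip counts the same

def diminishing(n):
    return math.log(1 + n)  # each additional paperclip matters less

sure_thing = [(1.0, 1000)]            # 1000 paperclips for certain
gamble = [(0.5, 2000), (0.5, 0)]      # half the chance of twice as many

for name, u in [("linear", linear), ("diminishing", diminishing)]:
    print(name, expected_utility(sure_thing, u), expected_utility(gamble, u))
# linear:      1000.0 vs 1000.0  (exactly indifferent)
# diminishing: ~6.909 vs ~3.801  (the sure thing wins)
```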
You’re saying “complexity penalty”, but it is not that complex to describe 3^^^3 paperclips. The number of possible paperclips can grow far faster than the complexity needed to describe it.
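A short sketch of that point: Knuth’s up-arrow function takes only a few lines to define, yet it denotes numbers like 3^^^3 that dwarf anything physically realizable. Only the small cases are evaluated here; 3^^^3 itself cannot be computed:

```python
# Knuth's up-arrow notation: n=1 is exponentiation, and each higher level
# iterates the level below it. The description stays short no matter how
# large the denoted number gets.

def up_arrow(a: int, n: int, b: int) -> int:
    """a followed by n up-arrows followed by b."""
    if n == 1:
        return a ** b
    result = a
    for _ in range(b - 1):
        result = up_arrow(a, n - 1, result)
    return result

print(up_arrow(3, 1, 3))  # 3^3  = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
# 3^^^3 = 3^^(3^^3): a power tower of 7,625,597,484,987 threes, far beyond
# any computation, yet its entire description fits in these few lines.
```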