It occurred to me that looking at these problems through first-order logic could answer many questions. For example, the question of whether to measure complexity by the number of rules or by the number of objects: the formulas of quantum mechanics do not predict some specific huge combination of particles; like all hypotheses, they constrain your expectations relative to the space of all hypotheses/objects, so complexity counted by rules and complexity counted by objects end up giving the same answer. At the same time, limiting the complexity of objects should be the solution to Pascal's mugging (the original posts give no link to a solution, if one already exists); it is the answer to where the leverage penalty comes from.

When you postulate a hypothesis, you narrow the space of objects. Initially there are far more than a googolplex of possible people, but you single out one specific googolplex as the axioms of your hypothesis, and that costs a corresponding number of bits, because in logic two objects with identical properties cannot be distinct objects (and, if I am not mistaken, quantum mechanics says exactly that), so each person in the googolplex must differ in some way, and to prove/indicate/describe this you need at least a logarithm's worth of bits. And as long as you are human, you cannot even formulate that hypothesis exactly, i.e. write down all the axioms your hypothesis assigns probability 1, let alone gather enough bits of evidence to establish that they really hold.

But also, the hypothesis that there are n people is the conjunction of the hypotheses "there are at least n-1 people" and "there is one more person," so raising its probability a billionfold is literally equivalent to believing at least that part of the hypothesis which says a billion people will be affected by the Matrix Lord. This can also be put as: every very unlikely hypothesis is the conjunction of two less complex, less unlikely hypotheses, and so on down until you reach hypotheses you have enough memory to consider. In other words, you must start with the more likely hypotheses, test them, and only then add new axioms, new bits of complexity. Or think of it as a version of the leverage penalty, only applied not to the probability of finding yourself at such a significant node, but to the choice from the space of hypotheses, where the hypothesis about a googolplex of people has a googolplex of analogues for smaller numbers.

That is, by the lights of first-order logic, our programs assign unreasonably high priors to regular hypotheses, effectively infinite ones, even though you actually have to choose between two options for each bit, so the longer the string of fixed bit values, the less probable it is. Of course, we have evidence that things behave regularly, but not all of our evidence points that way, much less an infinite amount of it, since we have not even tested all 10^80 atoms in our Hubble volume, so our posterior probabilities for regular hypotheses will not be strong enough to outweigh even a googolplex.
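A minimal sketch of the bit-counting behind this, under my own assumptions (the function name and the log-space bookkeeping are mine, not from the post): distinguishing N objects takes at least log2(N) bits per label, and a 2^(-k) complexity prior charges k bits of log-odds, so the leverage a mugger gains by promising N people is on the same scale as the descriptive penalty the hypothesis pays.

```python
import math

def min_bits_per_label(log10_n: float) -> float:
    """Lower bound on the bits needed to give one of N objects a distinct
    label, with N supplied as log10(N) so that a googolplex stays finite."""
    return log10_n * math.log2(10)  # log2(N) = log10(N) * log2(10)

# A googolplex of distinct people: N = 10**(10**100), so log10(N) = 1e100.
bits = min_bits_per_label(1e100)
print(f"bits to single out one person among a googolplex: ~{bits:.2e}")

# Under a 2**(-k) complexity prior, a hypothesis that costs k extra bits of
# description loses k bits of log-odds.  The mugger's offer scales the stakes
# by N, worth log2(N) bits of expected-utility leverage -- the same order as
# the penalty, so the huge number never simply wins by sheer magnitude.
leverage_bits = bits
penalty_bits = bits
print(f"leverage ~{leverage_bits:.2e} bits vs prior penalty ~{penalty_bits:.2e} bits")
```

This only illustrates the lower bound argued for above; specifying every one of the N distinct people individually would of course cost far more than log2(N) bits in total.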