I don’t remember if I already wrote about this, but I was thinking about the space of hypotheses in first- and second-order logic, about where recursive reasoning bottoms out, and so on, and I came to the conclusion that if you actually found a mathematical formulation of Popper’s falsifiability criterion, it would have to be deeper than Bayes’ Theorem.
In other words, Bayes’ Theorem does not show that positive cognition exists but is merely weaker; it shows that negative cognition has indirect effects that can be mistaken for weak positive cognition.
To formulate this concretely: Bayes’ Theorem is usually depicted as a square, divided by one line along one dimension and by two lines along the other. After you’ve done the calculation, you cut off the probabilities of what didn’t happen, and then you normalize what remains back into a nice square.
However, if you treat this square as a space of hypotheses from which you cut out falsified clusters, you will see that no probability ever increases; some simply fall much further than others, so that in relative terms the less-fallen ones look larger, and after normalization the fact that everything fell is erased.
The main advantage of this view is that it has no crisis of confidence: in principle you cannot confirm anything, you can only refute it to a greater or lesser degree. Bits of information thus become refutation or contradiction scores: you do not confirm a bit’s current value, you refute the opposite value, because the opposite would imply a contradiction.
All the probabilities in this picture are less than one, so under multiplication they can only fall; you simply look at the resulting distribution over the hypotheses that fell the least.
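This elimination view can be sketched in a few lines (a toy example; the hypothesis names and likelihood values are made up for illustration): every unnormalized probability only falls or stays put, and normalization merely rescales the survivors.

```python
# Bayesian updating viewed as pure elimination: each likelihood is <= 1,
# so multiplying it in can only lower a hypothesis's unnormalized mass.
priors = {"H1": 0.5, "H2": 0.3, "H3": 0.2}
likelihoods = {"H1": 0.9, "H2": 0.1, "H3": 0.0}  # P(evidence | H); H3 is falsified outright

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
assert all(unnormalized[h] <= priors[h] for h in priors)  # nothing ever rises

total = sum(unnormalized.values())
posterior = {h: unnormalized[h] / total for h in unnormalized}
# H1 now "looks" more probable only because the others fell harder,
# not because H1 itself gained anything.
```

Here the posterior of H1 comes out as 0.45 / 0.48 = 0.9375: higher than its prior of 0.5 after normalization, even though its unnormalized mass strictly decreased.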
For example, religion has already been pierced by so many spears that for it to come out on top, every probability that is currently higher would have to be driven even lower. Quantum mechanics, by contrast, can have whatever flaws it likes and still remain our best hypothesis.
In other words, this lets you avoid self-destruction even if you are a contradictory agent: you just keep looking for a less contradictory model. The same works for comparisons between agents: no one can raise anyone’s rating, you can only find new contradictions, including in yourself, and the one in whom others have found the fewest contradictions wins.
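The one-directional scoring between agents can be sketched the same way (hypothetical agent names, just to show the mechanics): there is no operation that raises a rating, contradictions are only ever added, and the winner is whoever has accumulated the fewest.

```python
from collections import Counter

# Contradiction scores only ever go up; nothing can lower them.
contradictions = Counter()

def find_contradiction(agent: str) -> None:
    """Record one newly discovered contradiction against an agent."""
    contradictions[agent] += 1

for agent in ("alice", "bob", "alice", "carol", "alice", "bob"):
    find_contradiction(agent)

# The "winner" is the least-refuted agent, not the most-confirmed one.
winner = min(("alice", "bob", "carol"), key=lambda a: contradictions[a])
```

With these counts, carol wins with one recorded contradiction against bob’s two and alice’s three.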
In a broader sense, I believe contradiction is a more fundamental category than truth and falsehood. A falsehood is something that is contradictory only in combination with some external system, so it can still win inside a system it does not contradict. And there are also things that are contradictory in themselves, for which no ideal external world can be found in which the number of contradictions drops to zero.
In other words, things that are contradictory in themselves are worse than things that contradict only something specific, but there is no fundamental difference: neither false nor even self-contradictory systems destroy the mechanism of your knowledge.
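The distinction between the two kinds of falsehood can be made concrete with propositional logic (a toy sketch): a contingent statement is false only relative to some worlds, while a self-contradictory one is false in every world, so no choice of external world rescues it.

```python
from itertools import product

def worlds_where_true(formula, num_vars):
    """Enumerate the truth assignments (possible worlds) satisfying a formula."""
    return [w for w in product([False, True], repeat=num_vars) if formula(*w)]

contingent = lambda a, b: a and not b          # false only relative to some worlds
self_contradictory = lambda a, b: a and not a  # false in every world

assert len(worlds_where_true(contingent, 2)) > 0        # some world fits it
assert len(worlds_where_true(self_contradictory, 2)) == 0  # no world can
```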
Besides false, true, and contradictory, there is of course a fourth category: the indefinite. These certainly score the fewest contradiction points, but they are not particularly useful, because the essence of Bayesian knowledge is to distinguish alternative worlds from one another, and if a fact is true in all possible worlds, it does not help you discern which world you are in.
However, this does not mean they are completely useless, because it is precisely such facts that all of mathematics and logic consist of: facts that are true in all possible worlds, as distinguished from contradictory facts, which are true in none. In other words, the point is again that nothing can be proved: mathematics is not a pool of statements proven true for all worlds, it is rather a pool of statements that have not yet been shown to be wrong in all worlds.
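The role of the indefinite can be shown in the same elimination picture (again a made-up toy distribution): evidence that holds in every world has likelihood 1 everywhere, so it eliminates nothing and the posterior equals the prior, while a falsifiable observation actually carves worlds away.

```python
prior = {"world_A": 0.6, "world_B": 0.4}

def update(prior, likelihood):
    """One elimination step: multiply in likelihoods, then renormalize."""
    unnorm = {w: prior[w] * likelihood[w] for w in prior}
    total = sum(unnorm.values())
    return {w: unnorm[w] / total for w in unnorm}

# A tautology holds in every world: likelihood 1 everywhere, zero discrimination.
tautology = {"world_A": 1.0, "world_B": 1.0}
assert update(prior, tautology) == prior  # no update at all

# A falsifiable observation rules out world_B entirely.
observation = {"world_A": 1.0, "world_B": 0.0}
assert update(prior, observation) == {"world_A": 1.0, "world_B": 0.0}
```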