Rationalistic probability (literally just throwing ideas out there)
Science relies on philosophy, and the current philosophy of science is limited and in some cases useless. We need a new epistemology, not itself scientific in nature, to answer the big questions.
The scientific method in principle is as follows: predict, test, interpret, theorize, repeat. This has major problems. The root of the issue is that for any set of data there are, in principle, infinitely many possible explanations, and modern science has no rigorous way to distinguish between them. Occam's razor is often used as an intuitive stand-in for reasoning, but intuition is known to be flawed, at least in some circumstances. The scientific method will never recognize this flaw from within its own paradigm, because the critique is in principle unverifiable by scientific tests and therefore unknowable through scientific methods. The method still works despite this because, by taking the simplest model per Occam's razor, it can always make relatively easy predictions that are often correct; when they are not correct, the model is revised.
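The point about infinitely many explanations for one data set can be made concrete with a small sketch. The data points and the two models below are invented purely for illustration: both models reproduce every observation exactly, yet they disagree about cases never observed, and nothing in the data itself can tell them apart.

```python
# Underdetermination sketch: two distinct models fit the same data exactly.
# Data and models are made up for demonstration.
import numpy as np

# Three observations (chosen to lie on y = x^2 + 1)
x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 2.0, 5.0])

def model_a(t):
    # one explanation
    return t**2 + 1

def model_b(t):
    # a different explanation; the extra term vanishes at every observed x
    return t**2 + 1 + t * (t - 1) * (t - 2)

# Both models reproduce every observation exactly...
assert np.allclose(model_a(x), y)
assert np.allclose(model_b(x), y)

# ...yet they disagree about an unobserved case.
print(model_a(3.0), model_b(3.0))  # 10.0 vs 16.0
```

Occam's razor would pick model_a as the simpler curve, but nothing in the three data points justifies that choice; adding a fourth polynomial term that vanishes at the new point would restart the problem.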
What I am interested in is not what is predictable but what is true and real, so to answer these questions we need a new paradigm. We can start with the principle of limiting what might be to what is possible; this can be done using mathematics and logic, for example, something cannot both be and not be. But how do we place probabilities on the things that are possible? To do this we need a starting principle of probability. We can use Bayesian methods to adjust probabilities with observations: for example, if we flip a coin and get heads 10 out of 10 times, we can build a distribution over the possible probabilities of getting heads. But this relies on an assumption: that all the possibilities start out equally likely. There is no proof that this is the case. What we are looking for is the philosophical foundation of probability theory.

This comes in the form of what I am calling the theorem of equal probability: given a random statement out of the set of all true-or-false statements, that statement is 50 percent likely to be true. The argument is that the set of all truth statements consists of two subsets: the primary statements T_p, and T_i, the inverses (negations) of the primary statements. Since every statement has an inverse, T_p can equally be described as containing the inverses of the statements in T_i; in other words, it goes both ways. The average truth value of T_i equals 1 minus the average truth value of T_p, so the average over T_p and T_i together is 0.5. In other words, the average probability that a randomly given statement is true is 50 percent. From this we can have a starting distribution.
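The coin example, and the role of the starting assumption it rests on, can be sketched numerically. This is a minimal grid-based Bayesian update, not any standard named procedure; the "skeptical" prior is an arbitrary choice made up here just to show that the posterior depends on the starting distribution.

```python
# Bayesian coin sketch: 10 heads in 10 flips.
# Discretize the bias p = P(heads) onto a grid and update a prior with
# the likelihood p^heads * (1-p)^tails. The uniform prior is exactly
# the "equal starting distribution" assumption questioned in the text.
heads, flips = 10, 10
grid = [i / 100 for i in range(101)]  # candidate values of P(heads)

def posterior(prior):
    # unnormalized posterior weight: prior(p) * likelihood(data | p)
    weights = [prior(p) * p**heads * (1 - p)**(flips - heads) for p in grid]
    total = sum(weights)
    return [w / total for w in weights]

uniform = posterior(lambda p: 1.0)           # all biases equally likely a priori
skeptical = posterior(lambda p: (1 - p)**5)  # arbitrary prior favoring low bias

mean_u = sum(p * w for p, w in zip(grid, uniform))
mean_s = sum(p * w for p, w in zip(grid, skeptical))
print(f"posterior mean, uniform prior:   {mean_u:.3f}")
print(f"posterior mean, skeptical prior: {mean_s:.3f}")
```

The same ten heads push both posteriors toward high bias, but the two priors land on noticeably different answers, which is the gap a philosophical foundation for the starting distribution would have to close.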
But this seems to present a paradox: if the probability of A is fifty percent and the probability of B is also fifty percent, then the probability of "A and B" is 25 percent, which seemingly contradicts the original statement. What I argue is that the theorem applies only to irreducible statements: statements that cannot be phrased in terms of "and" without using "or", and cannot be phrased in terms of "or" without using "and". This necessarily raises the question of which objects are irreducible, since these irreducible statements correspond to irreducible objects. That I will leave to the philosophers to debate.
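Both halves of this section's argument can be checked with a toy calculation. The pairing step is mechanical: whatever fraction of the primary statements happens to be true, adding the negations forces the combined average to exactly 0.5. The conjunction arithmetic then shows why the 50 percent prior cannot also apply to compound statements. The simulated "statements" here are arbitrary placeholders.

```python
# Toy check of the equal-probability argument. Simulate primary statements
# T_p as boolean truth values (here mostly true, to show the skew does not
# matter), add their negations T_i, and average over the union.
import random

random.seed(0)
t_p = [random.random() < 0.8 for _ in range(1000)]  # skewed primaries
t_i = [not s for s in t_p]                          # their negations

avg = sum(t_p + t_i) / len(t_p + t_i)
print(avg)  # 0.5 exactly: every True in T_p pairs with a False in T_i

# The conjunction worry: two independent statements at probability 0.5
# each give a compound "A and B" at 0.25, so the 0.5 prior can hold for
# irreducible statements only, not for compounds built from them.
print(0.5 * 0.5)  # 0.25
```

Note the average is exactly 0.5 by construction, not approximately: the pairing of each statement with its negation is what does the work, independent of the 0.8 skew chosen above.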