I’m currently learning about hypothesis testing in my statistics class. The idea is that you perform some test and you use the results of that test to calculate:
P(data at least as extreme as your data | Null hypothesis)
This is the p-value. If the p-value is below a certain threshold then you can reject the null hypothesis.
This is correct.
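To make that definition concrete, here is a minimal sketch using a coin-flip example (the coin, the 8-heads-in-10-flips data, and the fairness null are my own illustration, not from the question):

```python
from math import comb

def p_value(heads, n, p0=0.5):
    """One-sided p-value: the probability, under the null hypothesis
    that the coin lands heads with probability p0, of seeing at least
    `heads` heads in `n` flips (i.e. data at least as extreme)."""
    return sum(comb(n, k) * p0**k * (1 - p0)**(n - k)
               for k in range(heads, n + 1))

# Observed: 8 heads in 10 flips. Null: the coin is fair.
print(p_value(8, 10))  # → 0.0546875, just above the conventional 0.05 cutoff
```

Note that this number is computed entirely under the null; no alternative hypothesis enters the calculation.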
Put another way:
P(data | hypothesis) = 1 - p-value
and if 1 - p-value is high enough then you accept the hypothesis. (My use of “data” here is hand-wavy and not quite right, but it doesn’t matter for the question.)
This is not correct. You seem to be under the impression that
P(data | null hypothesis) + P(data | complement(null hypothesis)) = 1,
but this is not true, because:

1. complement(null hypothesis) may not have a well-defined distribution (frequentists in particular would object to placing a prior here), and
2. even if complement(null hypothesis) were well defined, the sum could fall anywhere in the closed interval [0, 2].
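The second point can be seen concretely with a coin-flip sketch (the fair-coin null and the p = 0.6 alternative are my own illustration, standing in for one well-defined slice of the null's complement):

```python
from math import comb

def likelihood(heads, n, p):
    """P(exactly `heads` heads in `n` flips | coin bias p)."""
    return comb(n, heads) * p**heads * (1 - p)**(n - heads)

# Null: fair coin (p = 0.5). One specific alternative: p = 0.6.
# The two likelihoods of the same data are not complementary:
for heads in (0, 5, 10):
    total = likelihood(heads, 10, 0.5) + likelihood(heads, 10, 0.6)
    print(heads, total)  # sums are nowhere near 1
```

The two hypotheses are evaluated on separate probability spaces, so nothing forces their likelihoods for the same data to sum to 1.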
More generally, most statisticians (both frequentists and Bayesians) would object to “accepting the hypothesis” on the strength of rejecting the null, because rejecting the null means exactly what it says, and no more. You cannot conclude that an alternative hypothesis (such as the complement of the null) thereby has high likelihood or probability.