Because your point is framed in terms of the truth of specific probabilities, which are already always wrong, it is ill-formed. T1=0, T2=1, the end. To do better you need to understand probability distributions.
If your first probability estimate is wrong, without any error bar—but simply wrong in an unknown way—then you’re screwed, right?
Edit: And what are you talking about with T2=1? It does not have a probability of 1. That sounds like your “signs flip” thing which I addressed already. I still think you are imagining a different regress than the one I was talking about.
Think of it this way: if it’s wrong in an utterly unknown way, then the wrongness has perfect symmetry; there’s nothing to distinguish being wrong one way from being wrong another. By the axiom that you shouldn’t make up information, when the information is symmetric, that part of the distribution (“part” in the sense that you convolve the different parts together to get the total distribution) should be symmetric too. And since the final probability estimate is just the average over your distribution, the symmetry makes the problem easy. Even if the problem is poorly defined or poorly understood, it at the very least gives you error bars: it puts the answer somewhere between your current estimate and the maximum entropy estimate.
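To make the symmetry point concrete, here is a minimal Monte Carlo sketch. It assumes (my choice of formalization, not anything stated in the thread) that the unknown error acts as symmetric Gaussian noise on the log-odds of the estimate; all numbers are hypothetical.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def logit(p):
    return math.log(p / (1.0 - p))

def averaged_estimate(p, noise_scale, n=100_000):
    """Average p over symmetric noise applied in log-odds space.

    Symmetric, because an error that is "wrong in an utterly unknown
    way" gives you nothing to distinguish one direction of wrongness
    from the other.
    """
    l = logit(p)
    total = 0.0
    for _ in range(n):
        total += sigmoid(l + random.gauss(0.0, noise_scale))
    return total / n

p = 0.9  # hypothetical initial probability estimate
for scale in (0.5, 2.0, 8.0):
    print(scale, round(averaged_estimate(p, scale), 3))
# As the noise grows, the average slides from 0.9 toward 0.5 (the
# maximum entropy estimate) but never crosses it.
```

Gaussian noise is just one convenient symmetric choice; any symmetric, non-degenerate noise gives the same qualitative result, with the averaged answer landing between the original estimate and 0.5.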
If you’re wrong in an unknown way, then it could just as well be 1% or 99%.
You might try to claim this averages to 50%. But theories don’t have uniform probability. There are more possible mistakes than truths. Almost all theories are mistaken. So when the probability is unknown, we have every reason to think it’s a mistake (if we’re just going to guess; we could of course use Popper’s epistemology instead which handles all this stuff), and there’s no justification for the theory. Right?
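As a toy back-of-the-envelope for the “more possible mistakes than truths” step (the counts below are invented purely for illustration):

```python
# Hypothetical counts: in some space of candidate theories, false
# theories vastly outnumber true ones.
true_theories = 1
false_theories = 999

# A blind guess weighted by these counts lands far below 50%.
p_guess = true_theories / (true_theories + false_theories)
print(p_guess)  # 0.001
```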
Your comments about error bars are subject to regresses (what is the probability that you are right about that method? About the maximum entropy estimate? And so on.)
You don’t seem to be thinking with the concept of a probability distribution, or an average of one. You say “If you’re wrong in an unknown way, then it could just as well be 1% or 99%” as if it spelled doom for any attempt to quantify probabilities, when really all it is is a symmetry property of a probability distribution.
I guess I shouldn’t be expected to give you a class in probability over the internet when you are already convinced it’s all wrong. But again, I think you should read a textbook on this stuff, or take a class.
Are you aware that Yudkowsky doesn’t dispute the regress? He has an article on it.
http://lesswrong.com/lw/s0/where_recursive_justification_hits_bottom/
If that’s what you’re using “the regress” to mean, sure, sign me up. But this has even less bearing than usual on whether uncertainty can be represented by probability, unless you are making the (unlikely and terrible) argument that nothing can be represented by anything.