You assign T0 a probability, say 99.999%. Never mind how or why; the probability people aren’t big on explanations like that. Just do your best. It doesn’t matter. Moving on, what we have to wonder is whether that 99.999% figure is correct.
Subjective probabilities don’t work like that. Your subjective probability just is what it is. In Bayesian terms, the closest thing to a “real” probability is whatever probability estimate is the best you can do with the available data. There is no “correct” or “incorrect” subjective probability, just predictably doing worse than possible to different degrees.
There is a correct P(T0|X), where X is your entire state of information. Probabilities aren’t, strictly speaking, subjective; they’re subjectively objective.
“Subjectively objective” just means that trying to do the best you can doesn’t leave any room for choice. You can argue that you aren’t really talking about probabilities if you knowingly do worse than you could, but that’s just a matter of semantics.
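As a toy sketch of what “no room for choice” means (the numbers here are mine, purely for illustration): once the prior and the likelihoods implied by X are fixed, Bayes’ theorem pins the posterior down.

```python
# Toy sketch, made-up numbers: with the prior and the evidence in X fixed,
# Bayes' theorem determines the posterior P(T0 | X) -- nothing left to choose.

prior_t0 = 0.5                 # credence in T0 before weighing the evidence
p_evidence_given_t0 = 0.9      # how likely the evidence in X is if T0 holds
p_evidence_given_not_t0 = 0.1  # how likely that evidence is if T0 does not hold

p_evidence = (prior_t0 * p_evidence_given_t0
              + (1 - prior_t0) * p_evidence_given_not_t0)
posterior_t0 = prior_t0 * p_evidence_given_t0 / p_evidence
print(posterior_t0)  # 0.9, fixed entirely by the inputs
```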
Are you saying that there is no regress problem? Yudkowsky disagrees. And so do other commenters here, one of whom called it a “necessary flaw”.
No, just that it doesn’t manifest itself in the form of a pyramid of probabilities of probabilities being “correct”. There certainly is the problem of priors, and of justifying reasoning this way in the first place (both of which were sketched by others in the other thread).
Yeah, you’re making a flawed argument by analogy: “There’s an infinite regress in deductive logic, so any attempt at justification using probability will also lead to an infinite regress.” The reason probabilistic justification doesn’t run into this (or at least not the exact analogue) is that “being wrong” is a definite state with known properties, one that is taken into account when you make your estimate. This is very unlike deductive logic.
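Concretely, with toy numbers of my own (nothing from the original post): you fold the chance that your reasoning is broken into the estimate itself, and it collapses into a single number instead of a tower of probabilities-of-probabilities.

```python
# Toy numbers of my own: fold "my reasoning might be flawed" directly into
# the estimate, rather than stacking meta-probabilities on top of it.

p_if_sound = 0.99999  # estimate assuming the reasoning behind it is sound
p_if_flawed = 0.5     # fallback if the reasoning is broken (coin-flip ignorance)
p_sound = 0.99        # credence that the reasoning is sound in the first place

# Law of total probability: "being wrong" is just another hypothesis with
# known consequences, so it gets absorbed into one number, not a regress.
p_adjusted = p_sound * p_if_sound + (1 - p_sound) * p_if_flawed
print(p_adjusted)  # ~0.995, still a single probability
```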
That essay seems pretty yuck to me.
An agent’s beliefs don’t normally regress to before the agent was conceived. The agent gets assigned some priors around the time it is born, usually by an evolutionary process.
I’m not clear on what you are saying.