This math is exactly why we say a rational agent can never assign a perfect 1 or 0 to any probability estimate. Doing so in a universe which then presents you with counterevidence means you’re not rational.
Which I suppose could be termed “infinitely confused”, but that feels like a mixing of levels. You’re not confused about a given probability, you’re confused about how probability works.
In practice, when a well-calibrated person says 100% or 0%, they’re rounding off from some unspecified-precision estimate like 99.9% or 0.000000000001.
Or alternatively, it’s a clever turn of phrase: “infinitely confused” as in confused about infinities.
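To make the arithmetic concrete, here is a minimal sketch (my illustration, not part of the original exchange) of a single Bayes update in Python. A prior of exactly 1 is immovable no matter what the likelihoods are, a rounded-off prior like 0.999 updates normally, and observing evidence you had assigned probability 0 breaks the update entirely.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """One step of Bayes' rule: P(H|E) = P(E|H)P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    if p_e == 0:
        # P(E) = 0: the agent observed something it considered impossible,
        # the "infinitely confused" situation; the update is undefined.
        raise ZeroDivisionError("observed evidence that was assigned probability 0")
    return p_e_given_h * prior / p_e

# A prior of exactly 1 never moves, however damning the evidence:
print(bayes_update(prior=1.0, p_e_given_h=0.001, p_e_given_not_h=0.999))    # -> 1.0

# A rounded-off prior of 0.999 updates normally on the same evidence:
print(bayes_update(prior=0.999, p_e_given_h=0.001, p_e_given_not_h=0.999))  # -> 0.5

# A prior of 1 combined with evidence assigned probability 0 under H:
# bayes_update(prior=1.0, p_e_given_h=0.0, p_e_given_not_h=0.5)  # raises ZeroDivisionError
```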
This math is exactly why we say a rational agent can never assign a perfect 1 or 0 to any probability estimate.
Yes, of course. I just thought I’d found an amusing situation while thinking about it.
You’re not confused about a given probability, you’re confused about how probability works.
Nice way to put it :)
I think I might have framed the question wrong. It was clear to me that it wouldn’t be rational (so maybe I shouldn’t have used the term “Bayesian agent”), but it did seem that if you plug the numbers in this way you get a mathematical “definition” of “infinite confusion”.
The point goes both ways. Following Bayes’ rule means not being able to update away from 100%, but the reverse holds as well: unless there exists, for every hypothesis, not only evidence against it but evidence that completely disproves it, there is no evidence agent B could observe that would lead them to assign anything a probability of 100% or 0% (if they didn’t start out that way).
So a Bayesian agent can’t become infinitely confused unless they obtain infinite knowledge, or have bad priors. (One may simulate a Bayesian with bad priors.)
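A quick numerical sketch of that point (again my own illustration, with hypothetical numbers): under the same update rule, the posterior lands on exactly 1 only when the observed evidence is impossible under the alternative hypothesis, i.e. P(E|¬H) = 0.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    # Bayes' rule: P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))
    return p_e_given_h * prior / (p_e_given_h * prior + p_e_given_not_h * (1 - prior))

# Strong but non-conclusive evidence never yields exactly 1:
print(posterior(0.5, 0.99, 1e-12))   # ~0.999999999999, still strictly below 1

# Evidence that is impossible under the alternative pushes the posterior to exactly 1:
print(posterior(0.5, 0.99, 0.0))     # -> 1.0
```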
Pattern, I miscommunicated my question. I didn’t mean to ask about a Bayesian agent in the sense of a rational agent, just what the mathematical result is when you plug certain numbers into the equation.
I was well aware, now as before the post, that a rational agent won’t have a 100% prior and won’t find evidence with a likelihood of 100%; that wasn’t where the question stemmed from.