My point was that the SIA(human) is less plausible, meaning you shouldn’t base conclusions on it, not that the resulting calculus (conditional on its truth) would be different.
Surely the extremes “update on all available information” and “never update on anything” are each more plausible than any mixture like “update on the observation that I exist, but not on the observation that I’m human”.
> My point was that the SIA(human) is less plausible, meaning you shouldn’t base conclusions on it, not that the resulting calculus (conditional on its truth) would be different.
That’s what I meant, though: you don’t calculate the probability of SIA(human) any differently than you would for any other category of observer.
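For concreteness, here is a minimal sketch of the update in question, assuming the standard SIA weighting of each hypothesis by the number of observers in one's reference class that it predicts (the notation $H$, $C$, $N_C(H)$ is just illustrative, not from the thread):

```latex
% Sketch of a generic SIA-style update, assuming the usual formulation:
% a hypothesis H is weighted by N_C(H), the number of observers of one's
% reference class C that H predicts.
\[
  P\left(H \mid \text{I exist as an observer of category } C\right)
  \;=\;
  \frac{N_C(H)\,P(H)}{\sum_{H'} N_C(H')\,P(H')}
\]
% Whether C is "observer", "human", or any narrower category changes only
% the counts N_C(H); the form of the calculation is the same.
```

Whether $C$ picks out "observer", "human", or any narrower category changes only the counts $N_C(H)$; the form of the calculation stays the same, which is the sense in which SIA(human) is not computed any differently.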
> Surely the extremes “update on all available information” and “never update on anything” are each more plausible than any mixture like “update on the observation that I exist, but not on the observation that I’m human”.