I would like to explore some examples outside of random chatter about p(doom) here. Mostly, I think there’s no way to know what “all the way” even means until the updates have stopped. I also suspect you’re using “update” in a non-rigorous way that includes human-level sentiment and changes in prediction driven by re-modeling and re-weighting existing evidence, rather than the strictly numeric Bayesian sense of “update” on new evidence. It’s unclear what “conservation of expected evidence” means when “evidence” isn’t well-defined.
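For concreteness, the only precise reading of that phrase I know of is the numeric one, which presupposes a well-defined evidence event E (the notation below is my own gloss, not anything from the post):

```latex
% Conservation of expected evidence, assuming a well-defined evidence event E:
% the prior equals the expectation of the posterior over the outcomes of E.
P(H) \;=\; P(E)\,P(H \mid E) \;+\; P(\neg E)\,P(H \mid \neg E)
```

In that form, if observing E would raise your credence in H, then failing to observe E has to lower it, so you can’t expect in advance to move in a particular direction. Without a well-defined E, I don’t see what the analogous constraint is supposed to be.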
I don’t think (though it would only take a few rigorous examples to convince me) that we’re going to find a generality that applies to how humans change their social opinions over time, with and without points of reinforcement among their contacts and reading.
> I would like to explore some examples outside of random chatter about p(doom) here.
I think I’ve seen discussion of this in the case of prediction markets on whether figures like Vladimir Putin would be assassinated by a given date. In that market, there was a constant downward trend.
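A toy simulation (my own sketch, with made-up numbers, not data from any actual market) shows how a steady decline and conservation of expected evidence can coexist: conditional on no assassination so far, the fair price drifts down every day, but averaged over all possible histories, including those where the event happens and the price jumps to 1, the expected price stays flat.

```python
import random

# Toy "event by deadline" market with a constant per-day hazard rate.
# All numbers (hazard rate, horizon, path count) are made up for illustration.
random.seed(0)

HAZARD = 0.002    # assumed per-day probability that the event occurs
DAYS = 365        # days until the market resolves
N_PATHS = 50_000  # Monte Carlo sample of possible histories
CHECK_DAYS = (0, 90, 180, 270, 364)

def fair_price(days_left: int) -> float:
    """Probability the event happens in the remaining days, given it hasn't yet."""
    return 1.0 - (1.0 - HAZARD) ** days_left

def simulate_path() -> dict:
    """Price at each checkpoint day along one sampled history."""
    prices = {}
    happened = False
    for day in range(DAYS):
        if not happened and random.random() < HAZARD:
            happened = True
        if day in CHECK_DAYS:
            prices[day] = 1.0 if happened else fair_price(DAYS - day - 1)
    return prices

paths = [simulate_path() for _ in range(N_PATHS)]

for day in CHECK_DAYS:
    avg = sum(p[day] for p in paths) / N_PATHS         # expectation over all histories
    alive = [p[day] for p in paths if p[day] < 1.0]    # histories with no event yet
    cond = sum(alive) / len(alive)
    print(f"day {day:3d}: E[price] = {avg:.3f}   price given no event yet = {cond:.3f}")
```

With these numbers the expected price hovers around 0.52 at every checkpoint (up to Monte Carlo noise), while the no-event-yet price falls from about 0.52 toward 0 as the deadline approaches, so a visible downward trend by itself doesn’t contradict the martingale property.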
> I also suspect you’re using “update” in a non-rigorous way that includes human-level sentiment and changes in prediction driven by re-modeling and re-weighting existing evidence, rather than the strictly numeric Bayesian sense of “update” on new evidence. It’s unclear what “conservation of expected evidence” means when “evidence” isn’t well-defined.
I don’t agree with this. E.g. radical probabilism does away with Bayesian updates, but it still has conservation of expected evidence.
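As I understand the radical-probabilist framing (my paraphrase, not a quote), the conserved quantity doesn’t depend on updates being Bayesian conditionalization on a well-defined evidence event; it’s the reflection-style requirement that your current credence equal your current expectation of your own future credence:

```latex
% Generalized conservation of expected evidence (martingale form):
% today's credence in H equals today's expectation of tomorrow's credence,
% whatever process, Bayesian or not, produces tomorrow's credence.
P_t(H) \;=\; \mathbb{E}_t\!\left[\, P_{t+1}(H) \,\right]
```

On that reading you don’t need a well-defined “evidence” at all; you only need the agent to be unable to foresee the direction of its own future belief change.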