It might seem strange to a non-Bayesian, but if you take seriously the position that you must update on observations, it seems rather straightforward to me to say that there is an infinite sigma-algebra of events that you assign probabilities to over time (if you are an idealized, as opposed to bounded, reasoner), and an infinite set of possible observations that you can update on. Everything affects everything else, once “justification” is reinterpreted as conditioning. And your priors relative to a given Bayesian update are really your posteriors from the update that came immediately before it, which were in turn produced by the update before that, and on, and on, and on. (And there is nothing theoretically wrong with updating continuously over time, either, such that when you start with a prior at time t=0, by t=1 you have already gone through an “infinite” chain of justification-updates.)
I’m not really sure what infinite chain of justification you are imagining a Bayesian position to suggest. A posterior seems justified by a prior combined with evidence. We can view this as a single justificatory step if we like, or as a sequence of updates. So long as the amount of information we are updating on is finite, this seems like a finite justification chain. I don’t think it matters if the sigma-algebra is infinite or if the space of possible observations is infinite, although as you seem to recognize, neither of these assumptions seems particularly plausible for bounded beings. The idea that everything affects everything else doesn’t actually need to make the justificatory chain infinite.
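To make the "single step vs. sequence of updates" point concrete, here is a minimal Python sketch with a made-up two-hypothesis coin example (the hypotheses, probabilities, and observation sequence are all invented for illustration). It checks that chaining the updates one observation at a time yields the same posterior as a single batch update on all the evidence at once, so the justification chain is finite either way, and can even be collapsed to one step:

```python
from fractions import Fraction

def update(prior, likelihoods):
    """One Bayes step: prior and likelihoods are dicts keyed by hypothesis."""
    unnorm = {h: prior[h] * likelihoods[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Hypothetical setup: the coin is either fair or biased toward heads.
p_heads = {"fair": Fraction(1, 2), "biased": Fraction(3, 4)}

def likelihood(obs):
    return {h: p if obs == "H" else 1 - p for h, p in p_heads.items()}

prior = {"fair": Fraction(1, 2), "biased": Fraction(1, 2)}
observations = ["H", "H", "T", "H"]

# Chain of updates: each posterior becomes the next step's prior.
chained = prior
for obs in observations:
    chained = update(chained, likelihood(obs))

# Single batch update on the joint likelihood of all observations.
joint = {h: Fraction(1) for h in prior}
for obs in observations:
    for h in joint:
        joint[h] *= likelihood(obs)[h]
batch = update(prior, joint)

# Finite evidence -> finite chain, and the chain collapses to one step.
assert chained == batch
print(chained)  # {'fair': Fraction(16, 43), 'biased': Fraction(27, 43)}
```

Using exact `Fraction` arithmetic makes the equality check literal rather than approximate; with floats the two orderings could differ by rounding noise even though they agree mathematically.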
I think you might be conflating the idea that Bayesianism in some sense points at idealized rationality with the idea that everything about a Bayesian needs to be infinite. It is possible to specify perfect Bayesian beliefs which can be calculated finitely. It is possible to have finite sigma-algebras, finite amounts of information in an update, etc.
The idea of continuous updating is interesting, but not very much a part of the most common Bayesian picture. Also, its relationship to infinite chains seems more complex than you suggest. If I am observing, say, a temperature in real time, then you can model me as having a prior which gets updated on an observation of [the function specifying how the temperature has changed over time]. This would still be a finite chain. If you insist that the proper justification chain includes all my intermediate updates rather than just one big update, then it ceases to be a “chain” at all (because ALL pairs of distinct times have a third time between them, so there are no “direct” justification links between times whatsoever—any link between two times must summarize what happened at infinitely many times between).