We can already give a partial answer: because we’re working in a Bayesian frame, the outputs of the semantics box need to be assignments of values to random variables in the world model, like X=12.3.
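As a throwaway illustration of that claim (the variable names below are made up for the example, not part of the formalism), the output type being described is just a mapping from world-model variables to values:

```python
# Minimal sketch, not from the post: the semantics box's output is an
# assignment of values to named random variables in the world model,
# rather than an event picked out directly.
semantics_output = {"X": 12.3}                     # the X=12.3 example from the text
richer_output = {"X": 12.3, "Y": -1.0, "Z": 4.0}   # an assignment can cover many variables at once

# Conditioning the world model on such an output means conditioning on
# each listed variable taking its assigned value.
print(semantics_output, richer_output)
```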
Why random variables, rather than events? In terms of your sketched formalism so far, it seems like events are the obvious choice—events are the sort of thing we can condition on. Assigning a value to a random variable is just an indirect way to point out an event, and this indirect method creates a lot of redundancy, since there are many, many assignments-of-values-to-random-variables which would point out the same event.
First: if the random variables include latents which extend some distribution, then values of those latents are not necessarily representable as events over the underlying distribution. Events are less general. (Related: updates allowed under radical probabilism can be represented by assignments of values to latents; there’s a small numeric sketch of this after my second point.)
Second: I want formulations which feel like they track what’s actually going on in my head (or other people’s heads) relatively well. Insofar as a Bayesian model makes sense for the stuff going on in my head at all, it feels like there’s a whole structure of latent variables, and semantics involves assignments of values to those variables. Events don’t seem to match my mental structure as well. (See How We Picture Bayesian Agents for the picture in my head here.)
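To make the first point concrete, here is a minimal numeric sketch. The setup (one observable coin flip, a latent bias variable, the specific numbers) is my own illustrative assumption, not from the post; it shows how conditioning on a latent’s value can produce an update that no event over the original variables reproduces.

```python
# Minimal sketch, assuming a toy setup: one observable coin flip X with
# P(X=H) = P(X=T) = 0.5, whose event algebra is {{}, {H}, {T}, {H, T}}.
# We extend it with a latent bias Theta in {0.3, 0.7}, uniform prior,
# and P(X=H | Theta=t) = t.

prior_theta = {0.3: 0.5, 0.7: 0.5}

def p_x_given_theta(x, t):
    """Likelihood of the flip outcome x under bias t."""
    return t if x == "H" else 1.0 - t

# The marginal over X in the extended model is unchanged: still 0.5.
marginal_H = sum(p * p_x_given_theta("H", t) for t, p in prior_theta.items())

# Conditioning on the latent assignment Theta=0.7 gives P(X=H) = 0.7.
# No event over X alone produces that update: conditioning on {H} gives 1.0,
# on {T} gives 0.0, and on {H, T} gives 0.5. The latent-value assignment
# acts like a soft (Jeffrey-style) update on the original distribution,
# which is the sense in which plain events are "less general" here.
posterior_H = p_x_given_theta("H", 0.7)

print(marginal_H, posterior_H)  # 0.5 0.7
```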
The two perspectives are easily interchangeable, so I don’t think this is a big disagreement. But the argument about extending a distribution seems… awful? I could just as well say that I can extend my event algebra to include some new events which cannot be represented as values of random variables over the original event algebra, “so random variables are less general”.