What do you mean by embedded here? It seems you are asking the question “does a particular input-output behavior / computation correspond to some Naive Bayes model”, which is not what I would intuitively think of as “embedded Naive Bayes”.
Here’s the use case I have in mind. We have some neural network or biological cell or something performing computation. It’s been optimized via gradient descent/evolution, and we have some outside-view arguments saying that optimal reasoning should approximate Bayesian inference. We also know that the “true” behavior of the environment is causal—so optimal reasoning for our system should approximate Bayesian reasoning on some causal model of the environment.
The problem, then, is to go check whether the system actually is approximating Bayesian reasoning over some causal model, and what that causal model is. In other words, we want to check whether the system has a particular causal model (e.g. a Naive Bayes model) of its input data embedded within it.
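To make that check concrete, here is a minimal sketch (my own toy setup, not anything from the OP): generate data from a known Naive Bayes model, fit a stand-in “system” by gradient descent, and then compare the system’s input-output map against the exact Naive Bayes posterior. All names and parameter choices here are illustrative assumptions.

```python
# Toy sketch (illustrative assumptions throughout): does a gradient-descent-trained
# "system" approximate the posterior of a Naive Bayes model over its input data?
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth Naive Bayes generative model: binary class Y, conditionally
# independent binary features X_1..X_n given Y.
n_features, n_samples = 5, 10_000
p_y = 0.3                                                   # P(Y = 1)
p_x_given_y = rng.uniform(0.1, 0.9, size=(2, n_features))   # P(X_i = 1 | Y = y)

y = (rng.random(n_samples) < p_y).astype(int)
x = (rng.random((n_samples, n_features)) < p_x_given_y[y]).astype(float)

def naive_bayes_posterior(x):
    """Exact P(Y = 1 | X = x) under the ground-truth Naive Bayes model."""
    x = np.asarray(x)[:, None, :]                           # shape (n, 1, features)
    log_lik = np.where(x == 1, np.log(p_x_given_y), np.log(1 - p_x_given_y))
    log_joint = log_lik.sum(axis=-1) + np.log([1 - p_y, p_y])
    log_joint -= log_joint.max(axis=-1, keepdims=True)      # numerical stability
    joint = np.exp(log_joint)
    return joint[:, 1] / joint.sum(axis=-1)

# Stand-in for the optimized system: logistic regression trained by plain
# gradient descent. In the real use case this would be the network/cell itself.
w, b = np.zeros(n_features), 0.0
for _ in range(5000):
    p = 1 / (1 + np.exp(-(x @ w + b)))                      # system's P(Y = 1 | X)
    w -= 0.5 * x.T @ (p - y) / n_samples
    b -= 0.5 * np.mean(p - y)
system_output = 1 / (1 + np.exp(-(x @ w + b)))

# The check: how far is the system's input-output behavior from the posterior
# of the causal (Naive Bayes) model of its input data?
gap = np.max(np.abs(system_output - naive_bayes_posterior(x)))
print(f"max |system output - Naive Bayes posterior| = {gap:.3f}")
```

If that gap stays small (including on held-out inputs), that’s evidence the system approximately embeds this particular Naive Bayes model; the harder version of the problem is when the candidate causal model isn’t known in advance.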
What do you imagine “embedded” to mean?
I usually imagine the problems of embedded agency (at least when I’m reading LW/AF), where the central issue is that the agent is part of its environment (in contrast to the Cartesian model, where there is a clear, bright line dividing the agent from the environment). Afaict, “embedded Naive Bayes” is something that makes sense in a Cartesian model, which I wasn’t expecting.
It’s not that big a deal, but if you want to avoid that confusion, you might want to change the word “embedded”. I kind of want to say “The Intentional Stance towards Naive Bayes”, but that’s not right either.
Ok, that’s what I was figuring. My general position is that the problems of agents embedded in their environment reduce to problems of abstraction, i.e. world-models embedded in computations which do not themselves obviously resemble world-models. At some point I’ll probably write that up in more detail, although the argument remains informal for now.
The immediately important point is that, while the OP makes sense in a Cartesian model, it also makes sense without a Cartesian model. We can just have some big computation, and pick a little chunk of it at random, and say “does this part here embed a Naive Bayes model?” In other words, it’s the sort of thing you could use to detect agenty subsystems, without having a Cartesian boundary drawn in advance.
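As a toy illustration of that “pick a chunk and check” idea (my own construction, not the OP’s criterion): for binary inputs and a binary class, a map into (0, 1) matches the posterior of some Naive Bayes model exactly when its log-odds is affine in the inputs, so we can test an arbitrary sub-computation by regressing its log-odds on the inputs over the full truth table. Everything named below is hypothetical.

```python
# Toy sketch (illustrative, not the OP's criterion): test whether an arbitrary
# sub-computation with binary inputs and output in (0, 1) behaves like the
# posterior of *some* Naive Bayes model. In this binary toy setting that holds
# exactly when the chunk's log-odds is affine (additive) in the inputs.
import itertools
import numpy as np

n = 4
inputs = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)

def embeds_naive_bayes(chunk, inputs, tol=1e-8):
    """Return (verdict, residual): is chunk's log-odds affine in the inputs?"""
    out = chunk(inputs)
    logits = np.log(out / (1 - out))
    design = np.hstack([inputs, np.ones((len(inputs), 1))])  # affine fit
    coef, *_ = np.linalg.lstsq(design, logits, rcond=None)
    residual = np.max(np.abs(design @ coef - logits))
    return residual < tol, residual

# Two "chunks" we might have pulled out of some larger computation:
sigmoid = lambda z: 1 / (1 + np.exp(-z))
chunk_a = lambda x: sigmoid(x @ np.array([0.7, -1.2, 0.4, 2.0]) + 0.3)  # additive log-odds
chunk_b = lambda x: sigmoid(x.prod(axis=1) - x[:, 0] * x[:, 1])         # interaction terms

print(embeds_naive_bayes(chunk_a, inputs))   # (True, ~0): some NB model matches it
print(embeds_naive_bayes(chunk_b, inputs))   # (False, >0): no NB model does
```

Nothing here presumes a pre-drawn agent/environment boundary: chunk_a and chunk_b could be any slices of a bigger computation, and the test just asks whether that slice’s behavior matches some Naive Bayes posterior.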