Question for AI people in the crowd: To implement Bayes’ Theorem, the prior of something must be known, and the conditional likelihood must be known. I can see how to estimate the prior of something, but for real-life cases, how could accurate estimates of P(A|X) be obtained?
Also, we talk about world-models a lot here, but what exactly IS a world-model?
To implement Bayes’ Theorem, the prior of something must be known
Not quite the way I’d put it. If you know the exact prior for the unique event you’re predicting, you already know the posterior. All you need is a non-pathologically-terrible prior, although better ones will get you to a good prediction with fewer observations.
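The "non-pathologically-terrible prior" point can be sketched with a standard Beta-Binomial update (an invented example, not from the thread): two quite different priors end up near the same posterior once enough observations come in.

```python
def posterior_mean(prior_a, prior_b, heads, tails):
    """Posterior mean of a Beta(prior_a, prior_b) prior after observing
    `heads` successes and `tails` failures (Beta-Binomial conjugacy)."""
    return (prior_a + heads) / (prior_a + prior_b + heads + tails)

# Two rather different priors about a coin's bias...
optimist = (8, 2)  # expects bias around 0.8
skeptic = (2, 8)   # expects bias around 0.2

# ...after 100 flips of a roughly fair coin, the posteriors nearly agree.
heads, tails = 52, 48
m1 = posterior_mean(*optimist, heads, tails)
m2 = posterior_mean(*skeptic, heads, tails)
print(round(m1, 3), round(m2, 3))  # 0.545 0.491
```

With only a handful of flips the two posteriors would still disagree noticeably, which is the "fewer observations" part of the claim.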
but for real-life cases, how could accurate estimates of P(A|X) be obtained?
In order of (decreasing) reliability: through science, through expert consensus, through crowd-sourcing, through personal estimates.
but what exactly IS a world-model?
Simply the set of sentences or events declared true. For a world-model to be useful, those sentences should be relevant, that is, usable for deriving probabilities of the questions at hand.
Machine learning can sorta do this, with human guidance. For instance, if we want to predict whether an animal is a dog or an elephant given its weight and height, we could take a training set (containing a bunch of dogs and a bunch of elephants) and fit two bivariate lognormal distributions to it: one for the dogs and one for the elephants (using some sort of gradient descent, say). Then P(weight=w, height=h | species=s) is just the probability density at the point (w, h) under the distribution for species s. Search term: “generative model”.
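A minimal sketch of that generative-model idea, with two simplifications: closed-form maximum-likelihood fits instead of gradient descent, and weight and height treated as independent lognormals within each class. The training data here are made-up toy numbers.

```python
import math

def fit_lognormal(xs):
    """MLE for a lognormal: mean and std of the logs."""
    logs = [math.log(x) for x in xs]
    mu = sum(logs) / len(logs)
    var = sum((l - mu) ** 2 for l in logs) / len(logs)
    return mu, math.sqrt(var)

def lognormal_pdf(x, mu, sigma):
    z = (math.log(x) - mu) / sigma
    return math.exp(-0.5 * z * z) / (x * sigma * math.sqrt(2 * math.pi))

# Toy training set: (weight in kg, height in cm) per animal.
dogs = [(20, 50), (30, 60), (25, 55), (22, 52)]
elephants = [(4000, 300), (5000, 320), (4500, 310), (4800, 315)]

params = {}
for species, data in [("dog", dogs), ("elephant", elephants)]:
    weights = [w for w, h in data]
    heights = [h for w, h in data]
    params[species] = (fit_lognormal(weights), fit_lognormal(heights))

def likelihood(w, h, species):
    """P(weight=w, height=h | species), assuming independence per class."""
    (mw, sw), (mh, sh) = params[species]
    return lognormal_pdf(w, mw, sw) * lognormal_pdf(h, mh, sh)

# Bayes: with a uniform prior over species, the posterior is
# proportional to the likelihood.
w, h = 28, 58
lik = {s: likelihood(w, h, s) for s in params}
posterior_dog = lik["dog"] / (lik["dog"] + lik["elephant"])
print(posterior_dog)  # very close to 1 for a dog-sized animal
```

This is essentially a naive-Bayes classifier with lognormal class-conditionals; the full bivariate version from the comment would additionally fit the correlation between log-weight and log-height.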
And in this context a world-model might be a joint distribution over, say, all triples (weight, height, label). Though IRL there’s too much stuff in the world for us to just hold a joint distribution over everything in our heads, we have to make do with something between a Bayes net and a big ball of adhockery.
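The "world-model as a joint distribution" framing can be made concrete with a tiny discrete example (invented numbers): once you have the joint table, any conditional you care about falls out by summing.

```python
# Joint probabilities over (label, size, weight) triples; they sum to 1.
joint = {
    ("dog", "small", "light"): 0.30,
    ("dog", "small", "heavy"): 0.05,
    ("dog", "big", "light"): 0.10,
    ("dog", "big", "heavy"): 0.05,
    ("elephant", "small", "light"): 0.01,
    ("elephant", "small", "heavy"): 0.04,
    ("elephant", "big", "light"): 0.05,
    ("elephant", "big", "heavy"): 0.40,
}

def p(**evidence):
    """Marginal probability of the given variable assignments."""
    names = ("label", "size", "weight")
    total = 0.0
    for key, prob in joint.items():
        if all(key[names.index(k)] == v for k, v in evidence.items()):
            total += prob
    return total

# P(label=elephant | size=big, weight=heavy), by definition of conditioning.
cond = p(label="elephant", size="big", weight="heavy") / p(size="big", weight="heavy")
print(round(cond, 3))  # 0.889
```

The "too much stuff in the world" point is exactly why the full table doesn't scale: with n binary variables the table has 2^n entries, which is what Bayes nets (and our informal adhockery) compress.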
but for real-life cases, how could accurate estimates of P(A|X) be obtained?

Machine learning. More speculatively, approximations to Solomonoff induction.