From LessWrong posts such as ‘Created Already In Motion’ and ‘Where Recursive Justification Hits Bottom’ I’ve come to see that humans are born with priors. (The post ‘Inductive Bias’ is also relevant: an agent must have some sort of prior to be able to learn anything at all. A pebble has no priors, but a mind does, which means it can update on evidence. What Yudkowsky calls a ‘philosophical ghost of perfect emptiness’ is other people’s image of a mind with no prior that suddenly updates to a map perfectly reflecting the territory; once you have a thorough understanding of Bayes’ Theorem, this is blatantly impossible/incoherent.)
So, we’re born with priors about the environment, and then our further experience gives us new priors for our next experiences.
Of course, this is all rather abstract, and if you’d like to have a guide to actually forming priors about real life situations that you find confusing… Well, put in an edit, maybe someone can give you that :-)
I don’t have a specific situation in mind, it’s just that priors from nowhere make me twitch—I have the same reaction to the idea that mathematical axioms are arbitrary. No, they aren’t! Mathematicians have to have some way of choosing axioms which lead to interesting mathematics.
At the moment, I’m stalking the idea that priors have a hierarchy or possibly some more complex structure, and being confused means that you suspect you have to dig deep into your structure of priors. Being surprised means that your priors have been attacked on a shallow level.
What do you mean by ‘priors from nowhere’? The idea that we’re just born with a prior, or people just saying ‘this is my prior, and therefore a fact’ when given some random situation (that was me paraphrasing my mum’s ‘this is my opinion, and therefore a fact’)?
More like “here are the priors I’m plugging into the bright and shiny Bayes equation”, without any indication of why the priors were plausible enough to be worth bothering with.
In Bayesian statistics there’s the concept of ‘weakly informative priors’: priors that are quite broad and conservative, but don’t spread almost all of their mass over values that no one thinks are plausible. For example, if I’m estimating the effect of a drug, I might choose a prior that gives low mass to biologically implausible effect sizes. If it’s a weight-gain drug, perhaps I’d pick a normal distribution that puts less than 1% of its probability mass on more than a 100% weight increase or a 50% weight decrease. Still pretty conservative, but it mostly captures people’s intuitions about which answers would be crazy.
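To make that concrete, here’s a quick sketch in Python (using scipy) of how you might check such a prior. The specific distribution and thresholds are just my own illustration of the weight-gain example above, not anything canonical:

```python
# A minimal sketch (made-up numbers) of checking that a weakly informative
# prior keeps <1% of its mass on implausible effect sizes.
from scipy.stats import norm

# Hypothetical prior on percentage weight change caused by the drug:
# centred at 0 (no effect), sd of 20 percentage points.
prior = norm(loc=0, scale=20)

# Mass on the "crazy" regions: more than a 100% gain or more than a 50% loss.
implausible_mass = prior.sf(100) + prior.cdf(-50)
print(f"Prior mass on implausible effects: {implausible_mass:.4%}")
```

With those settings only about 0.6% of the prior mass sits on the crazy effect sizes, so the prior stays broad without wasting probability on answers everyone would reject.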
Andrew Gelman has some recent discussion here.
Sometimes this is pretty useful, and sometimes not. It’s going to be most useful when you don’t have much evidence, and when your model is not well constrained along some dimensions (such as when you have multiple sources of variance). It’s also going to be useful when there are a ton of answers that seem implausible.
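Here’s a small sketch of the ‘not much evidence’ case, again with invented numbers and a simple conjugate normal–normal model: with only three noisy observations, a weakly informative prior pulls an implausibly large raw estimate back toward believable values.

```python
# A minimal sketch (assumed setup, not from the thread) of why a weakly
# informative prior matters most when you have little evidence.
import numpy as np

def posterior_mean(data, prior_mean=0.0, prior_sd=20.0, noise_sd=30.0):
    """Conjugate normal-normal update: posterior mean of the effect size."""
    prior_prec = 1.0 / prior_sd**2          # precision of the prior
    data_prec = len(data) / noise_sd**2     # precision contributed by the data
    return (prior_prec * prior_mean + data_prec * np.mean(data)) / (prior_prec + data_prec)

# Three noisy measurements that happen to look extreme.
few_obs = np.array([80.0, 95.0, 70.0])
print(np.mean(few_obs))          # raw estimate: about 82%, implausibly large
print(posterior_mean(few_obs))   # shrunk toward plausible values by the prior
```

With the numbers above, the raw mean of about 82% shrinks to roughly 47% once the prior is taken into account; with many more observations the data would dominate and the prior would barely matter.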
I don’t know how much this answers your question. That’s about the extent of my usefulness here.
Related Hanson paper: http://hanson.gmu.edu/prior.pdf