Are they not ideal Bayesians? Also, do they update based on other people’s priors? It could be interesting to make them all ultra-finitists.
Mimesis Land is confusing from the outside. I’m not sure how they could avoid stumbling upon “correct” forms of manipulating beliefs if they persist for long enough and there are large enough stochastic shocks to the community’s beliefs. If they also copied successful people in the past, I feel like this would be even more likely. Unless they happen to be the equivalent of Chinese rooms: just an archive of if-else clauses.
Anyway, thank you for introducing this delightful style of thought experiments.
Happy to hear you enjoyed it!
On First Principles Land:
Even if they are ideal Bayesians, they could still end up mistaken given unfortunate evidence. I’m not sure how we should handle updating on the information of others; that complicates things significantly. I was mostly imagining each person independently acting as a semi-ideal Bayesian agent, working everything out from fundamental truths and evidence on their own. I would be interested in variations with various kinds of knowledge sharing.
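To make the “unfortunate evidence” point concrete, here is a minimal sketch (my own illustration, not from the original post; the fair/biased-coin setup is an assumption): a perfectly Bayesian agent that ends up confidently wrong after an unlucky but entirely possible run of observations.

```python
# Hypothetical sketch: one ideal Bayesian agent in First Principles Land.
# The coin is truly fair (p = 0.5), but the agent is weighing H_fair (p = 0.5)
# against H_biased (p = 0.8). With an unlucky evidence stream, perfect Bayesian
# updating still lands on the wrong hypothesis.

def update(prior_biased, flip):
    """One Bayes update on a single coin flip ('H' or 'T')."""
    like_biased = 0.8 if flip == "H" else 0.2
    like_fair = 0.5
    numerator = like_biased * prior_biased
    return numerator / (numerator + like_fair * (1 - prior_biased))

belief_biased = 0.5                # agnostic prior
flips = ["H"] * 8 + ["T"] * 2      # unlucky, but possible, from a fair coin

for flip in flips:
    belief_biased = update(belief_biased, flip)

print(f"P(biased | evidence) = {belief_biased:.2f}")  # ~0.87, despite a fair coin
```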
On Mimesis Land:
Yeah, this land is confusing to me too. I guess belief manipulation would essentially act as an evolutionary process: some clusters would stumble on techniques for belief selection, and the successful clusters would pass those belief-selection techniques on. That said, this would take a while, and most people could be oblivious to it.
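If it helps, here is a toy sketch of that evolutionary reading (everything in it, including the “heuristic quality” numbers, is an assumption added for illustration): clusters blindly copy whichever cluster looks most successful, and decent belief-selection heuristics spread even though nobody reasons about why they work.

```python
import random

# Hypothetical sketch of the evolutionary story for Mimesis Land: clusters copy
# the belief-selection heuristic of whoever looks successful, so effective
# heuristics spread without anyone deriving them from first principles.

random.seed(0)
NUM_CLUSTERS = 20
GENERATIONS = 50

def success(heuristic_quality):
    """Noisy payoff: better belief-selection heuristics tend to do better."""
    return heuristic_quality + random.gauss(0, 0.3)

# Each cluster starts with a random heuristic quality in [0, 1].
clusters = [random.random() for _ in range(NUM_CLUSTERS)]

for _ in range(GENERATIONS):
    scores = [success(h) for h in clusters]
    best = clusters[scores.index(max(scores))]
    # Every cluster imitates the apparently most successful one, imperfectly.
    clusters = [min(1.0, max(0.0, best + random.gauss(0, 0.05))) for _ in clusters]

print(f"mean heuristic quality after imitation: {sum(clusters) / len(clusters):.2f}")
```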