The issues you raised are interesting, but they actually make this a pretty good example of my problem: how do you account for weak evidence and assign it a proper likelihood? One way I am testing this is by taking an example whose answer I think is agreed to be ‘most likely’ (that he existed, as opposed to not existing). Then I want to work backwards and see if there is a method for assessing probability that works well on small-scale questions, like the probability of minted coins, and gives me the expected answer when I add it all together.
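To give a sense of the bookkeeping I have in mind, here is a minimal sketch with made-up numbers: each weak piece of evidence gets a likelihood ratio, and the ratios multiply into the posterior odds. It assumes the pieces of evidence are independent, which is itself one of the things a real method would have to argue for.

```python
# Sketch: combining several weak pieces of evidence via likelihood ratios.
# All numbers are invented for illustration; the independence assumption
# (multiplying the factors) would itself need defending.

def posterior_probability(prior: float, bayes_factors: list[float]) -> float:
    """Turn a prior probability and a list of likelihood ratios
    P(evidence | H) / P(evidence | not H) into a posterior probability."""
    odds = prior / (1 - prior)
    for bf in bayes_factors:
        odds *= bf
    return odds / (1 + odds)

# A skeptical prior plus three individually weak pieces of evidence,
# each only 2-3x more likely if Alexander existed than if he didn't.
print(posterior_probability(0.1, [2.0, 3.0, 2.5]))  # ~0.625
```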
At this point I am still trying to work out the objective-priors issue. The method either needs to be immediately agreeable to all potential critics, or needs an open and fair way of arguing over how to formulate the answer. When I work that out I will move to the next stages, though there is no guarantee I keep using the Alexander example.
My point was that ‘probability of minted coins’ isn’t a much smaller-scale question than ‘probability of Alexander’; that is, it isn’t much simpler or easier to decide.
In our model of the world, P(coins) doesn’t serve as a simple ‘input’ to P(Alexander). Rather, we use P(Alexander) to judge the meaning of the coins we find. This is true not only on the Bayesian level, where all links are bidirectional, but also in our high-level conscious model of the world, where we can’t assign meaning to a coin with the single word Alexander on it without already believing that Alexander did all the things we think he did.
There’s very little you can say about these coins if you don’t already believe in Alexander.
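To make that concrete, here is a small sketch of how the same coin find can be strong or weak evidence depending on the rest of your model. The likelihoods and the two ‘background models’ are purely illustrative, not real numbers.

```python
# Sketch of the bidirectionality point: the evidential weight of a coin
# naming Alexander depends on what else you already believe about coins.
# All likelihoods below are invented for illustration.

def posterior(prior: float, p_coin_given_h: float, p_coin_given_not_h: float) -> float:
    """Bayes' rule: P(H | coin) from P(H), P(coin | H), P(coin | not H)."""
    num = p_coin_given_h * prior
    return num / (num + p_coin_given_not_h * (1 - prior))

prior = 0.5

# Background model A: coins naming a ruler are almost always minted
# for a real ruler, so the coin is strong evidence.
print(posterior(prior, 0.9, 0.05))  # ~0.947

# Background model B: coins are sometimes struck for legendary or
# deified figures, so the very same coin is only weak evidence.
print(posterior(prior, 0.9, 0.5))   # ~0.643
```

The coin itself is identical in both runs; only the background model changes, and with it the likelihood P(coin | not Alexander) and therefore the posterior.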