I think everyone agrees on the directions “more subjective” and “more objective”, but they use the words “subjective”/”objective” to mean “more subjective/objective than me”.
A very subjective position would be to believe that there are no “right” prior probabilities, and that it’s okay to just pick any prior depending on personal choice. (i.e. Agents with the same knowledge can assign different probabilities)
A very objective position would be to believe that there are some probabilities that must be the same even for agents with different knowledge. For example they might say that you must assign probability 1⁄2 to a fair coin coming up heads, no matter what your state of knowledge is. (i.e. Agents with different knowledge must (sometimes) assign the same probabilities)
Jaynes and Yudkowsky are somewhere in between these two positions (i.e. agents with the same knowledge must assign the same probabilities, but the probability of any event can vary depending on your knowledge of it), so they get called “objective” by the maximally subjective folk, and “subjective” by the maximally objective folk.
The definitions in the SEP above would definitely put Jaynes and Yudkowsky in the objective camp, but there’s a lot of room on the scale past the SEP definition of “objective”.
Under his definitions he’s subjective. But he would definitely say that agents with the same state of knowledge must assign the same probabilities, which rules him out of the very subjective camp.
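The middle position described above — agents with the same knowledge must assign the same probabilities, but probabilities can vary with knowledge — can be illustrated with a toy Bayesian update. This is my own sketch, not anything from Jaynes or Yudkowsky; the Beta(1, 1) uniform prior is an assumption chosen for simplicity:

```python
# Two agents share the same uniform Beta(1, 1) prior over a coin's bias.
# Agent A has seen no flips; Agent B has seen 8 heads in 10 flips.
# With a Beta(a, b) posterior, P(next flip is heads) = a / (a + b).

def prob_heads(heads_seen, flips_seen, prior_a=1, prior_b=1):
    """Posterior predictive probability of heads under a Beta prior."""
    a = prior_a + heads_seen
    b = prior_b + (flips_seen - heads_seen)
    return a / (a + b)

agent_a = prob_heads(0, 0)   # no evidence: 1/2
agent_b = prob_heads(8, 10)  # 8 heads in 10 flips: 0.75
print(agent_a, agent_b)
```

Two agents with identical evidence get identical outputs from this rule, while agents with different evidence can disagree — exactly the intermediate position, with neither free prior choice nor a knowledge-independent "true" probability.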
What about the idea of “well-calibrated judgement”, where I must be right 9 times out of 10 when I say something has 90 percent probability? Yudkowsky discussed it in “Cognitive Biases Potentially Affecting Judgment of Global Risks”, if I remember correctly.
In that case, wouldn’t I be assigning a probability distribution over my own judgements, which could be about completely different external objects?
I’m not quite sure what you mean here, but I don’t think the idea of calibration is directly related to the subjective/objective dichotomy. Both subjective and objective Bayesians could desire to be well calibrated.
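The calibration idea in the exchange above can be sketched concretely: group predictions by their stated probability, then compare each stated probability with the observed frequency of being right. This is a minimal sketch of the concept, not code from any of the articles mentioned:

```python
from collections import defaultdict

def calibration_table(predictions):
    """predictions: list of (stated_probability, outcome_was_true) pairs.

    Returns a map from each stated probability to the observed
    frequency of true outcomes among predictions made at that level.
    """
    buckets = defaultdict(list)
    for p, outcome in predictions:
        buckets[p].append(outcome)
    return {p: sum(outcomes) / len(outcomes) for p, outcomes in buckets.items()}

# A well-calibrated judge saying "90%" is right about 9 times out of 10:
preds = [(0.9, True)] * 9 + [(0.9, False)]
print(calibration_table(preds))  # {0.9: 0.9}
```

Note that nothing here depends on whether the prior was chosen subjectively or objectively — which is the point: both camps can check their track record this way.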
Also, here’s Eliezer on the subject: Probability is Subjectively Objective