Actually, it is unsolvable within the Bayesian framework, and the only honest answer would be to admit that.
Bayesianism gives you consistency, but it doesn’t anchor you to reality in any way. An assignment of probabilities that prefers green and an assignment that prefers grue are both equally consistent.
Many people on LessWrong have been trying to handwave the problem away with Kolmogorov complexity, but if you check the actual math, you’ll see that for any finite amount of data it solves exactly nothing: any two universal computational models differ in their probability assignments by only a finite factor, but that factor depends on the pair of models and is unbounded, and for any computational model you can find another that’s arbitrarily far away from it.
No finite amount of data will cause any non-negligible convergence between models, since their differences can be unboundedly many times greater than the information content of that data.
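Concretely, this is the invariance theorem for Kolmogorov complexity. Here is a sketch of the gap it leaves, assuming the standard Solomonoff-style prior P(h) ∝ 2^{-K(h)} (the constant c_{U,V} and the hypothesis labels g and r are the usual textbook forms, not anything stated in this thread):

```latex
% Invariance theorem: for any two universal machines U and V there is a
% constant c_{U,V} such that, for every string x,
\[
  \lvert K_U(x) - K_V(x) \rvert \;\le\; c_{U,V}.
\]
% c_{U,V} is finite for each fixed pair, but unbounded over the choice of V.
% Under a Solomonoff-style prior P(h) \propto 2^{-K(h)}, switching the
% reference machine can therefore shift the prior odds between a green
% hypothesis g and a grue hypothesis r by a factor of up to
\[
  \frac{P_U(g)/P_U(r)}{P_V(g)/P_V(r)} \;\le\; 2^{\,2 c_{U,V}},
\]
% so on the order of c_{U,V} bits of evidence are needed before the two
% machines' posteriors agree, and c_{U,V} can be made larger than any
% fixed body of data.
```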
At some point you’ll have to admit that the green and grue versions are equally consistent with the data and with all a priori logical considerations, and that it’s just your personal (or societal, or whatever) preference to accept green over grue.
PS. This is all completely unrelated to the second big issue with Bayesianism: that you only get consistency over infinite model classes by breaking Gödel’s incompleteness theorems. Every theory in which you’re not allowed to say “I don’t know” without assigning it a specific probability number shares this problem. Between these two problems, I see Bayesianism as a useful tool, not as any deeper theory of reality.
“If you’re insane enough, and have unreasonable enough priors, even Bayesianism won’t save you,” is an argument against insanity and unreasonableness, not against Bayesianism.
Bayesianism only attempts to give you consistency; grue-Bayesians would see green-Bayesians as “insane and unreasonable”, just as green-Bayesians would see grue-Bayesians.
They’re both just as consistent, and nothing about their systems of beliefs is internally different.
If you want to solve the green/grue problem, Bayesianism won’t hurt your attempts, but neither will it help you in any way.
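To put the same point as arithmetic, here is a toy sketch (all the numbers and hypothesis names are invented for illustration): two perfectly coherent Bayesians with mirrored priors update on the same pre-t observations, and since both hypotheses predict those observations with probability 1, the likelihood ratio is 1 and no amount of pre-t data moves either one’s odds:

```python
from fractions import Fraction

# Two rival hypotheses about emeralds, observed only before time t:
#   H_green: emeralds look green before t and after t
#   H_grue:  emeralds look green before t, blue after t
# Before t, both predict "looks green" with probability 1, so the
# likelihood ratio of every pre-t observation is exactly 1.

def posterior_odds(prior_odds, n_observations):
    likelihood_ratio = Fraction(1, 1)  # P(obs | H_green) / P(obs | H_grue)
    return prior_odds * likelihood_ratio ** n_observations

green_bayesian = posterior_odds(Fraction(1000, 1), n_observations=10**6)
grue_bayesian  = posterior_odds(Fraction(1, 1000), n_observations=10**6)

# Both agents are perfectly coherent, and a million observations have
# left their odds exactly where their priors put them:
print(green_bayesian)  # 1000   (still favors H_green 1000:1)
print(grue_bayesian)   # 1/1000 (still favors H_grue 1000:1)
```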
Yeah, what really helped about the Bayesian analogy to the table, line, and ball thing was remembering that there was a physical basis for right, but that reft did not have a physical basis in the same way. The same goes for green versus grue.
I completely agree that if you want to understand the reason for the use of green over the use of grue in the conclusion, you need to use more than the syntactical definitions of the terms. Bayes is of course syntactical. You have to look at the semantic meanings of the terms, their tests for applicability.
How does the property “it is grue until some point, then it becomes bleen” have more of a physical basis than the property “it’s grue all along”? What you’re saying makes no sense (...to a grue-ist).
If I wrote a program to find things that were green before time t, and things that were blue after time t, I would not save any time on the programming by making it just look for grue. Grue could not be coherently defined without committing to observers, but green could be defined (even if very complicatedly) without reference to observers, and thus we can be realists about it. I am a realist about green, and not about grue. This makes sense, since grue requires observers in its definition.
If I wrote a program to find things that were grue before time t, and things that were bleen after time t, I would not save any time on the programming by making it just look for green. Green could not be coherently defined without committing to observers, but grue could be defined (even if very complicatedly) without reference to observers, and thus we can be realists about it. I am a realist about grue, and not about green. This makes sense, since green requires observers in its definition.
Meanwhile, in a parallel universe, grue-potato wrote this, and grue-taw is trying to make him see that green is just as consistent.
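To make the symmetry these two comments are trading on concrete, here is a toy sketch (the cutoff date and every predicate name below are invented for illustration, not anyone’s actual program): each speaker’s own pair of predicates is primitive in their language, and it is the other pair that must mention the cutoff time t in its definition:

```python
from datetime import datetime

# Hypothetical cutoff time t from Goodman's definitions (an arbitrary choice here).
T = datetime(2100, 1, 1)

# World 1: a speaker whose primitive detectors are green and blue.
def looks_green(obj):          # toy stand-in for a primitive color sensor
    return obj == "green-reflecting"

def looks_blue(obj):           # toy primitive
    return obj == "blue-reflecting"

def is_grue(obj, now=None):
    """In the green/blue language, grue must mention the cutoff time t."""
    now = now or datetime.now()
    return looks_green(obj) if now < T else looks_blue(obj)

# World 2: a speaker whose primitive detectors are grue and bleen.
def looks_grue(obj):           # toy stand-in for this speaker's primitive sensor
    return obj == "grue-reflecting"

def looks_bleen(obj):          # toy primitive
    return obj == "bleen-reflecting"

def is_green(obj, now=None):
    """In the grue/bleen language, it is green that must mention t."""
    now = now or datetime.now()
    return looks_grue(obj) if now < T else looks_bleen(obj)

print(is_grue("green-reflecting"))   # True before T: it looks green now
print(is_green("grue-reflecting"))   # True before T: it looks grue now
```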