True beliefs reliably control anticipation. If you build a sky-detector that bases its decisions on color discrimination, you should anticipate that the detector’s decisions will be appropriate to the extent that your theory of the sky’s color is correct.
Let’s imagine that the sky-detector’s function is to orient a water spray toward the sky, so as to maximize the area sprayed. (Because “sky” is what’s up when outdoors.)
Place the device on various surfaces. The color of the sky is the color of whatever surface makes the device most confused, leaving it spraying in random directions.
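As a rough sketch of what such a detector might look like (the stored sky color, the confusion threshold, and the sample readings are assumptions for illustration, not measurements):

```python
# Minimal sketch of a sky-detector: it points the spray wherever the camera
# reading best matches its stored theory of the sky's color. The stored color
# and the confusion threshold are illustrative assumptions.
import math

SKY_COLOR_RGB = (60, 120, 220)   # the detector's "theory" of the sky's color
CONFUSION_THRESHOLD = 80.0       # readings farther than this all look equally un-sky-like

def color_distance(a, b):
    """Euclidean distance in RGB space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def choose_spray_direction(readings):
    """readings: dict mapping direction name -> observed (R, G, B).
    Returns the direction whose color best matches the stored sky color,
    or None ("confused") when nothing matches well enough."""
    best = min(readings, key=lambda d: color_distance(readings[d], SKY_COLOR_RGB))
    if color_distance(readings[best], SKY_COLOR_RGB) > CONFUSION_THRESHOLD:
        return None  # confused: spray in a random direction
    return best

# With a roughly correct theory of the sky's color the detector picks "up";
# with a badly wrong theory it returns None and sprays at random.
print(choose_spray_direction({"up": (110, 160, 230), "down": (40, 140, 50)}))
```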
While you make an interesting case for testing my belief, I do not know how to build a “sky-detector”. So I still have no idea whether my belief is true or not.
Get a camera, or use the two you were born with.
I guess the first step is that you have to want to test your belief. :)
To raise the question (which I think I know the answer to): in this context, how do I trust a camera, or my eyes, to give me a true result?
“Green” or other color words are already in your vocabulary of trusted primitives, otherwise the question couldn’t even be spoken. “Green” is (to a good enough approximation) a particular pattern of activation of cone cells in your retina, or (an approximation of an approximation, but still good enough) a point in the color space computed by a camera’s CCD photoreceptors.
The query at hand is whether the same cone cells are firing (or whether the camera’s CCDs return the same value) when looking straight up outdoors on a cloudless day as they do when looking at whatever type of object results in the color judgement “green” (for instance, grass or a tree).
That’s the content of the belief. The original questions are answered on that basis.
Question 2 (of which question 1 is a special case) consists of evaluating the fit between the above and the observations we can collect. So, we point cameras (or eyes) at trees, then at the sky, and compare the results.
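A minimal sketch of that comparison, assuming we already have per-pixel RGB readings for each scene (the sample values below are made up for illustration):

```python
# Compare average color readings from two scenes (e.g. a tree and the sky)
# and report how far apart they are in RGB space. The sample readings are
# illustrative, not real measurements.
def average_color(pixels):
    """pixels: list of (R, G, B) tuples. Returns the channel-wise mean."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def rgb_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

tree_pixels = [(34, 139, 34), (46, 125, 50), (60, 150, 60)]       # "green"-ish readings
sky_pixels  = [(110, 160, 230), (120, 170, 235), (100, 155, 225)]  # readings looking straight up

d = rgb_distance(average_color(tree_pixels), average_color(sky_pixels))
print(f"distance between tree and sky readings: {d:.1f}")
# A large distance is evidence against "the sky is the same color as the tree",
# i.e. against "the sky is green".
```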
Question 3 entails pointing cameras at objects of different colors and picking, to describe the sky, the same color word we use for objects that return roughly the same values as the sky.
(Though question 3 is more complicated in its most general case—for instance to describe what happens when we start out with no color words whatsoever and learn them from experience. We’re still updating on the results of a procedure that’s very similar, but we do it without having yet formed the category “color word”.)
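A hedged sketch of that matching step for question 3: classify the sky’s reading against a small table of reference colors and report the nearest word. The reference values and the sky reading are assumptions, not measurements.

```python
# Pick the color word whose reference value is nearest to the sky's reading.
# The reference table and the sky reading are illustrative assumptions.
COLOR_WORDS = {
    "green": (50, 140, 50),
    "blue":  (100, 150, 230),
    "red":   (200, 50, 50),
    "white": (240, 240, 240),
}

def nearest_color_word(reading, table=COLOR_WORDS):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(table, key=lambda word: dist(table[word], reading))

print(nearest_color_word((110, 160, 230)))  # -> "blue"
```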
Thanks for writing this explanation.
I have tested my belief using the built-in “sky-detector” (my eyes) and I can tell that it’s false, for the sky is clearly blue with a tint of white. Still, there are some instances where my “sky-detector” could be faulty (e.g., eye disorders, neurological conditions), but since other people’s “sky-detectors” and machines have confirmed my belief, I guess it’s true.
But how true? Is there, say, an algorithm I can use to assign numerical values to the probability of my belief? Assuming that there is such an ‘algorithm’, how can I use it to compare my initial belief to the belief I now have (i.e., the sky is blue with a tint of white)?
It’s kind of simple: Does it seem that the sky is blue? Then accept it as a temporary truth and move on. Learn about physics, colors, the human brain, et cetera. Every Friday review your beliefs about the color of the sky, and update them based on your current knowledge.
I suppose that there are infinitely many such algorithms!
Again, imagine building some sort of robot to keep your lawn watered. You could program it with explicit hard-coded values for “blue”, or you could equip it with some subprogram to “learn” the color of the sky. So the robot makes a note of the average color in the direction its nozzle is pointed, and (let’s suppose) it receives feedback in the form of how well the lawn has been watered on each such occasion.
The numerical value of the “belief” that the sky is a certain color, then, is simply the value of the “rightness” of certain colors to spray water at (it’s a probability distribution of “sky-ness” over the color space). The robot updates this distribution each time it receives some feedback, and there are any number of ways you could program that.
The laws of probability dictate certain constraints on this algorithm, for instance that you can’t assign 0% or 100% probability to the proposition “the sky is blue”, on pain of becoming unable to ever update away from those positions through a Bayesian update, if the robot finds itself in circumstances where the sky is a different color. (Though in the situation of the linked article, the lawn-watering robot wouldn’t be much use at all.)
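Here is a toy sketch of one such update rule, assuming a small discrete set of color hypotheses and a made-up likelihood for the watering feedback; note how a prior of exactly 0 or 1 can never move, which is the constraint described above.

```python
# Toy Bayesian update over discrete hypotheses about the sky's color.
# The hypotheses, priors, and likelihoods are illustrative assumptions.
prior = {"blue": 0.6, "grey": 0.3, "green": 0.1}

# P(lawn watered well | robot sprayed at "blue"-looking regions, sky is actually X)
likelihood_of_good_watering = {"blue": 0.9, "grey": 0.5, "green": 0.2}

def bayes_update(prior, likelihood):
    """Return the posterior over hypotheses after observing the feedback."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: v / total for h, v in unnormalized.items()}

posterior = bayes_update(prior, likelihood_of_good_watering)
print(posterior)  # "blue" gains probability mass after good watering feedback

# A hypothesis assigned probability 0 (or 1) stays there no matter what is observed:
stuck = bayes_update({"blue": 0.0, "grey": 1.0, "green": 0.0},
                     likelihood_of_good_watering)
print(stuck)  # {"blue": 0.0, "grey": 1.0, "green": 0.0}
```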
There isn’t a unique, exact algorithm for such calculations, because there isn’t a unique, exact meaning for the words “sky” and for our color words, independent of what we use the concepts of sky and color for. There are as many different algorithms as there are purposes for the programs that embody them; you yourself embody an unknown algorithm resulting from your genetic and personal history.
But, it seems, there are some constraints on any such algorithm, unique in the sense that they arise from the mathematical structure of the universe.