Maybe this is covered in another post, but I’m having trouble cramming this into my brain, and I want to make sure I get this straight:
Consider a thingspace. We can divide the thingspace into any number of partially-overlapping sets that don’t necessarily span the space. Each set is assigned a word, and the words are not unique.
Our job is to compress mental concepts in a lossy way into short messages to send between people, and we do so by referring to the words. Inferences drawn from the message have associated uncertainties that depend on the characteristics we believe members of the sets to have, word redundancy, etc.
In principle, we can draw whichever boundaries we like in thingspace (and, I suppose, they don’t need to be hard boundaries). But EY is saying that it’s wise to draw the boundaries in a way that “feels” right, which presumably means that the members have certain things in common. Then when we make inferences, the pdfs are sharply peaked (since we required that for set membership), and the calculation is simpler to do.
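To check my own understanding, here's a toy sketch of that last point (my own construction, not anything from the posts): if a word's boundary encloses a natural cluster, hearing the word lets you infer an unmentioned feature with a sharply peaked distribution; a gerrymandered boundary gives a wide, unhelpful one.

```python
# Toy illustration: each "thing" is a point in a 1-D thingspace,
# represented by its value on some feature. The numbers are made up.
import statistics

natural_cluster = [9.8, 10.1, 10.0, 9.9, 10.2]  # boundary drawn around genuinely similar things
arbitrary_set = [1.0, 42.0, 7.5, 99.0, 10.0]    # boundary drawn arbitrarily

for name, members in [("natural", natural_cluster), ("arbitrary", arbitrary_set)]:
    # If told only "it's a member of this set," your best guess for the
    # feature is the set mean, and your uncertainty is the spread.
    mean = statistics.mean(members)
    spread = statistics.stdev(members)
    print(f"{name}: guess {mean:.1f}, uncertainty +/- {spread:.1f}")
```

The "natural" set yields a tight estimate; the "arbitrary" set yields a guess so uncertain the word barely compressed anything, which I take to be the sense in which natural boundaries make the inference calculation simpler.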
He also says that it’s possible to make a “mistake” in defining the sets. Does this result from a failure to be consistent in our definitions, a failure to assign uncertainties correctly, or a failure to define the sets in a wise way?