Posts like this are deceptively hard to write, so I really appreciate how well done this is.
Providing reasons feels fractal, or Ship of Theseus-like, to me. The metaphor that comes to mind is something like this:
Imagine two martial artists sparring while you listen to a commentator describe the match over the radio. Two commentators would describe the same match differently. In principle, a fight between two novices and a fight between two masters might sound very similar if the commentary captures events at a low enough resolution. When trying to communicate, we're something like the commentator: looking directly at the mashing together of felt senses and using various mental moves to carve up the high-dimensional space differently. Groups of people settle into shared commentator norms to improve bandwidth, but those choices carry (usually unacknowledged) trade-offs. Reifying one particular abstraction level forces a lot of structure onto things, and that structure is a result of the choice of level as much as of the territory.
This is one of the reasons for Chapman's 'if a problem seems hard, the representation is probably wrong.' Different initial basis choices tend to push the complexity around to different parts of the model. And this process isn't always perverse. Often the whole point is that you really can shove the uncertainty somewhere it doesn't matter for your current purposes.
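To make the basis-choice point concrete, here's a minimal sketch (mine, not Chapman's; numpy and the particular two-tone signal are just illustrative assumptions). The same object is dense in one basis and sparse in another, so the apparent complexity lives in the representation as much as in the thing itself:

```python
import numpy as np

# A signal built from two pure tones. In the time basis, nearly all
# of the 1000 coefficients (samples) are nonzero.
t = np.linspace(0, 1, 1000, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
dense = np.count_nonzero(np.abs(signal) > 1e-9)

# In the Fourier basis, the same signal is two spikes: the "complexity"
# has been pushed into a corner that is cheap to describe.
spectrum = np.fft.rfft(signal)
sparse = np.count_nonzero(np.abs(spectrum) > 1e-6)

print(f"nonzero coefficients, time basis:    {dense}")   # nearly 1000
print(f"nonzero coefficients, Fourier basis: {sparse}")  # 2
```

Neither basis is "right"; the Fourier basis just happens to shove this signal's structure somewhere cheap to name, which is the sense in which choosing a representation does real work for you.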