Hmm, I think you must have misunderstood the above sentence, or we failed to get the correct point across. This is a statement about epistemology that I think is pretty fundamental, and is not something that one can choose not to do.
In a system of mutual understanding, I have a model of your model, and you have a model of my model, but nevertheless any prediction about the world is a result of one of our two models (which might have converged, or at the very least include parts of one another). You can have systems that generate predictions and policies and actions that are not understood by any individual (as is common in many large organizations), but that is the exact state you want to avoid in a small team where you can invest the cost to have everything be driven by things at least one person on the team understands.
The thing described above is something you get to do if you can invest a lot of resources into communication, not something you have to do if you don’t invest enough resources.
I get the sense that you don’t understand me here.
In a system of mutual understanding, I have a model of your model, and you have a model of my model, but nevertheless any prediction about the world is a result of one of our two models (which might have converged, or at the very least include parts of one another).
We can choose to live in a world where the model in my head is the same as the model in your head, and that this is common knowledge. In this world, you could think about a prediction being made by either the model in my head or the model in your head, but it makes more sense to think about it as being made by our model, the one that results from all the information we both have (just like the integer 3 in my head is the same number as the integer 3 in your head, not two numbers that happen to coincide). If I believed that this was possible, I wouldn’t talk about how official group models are going to be impoverished ‘common denominator’ models, or conclude a paragraph with a sentence like “Organizations don’t have models, people do.”
In this world, you could think about a prediction being made by either the model in my head or the model in your head, but it makes more sense to think about it as being made by our model …
I don’t think this actually makes sense. Models only make predictions when they’re instantiated, just as algorithms only generate output when run. And models can only be instantiated in someone’s head[1].
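To make the algorithm analogy concrete, here is a minimal, purely illustrative sketch (the rain_model and predict names are made up for the example): the model is just inert data until someone, or something, actually runs it.

```python
# Purely illustrative: a "model" as inert data vs. a model being instantiated and run.
# rain_model and predict() are hypothetical names invented for this sketch.

rain_model = {"prior": 0.3, "evidence_weight": 2.0}  # a description; by itself it predicts nothing

def predict(model, evidence):
    """Run (instantiate) the model on some evidence to produce a prediction."""
    odds = model["prior"] / (1 - model["prior"])
    odds *= model["evidence_weight"] ** evidence
    return odds / (1 + odds)

# Only the running instance yields a prediction; the dict sitting on disk does not.
print(predict(rain_model, evidence=1))  # ~0.46
```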
… the integer 3 in my head is the same number as the integer 3 in your head, not two numbers that happen to coincide …
This is a statement about philosophy of mathematics, and not exactly an uncontroversial one! As such, I hardly think it can support the sort of rhetorical weight you’re putting on it…
[1] Or, if the model is sufficiently formal, in a computer—but that is, of course, not the sort of model we’re discussing.
I think models can be run on computers, and I think people passing papers around can work as computers. I do think it’s possible to have an organization that does informational work that none of its human participants does. I do appreciate that such work is often quite secondary to the work that actual individuals do. But I think that if someone aggressively tried to build a system that would survive a “bad faith” human actor, it might be not just possible but feasible.
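To gesture at what I mean, here is a toy sketch (hypothetical, not a description of any real organization): three clerks passing around masked slips of paper compute the sum of their private numbers, even though no single clerk ever performs, or could perform, the whole computation.

```python
import random

def split_into_shares(value, n_shares):
    """Split a value into random shares that sum back to it."""
    shares = [random.randint(-100, 100) for _ in range(n_shares - 1)]
    shares.append(value - sum(shares))
    return shares

private_inputs = [4, 7, 2]          # each clerk's private number
n = len(private_inputs)

# Each clerk splits their number and hands one share to every clerk (including themselves).
shares = [split_into_shares(v, n) for v in private_inputs]

# Each clerk only ever sees one masked share from each colleague...
partial_sums = [sum(shares[i][j] for i in range(n)) for j in range(n)]

# ...yet pooling the partial sums recovers the correct total, 13.
print(sum(partial_sums))
```

The point isn’t the cryptography; it’s that the group-level informational work genuinely isn’t located in any one head.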
I would phrase it as: the number 3 in my head and the number 3 in your head both correspond to the number 3 “out there”, or to the “common social” number 3.
For example, my number 3 might serve as an input to my cached multiplication tables, while I don’t expect everyone else’s number 3 to play that role.
The old philosophical problem of whether the red I see is the same red that you see highlights how the reds could plausibly be incomparable, while the practical reality that color talk is possible is not in question.