I’m a big fan of collaborative truth-seeking, so lemme try to explain what distinction I’d be communicating with it:
In an idealized individual world, you would be individually truth-seeking. This would include observing information and using that information to update your model. You might also talk to others, which for various reasons (e.g. to help your allies, or to fit in with social norms about honesty and information-sharing, or …) might include telling them (to the best of your ability) the sorts of true, relevant information that you would usually use in your own decision-making.
However, the above scenario runs into some problems, mostly because the true, relevant information that you’d usually use in your own decision-making may have been simplified in various ways: for instance, rather than concerning your observations directly, it may concern latent-variable inferences that you’ve made on the basis of those observations. These latent variables are shaped by what you are able to observe and by the decisions you need to make, so it can be difficult for others to apply them. In particular:
You might have phrased your conclusions in language that is factually very incorrect but evocative for your purposes (e.g. I remember talking to someone who kept saying gay men have female brains, and it turned out that what he really meant was that gay men are psychologically feminine in a lot of ways).
You might be engaging in mind-projection (e.g. if it is tricky from your perspective, under your constraints, to precisely observe the differences between some entities, then you might just model them as being inherently the same, and when differences do pop up, you might assume them to be stochastic).
You might have implicit assumptions that you haven’t explicitly stated or precisified, which affect how you interpret things.
There’s also the issue that everyone involved might have far less evidence than could be collected if one went out and systematically collected it.
If one simply optimizes one’s model for one’s own purposes and then dumps the contents of that model into collective discourse, the above problems tend to make the discourse get stuck: nobody is prepared to deeply change anyone else’s mind, only to make minor corrections to others who are already engaging from basically the same perspective.
These problems don’t seem inevitable, though. If both parties agree to set aside a good chunk of time to dive in deep, they can work to fix them. For instance:
By clarifying what purposes one is applying the concepts to, and what one is trying to evoke, one can come up with some better definitions that make information more straightforwardly transferable.
By explicitly listing the areas of uncertainty, one can figure out what open questions there are, and what effects independence assumptions have, which can guide further investigations.
Similarly, explicating implicit assumptions helps with identifying cruxes, or communicating methods for obtaining new information, or better understanding the meaning of one’s claims.
When working together, the amount of evidence it is “worth” collecting will often be higher, because it influences more people. Furthermore, the evidence that does get collected can be of higher quality, because it is made robust against more concerns, for more perspectives.
Basically, collaborative truth-seeking involves modifying one’s map so that it becomes easier to resolve disputes by collecting likelihood ratios, and then going out to collect those likelihood ratios.
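As a toy illustration of what “resolving disputes by collecting likelihood ratios” looks like mechanically, here’s a minimal sketch; the names and numbers are made up purely for illustration, not taken from anywhere. Two parties state different prior odds, agree in advance on how strongly a planned observation would bear on the hypothesis, and then both multiply in the same likelihood ratio once the evidence is in.

```python
# Minimal sketch of odds-form Bayesian updating with an agreed likelihood ratio.
# All names and numbers below are hypothetical.

def update_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Odds-form Bayes: posterior odds = prior odds * P(E|H) / P(E|not-H)."""
    return prior_odds * likelihood_ratio

def to_prob(odds: float) -> float:
    """Convert odds for H into a probability of H."""
    return odds / (1 + odds)

# The parties disagree: prior odds of 3:1 vs 1:4 on hypothesis H.
odds_alice, odds_bob = 3.0, 0.25

# They agree up front that the planned observation is ten times more probable
# if H is true than if it is false, then go and observe it.
likelihood_ratio = 10.0
odds_alice = update_odds(odds_alice, likelihood_ratio)
odds_bob = update_odds(odds_bob, likelihood_ratio)

print(f"Alice: {to_prob(odds_alice):.2f}, Bob: {to_prob(odds_bob):.2f}")
# Shared, strong-enough evidence moves both toward the same side,
# even though they started from quite different priors.
```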
One thing that I’ve come to think is that a major factor to clarify is one’s “perspective” or “Cartesian frame”, as in: which sources of information one is getting, and which actions one can take within the subject matter. This perspective influences many of the other issues, and is therefore an efficient thing to talk about if one wants to understand them.
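To make the “perspective” idea slightly more concrete, here is a minimal sketch in the spirit of Cartesian frames: a perspective summarized by the options available to the agent, the options available to the environment, and the outcome each pairing produces. Everything here (the option names, the outcome function) is a hypothetical example, not a claim about any particular formalization.

```python
# A hypothetical, minimal sketch of a "perspective": which actions are
# available to you, which states of the world you cannot control, and what
# you would end up observing for each combination. All names are invented.

from itertools import product

agent_options = ["run_survey", "rely_on_anecdotes"]          # actions you can take
environment_options = ["effect_is_real", "effect_is_noise"]  # states outside your control

def outcome(action: str, state: str) -> str:
    """The observation you'd get, given your action and the hidden state."""
    if action == "run_survey":
        return f"clear_data_showing_{state}"
    return "ambiguous_impressions"  # anecdotes don't distinguish the two states

# Spelling out both frames like this makes it legible why two people with
# different option sets can draw different conclusions from "the same" world.
for action, state in product(agent_options, environment_options):
    print(f"{action} x {state} -> {outcome(action, state)}")
```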