A point I didn’t get to very clearly in the OP, that I’ll throw into the comments:
When shared endeavors are complicated, it often makes sense for them to coordinate internally via a shared set of ~“beliefs”, for much the same reason that organisms acquire beliefs in the first place (rather than simply learning lots of stimulus-response patterns or something).
This sometimes makes it useful for the various collaborators in a project to act on a common set of “as if” beliefs that are not their own individual beliefs.
I gave an example of this in the OP:
If my various timeslices are collaborating in writing a single email, it’s useful to somehow hold in mind, as a target, a single coherent notion of how I want to trade off between quality-of-email and time-cost-of-writing. Otherwise I leave value on the table.
The above was an example within me, across my subagents. But there are also examples held across sets of people, e.g. how much a given project needs money vs insights on problem X vs data on puzzle Y, and what the cruxes are that’ll let us update about that, and so on.
A “believing in” is basically a set of ~beliefs, usually differing from your base-level beliefs, that some portion of your effort or other resources is invested in taking as a premise.
(Except that sometimes people coordinate more easily via things that are more like goals or deontologies or whatever, and English uses the phrase “believing in” to mark investment in a set of ~beliefs, or in a set of ~goals, or in a set of ~deontologies.)