I generally agree with this argument, and I endorse and encourage further exploration with the eventual goal of being able to predict the meaning of a ritual from its form and vice versa. The definition of ritual presented in the conclusions and further discussion in 4.1 strike me as a very good start toward that goal.
My biggest concern with the argument as presented is the waffling between the extremely strong (too strong?) statement of immutable motivation presented in track 2.3 and repeated in 3.5 and the Conclusions, and the weaker treatment of that idea that other parts of the argument, such as 3.2, require. In the spirit of constructive feedback, I’ve included my full notes below.
2.3
one [model] of the world as it should be, that contains all our motives for action, and should not be vulnerable to any information about the current situation
Yes, goals and motives must be durable or we could not hold them up against (the current) reality and expect to win. But they must not be immutable. If our goals/motives could never change, we would not be able to abandon hopeless or lost causes, or to adopt new goals/motives as the situation evolves. This includes events where we succeed at a one-shot goal: it wouldn’t be very useful to keep striving for that which we have already achieved.
(e.g. I don’t have money—from which I might learn that I should not have money, if my desires adapted automatically to my experience)
You might, indeed, learn that very thing. It might feel like you don’t deserve to have money (or attention, cookies, sex, authority, &c.) or like having money has a subtle or overt wrongness about it. (Both could lead to self-sabotage behaviors such as ignoring obvious opportunities to increase your cashflow.) Or you could end up with a feeling that you can’t have enough money—no matter how wealthy you become it never seems like enough because “I have enough money” doesn’t rhyme with how the world (feels like it) works. This sort of learning seems to happen most in childhood, but with some work or sufficient pressure (such as trauma or radical resocialization) we can unlearn or newly-learn this sort of thing as adults.
By discounting automatic adaptation to experience and holding up a certain class of belief as invulnerable to changing conditions, this section appears to predict that growing beyond our childhood programming should be nigh impossible. In context, that would seem to undermine the utility of ritual beyond some critical life-stage when the programming solidifies. Perhaps this part of the argument is stated too strongly?
3.2
I think that motives like sex or self-interest need to be reinforced just as much as motives like justice and piety in order to keep influencing behavior throughout one’s life.
If such objects are not “vulnerable to any information about the current situation”, then why and how is their maintenance necessary?
To be clear, I’ve seen people give up on all those motives due to depression alone (and due to other circumstances), and I fully agree that they can (but not necessarily will) corrode in response to changing conditions; but the quoted statement appears to be inconsistent with the argument presented in section 2.3.
3.4
Example: the myth of scarcity evokes the possibility of a desire of economic abundance, the rites of commerce and consumption transform this theoretical desire into a visceral one in a market setting
reminds me of: https://www.lesswrong.com/posts/QAmY46pciqYRZYcyM/bureaucracy-is-a-world-of-magic
3.5
they are intrinsically write-protected and beyond falsification
Referenced paragraph explicitly agrees with 2.3.
Conclusions
all motives are learned and reinforced by ritual, symbolism and emotional anchors, with biology nudging much more than it imposes.
I think this could use some more support, especially as regards very small children. Infants exhibit apparently deliberate behaviors; whence the motivations if not biology? Red flag on the word “all”.
our own motives differ from other elements of knowledge only by the fact that they are held beyond falsification, like a special subset of our general mental model of the universe (how the world should be, versus how the world is)
I suggest a third category to moderate the other two: that of how the world could possibly be. This would resolve the issues in tracks 2 and 3 by providing a (difficult) way for our motives to yield to reality when too much conflict arises between them.
You homed in exactly on the point where I have theoretical doubts (I need to think more carefully about predictive theories and what they really imply), but I can tell you where I stand as of now.
My current idea to resolve this (and I will amend the main text, either to commit to this or at least to avoid contradictory phrasing) is to invoke multiagent models of the mind (see the toy sketch after this list):
An agent must indeed have immutable goals to function as an agent.
Our mind, on the other hand, is probably better modelled not as an agent but as an agora of agents with all sorts of different goals (the usual picture is a competition or a market, but why not cooperation and other interactions as well?).
This agora needs to pretend that it is a single agent in order to actually act sometimes. Thus, mind-wide goals are immutable for the duration of an “agentic burst”, for as long as a given agent is singled out at the agora—which could be the duration of a single gesture for very low-level goals, or the typical time span of a coherent self-image for the most high-level ones.
The way that mind-wide goals are changed is not by modifying an agent, but by (1) adding another agent to the agora, typically a predictive model of other people in a certain setting, and (2) providing evidence that this one is a better model of “myself”, at least in the current situation.
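To make the mechanics concrete, here is a toy sketch in Python, entirely my own illustration rather than anything from the main text; every name in it (Agent, Agora, fit, agentic_burst) is hypothetical. Per-agent goals are fixed at creation, an “agentic burst” singles out one agent to speak for the whole mind, and mind-wide goals shift only by adding a newcomer and accumulating evidence that it better predicts “myself”:

```python
import random
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A sub-agent whose goal is fixed at creation and never edited."""
    name: str
    goal: str           # immutable by convention: we never reassign it
    fit: float = 1.0    # accumulated evidence that this agent predicts "me"

    def predicts_me(self, observed_action: str) -> bool:
        # Stand-in for a real predictive model scoring how well this
        # agent's goal explains the observed behaviour.
        return observed_action == self.goal

@dataclass
class Agora:
    """A mind modelled as a market of agents rather than a single agent."""
    agents: list[Agent] = field(default_factory=list)

    def agentic_burst(self) -> Agent:
        # Single out one agent in proportion to its evidential fit; its
        # goal acts as the immutable mind-wide goal for the burst.
        return random.choices(self.agents, weights=[a.fit for a in self.agents])[0]

    def observe(self, action: str) -> None:
        # Goals are never modified in place; evidence only re-weights
        # which agent gets to speak for "myself".
        for a in self.agents:
            a.fit *= 1.5 if a.predicts_me(action) else 0.9

    def adopt(self, newcomer: Agent) -> None:
        # The only way mind-wide goals change: add another agent,
        # typically a predictive model of other people in some setting.
        self.agents.append(newcomer)

mind = Agora([Agent("miser", goal="save"), Agent("spender", goal="spend")])
mind.adopt(Agent("peer_model", goal="donate"))   # e.g. a ritual introduces it
for _ in range(10):
    mind.observe("donate")        # evidence that the newcomer predicts "me"
print(mind.agentic_burst().goal)  # now very likely "donate"
```

The multiplicative re-weighting is arbitrary; the only point is that nothing ever edits a goal in place, and change happens entirely at the level of which agent gets selected.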
As for biological drives, I’ll concede that the word “all” is probably untrue and I will retract it, though I do mean “the overwhelming majority as soon as the cultural learning machine kicks in”. This may be overcorrection in response to sociobiology (which itself was overcorrection in response to blank slate cultural relativism), but I want to try to commit to this and see how far it goes!
As far as my own view on this is concerned, I do think the blank slate view of human nature is mostly correct, and that ev psych/sociobiology is drastically wrong here due to harsh limits on how much information can be encoded in a genetic prior.