My Failed Situation/Action Belief System
Note: This is a description pieced together many, many years after my younger self subconsciously created it. This is part of my explanation of how I ended up me. I highly doubt all of this was as neatly defined as I present it to you here. Just know: The me in this post is me between the age of self-awareness and 17 years old. I am currently 25.
An action-based belief system asks what to do when given a specific scenario. The input is Perceived Reality and the output is an Action. Most of my old belief system was built from such beliefs. A quick example: if the stop light is red, stop before the intersection.
These beliefs form a network of really complicated chains of conditionals:
If the stop light is red
  And you are not stopped
    Stop in the next available space before the intersection

If the stop light is red
  And it is not useful to turn right at this intersection
  And you are not stopped
    Stop in the next available space before the intersection
Each node can be broken into more specific instructions if need be:
If the stop light is red
  And it is not useful to turn right at this intersection
    If turning right puts me on a path toward my destination
    And there is a dedicated turn lane
    Or no one is in the turn or go-straight lane
      Then turn right
      Else continue
  And you are not stopped
    Stop in the next available space before the intersection
Yada
Yada
Yada
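To make the shape of these chains concrete, here is a minimal sketch of the red-light node as actual code. Python is chosen purely for illustration; none of this was ever written down, and every predicate name here is hypothetical:

```python
# Purely illustrative: each predicate stands in for a perception
# my younger self would have checked. None of this was ever code.

def act_at_red_light(light_is_red, stopped, right_turn_useful,
                     dedicated_turn_lane, lanes_clear):
    """Return an Action for the red-light Situation."""
    if not light_is_red:
        return "continue"
    if right_turn_useful and (dedicated_turn_lane or lanes_clear):
        return "turn right"
    if not stopped:
        return "stop in the next available space before the intersection"
    return "stay stopped"

# Red light, not yet stopped, right turn not useful:
print(act_at_red_light(True, False, False, False, False))
```

The point is only the shape: perceptions go in, a single Action comes out.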
I did not sit down and decide that this was an optimal way to build a belief system. It just happened. My current best guess is that I spent most of my childhood trying to optimize my behavior to match my environment. And I did a fantastic job: I didn’t get in trouble; didn’t do drugs, smoke, drink, have sex, disobey my parents, or blaspheme God. A situation went into my matrix and an action came out.
The underlying motivation was a set of things I liked and things I didn’t like. The belief system adapted over time to accommodate enough scenarios to provide me with a relatively stress-free childhood. (I do not take all the credit for that; my parents are great.)
The next level of the system was the ability to abstract scenarios so I could apply the matrix to scenarios I had never encountered. Intersections that were new would not break the system. I could traverse unfamiliar environments and learn how to act quickly. The more I learned, the quicker I learned. It was great!
The problem with this belief system is that it has nothing to do with reality. Essentially, this system is the universal extrapolation of guessing the teacher’s password. If a problem was presented, I knew the answer. Because I could abstract these question/answer pairs, I knew all of the answers. “Reality” was a keyword that dropped into a particular area of the matrix. An action would appear with the right password and I would get my gold star.
That being said, this was a powerful system. It could simulate passwords for teachers I hadn’t even met. I would allow myself to daydream about hypothetical teachers asking questions that I expected around the corner. This implies that my Predictor beliefs were driving the whole engine. The Action beliefs were telling me how to act, but the Predictors were creating the actual situation/action matrix. Abstraction and extension of my experiences relied on my ability to see the future accurately. When I was surprised, I would run simulations until I found something that worked within my experience.
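A minimal sketch of that division of labor, with entirely hypothetical names (this is a toy, not a model of a mind): the Action beliefs are a lookup table, and the Predictor fills the table in whenever a lookup misses.

```python
# Hypothetical sketch: Action beliefs are a lookup table keyed by
# Situation; a Predictor fills the table whenever a lookup misses.

matrix = {"teacher asks for the color of the sky": "answer 'blue'"}

def predict_expected_action(situation):
    # Stand-in for daydream-simulating "what would a teacher want here?"
    # The real process drew on abstractions of past experience.
    return "whatever earns the gold star for: " + situation

def act(situation):
    if situation not in matrix:                 # surprise!
        matrix[situation] = predict_expected_action(situation)
    return matrix[situation]

print(act("teacher asks for the color of the sky"))  # known password
print(act("a teacher I have never met asks X"))      # simulated, then stored
```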
This worked wonders during childhood but now I have an entire belief system made out of correctly anticipating what other people expected from me. Oops. The day I pondered reality the whole system came crashing down. But that is a story for another day.
In form this post seems similar to many of Eliezer’s backwards-looking posts. However, it seems to be missing an interesting insight, or some other element that makes it relevant to the reader.
Answer the question: “So what?”
He is going through an ad-hoc ritualistic process of conversion to a new psychological tribe. He is publicly shaming his past self for its folly, which serves as a mild form of hazing. This is the most effective way of changing identity-related beliefs that I am aware of.
Agreed, this post doesn’t add value on its own. If this is going somewhere, that destination should have been combined with this.
The “so what” is that this style of belief system isn’t hinged on Reality. It basically follows the popular beliefs of the surrounding environment. It can detect inconsistencies in popular beliefs, but it doesn’t really have a way to get more True.
I suppose I didn’t actually say this… and I suppose the reason is that I didn’t want to list all of the details needed to fill in the gaps between what I have been reading in the Sequences and the “Aha!” moment that allowed me to see the core problem in a Situation/Action belief system. This is pure laziness on my part. Sorry (but not enough to fix it).
Would you please fix it? Listing at least some of the details filling in the gaps would be very helpful, more so than this post by itself.
Sure. It will take a while though.
Having a functional model of what will be approved by other people is very useful. I would hardly say that it “has nothing to do with reality.” I think much of the trauma of my own childhood would have been completely avoided if I had been able to pull that off. Alas! Pity my 9-year-old self trying to convince the other children they were wrong.
Sure, the functional model of predicting other people’s approval is great. The problem with what I did was organizing all of the beliefs by situation. These things aren’t tied to Reality; they are tied to perceptions. It would be the equivalent of claiming your belief system should be a Map of other people’s Maps of the Territory. When none of the people around you are terribly concerned about mapping the territory, your map won’t be either.
Building a worldview based on other people’s approval results in a worldview with all of the problems of those people. It makes a child’s life easier because a child doesn’t need to understand reality. At least, not the way a non-child does.
What’s especially odd about this post is that MrHen hasn’t replied yet. He always replies to all his comments, and has written comments elsewhere since posting this. Do you think this is some kind of experiment? If so, what’s the experiment?
I bided my time on this topic for two reasons:
First, I have noticed that my posts go up in karma very quickly and then drop slowly. This post spiked at +7 and is now at +3. This happened on my last post as well, but the drop coincided with me commenting on the post. I decided to wait longer this time and see if the same pattern happened. It did. My theory is that I generally post in the afternoon and the afternoon readers vote me up. The evening readers then come through and vote me down. I personally wouldn’t consider this an experiment, but I guess I cannot deny that it is strange behavior.
Second, I spent significantly less time editing and tuning this post. I have very little invested in this post and don’t really care what happens to it. I didn’t know how to start this topic or say what I wanted to say without writing at least twice what I did here. As such, I didn’t really expect much in the comments. I am about to go through and reply to everything but I doubt the conversation will be that interesting.
FYI, I never posted a comment in Easy Predictor Tests.
I see you’ve located the situation in your matrix.
I will make the prediction “if this post is some kind of experiment then the rating of this thread will reduce by at least 4 points within one day of the moment MrHen announces it”.
Eh, no conspiracies here. It’s just a lazy post.
EDIT: For the historical record, when I started commenting, the post was at +3.
I’m also confused and possibly missing the point. You’ve described the development of an apparently useful, functional algorithm for how to act in the world. I don’t see the problem with such a system; don’t we all have one?
I also don’t see what this has to do with beliefs. This is about how to act.
The system was defining situation/action pairs as beliefs. As in, “Given X, I should Y.” Should, in this case, holds all of the weight of believing in gravity. This wordage is great, but when you start applying the pattern to mundane tasks such as, “I should pour milk after cereal,” you can spin off into a world that has nothing to do with Reality. “I should blork” is just as valid, because nothing requires these beliefs to satisfy some code of “proper beliefs.” If I can convince myself that blorking is going to make me happy, I will firmly believe that I should blork.
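A minimal sketch of the point (blork included; everything here is hypothetical): nothing at write time distinguishes a sane should-belief from a nonsense one.

```python
# Nothing validates a should-belief when it is stored.
shoulds = {}

def believe(situation, action):
    shoulds[situation] = action   # accepted unconditionally

believe("cereal is in the bowl", "pour milk")   # mundane
believe("any situation at all", "blork")        # nonsense, stored just as firmly

print(shoulds["any situation at all"])          # -> blork
```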
This idea of beliefs flies completely against the concepts promoted in The Simple Truth.
I think an important point missing from your post is that this is how many (most?) people model the world. ‘Causality’ doesn’t necessarily enter into most people’s computation of true and false. It would be nice to see this idea expanded with examples of how other people are using this model, why it gives them the opinions (output) that it does, and how we can begin to approach reasoning with people who model the world in this way.
Why do you think this? I am not disagreeing, I am just wondering if you had any information I don’t. :)
The model you present seems to explain a lot of human behavior, though I admit it might just be broad enough to explain anything (which is why I was interested to see it applied and tested). There have been comments referencing the idea that many people don’t reason or think but just do, and the world appears magical to them. Your model does seem to explain how these people can get by in the world without much need for thinking: just green-go, red-stop. If you really just meant to model yourself, that is fine, but not as interesting to me as the more general idea.
I agree. This seems to give much more accurate predictions of most peoples’ actual actions than modeling them as consequentialists or deontologists. (The latter is close to this, but fails to account for how people fail to generalize rules across contexts.)
This model works extremely well for predicting other people’s actions. Your point about it being broad is true. People probably shortcut decisions into behavior patterns and habits after a while. I doubt a large number of them do it consciously.
I think the model is applicable to more than me. The underlying point was that some people (such as myself) use this as their belief system. I don’t know how often people do that or if it is common.
In other words, this model can explain and predict people’s actions well but I don’t know how often it ends up absorbing the role of those people’s belief system.
I agree with Blueberry. This reads like a reflective account of how I (and many others, I’d bet) have always learned and navigated the regularities in my life. Why would you have fused this kind of procedural knowledge with belief? Did you focus on it so hard that you forgot to think about truth? This is the part where I feel like I’m missing something. In my case, I developed efficient action systems in order to free up mental cycles, precisely so that I would have as many free cycles as possible to think about computer programming, reality, and truth.
No. The problem is that when I thought about truth, an action popped out. It only mattered when a scenario called for The Truth. Then I entered the Matrix looking for actions and passwords relating to The Truth. The Truth was a valid statement relative to a scenario or question. The statement “The sky is blue” was true in the scenario of “Being asked for the color of the sky.”
This was abstracted to allow the color that I saw in the sky to apply to other objects I saw in life. I could look at the sky, see the color, associate the Action “Label the color blue” with the Situation “I need to label the color of the sky” and reuse the association for the Situation “I need to label the color of the ocean.”
This has nothing to do with Reality. If I grew up in a world where the sky was never visible I would still be happy as a clam calling the sky blue (or green) because this was the correct action. If you phrased the question in terms of a prediction (“What do you predict for the color of the sky?”) it would be internally translated into the Situation “I need a prediction for the color of the sky.” I would look up the right answer relative to your expectations and return the result. The answer would have nothing to do with me predicting the color of the sky. It had everything to do with my expectation of you predicting the color of the sky.
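In code terms, again purely illustrative: the answer is keyed by the question asked, never by an observation, so one stored entry serves for the sky, the ocean, or a sky I have never seen.

```python
# Illustrative only: answers are keyed by the question asked,
# not by any observation of the thing asked about.

learned_labels = {"color of the sky": "blue"}   # told once, stored forever

def answer(question):
    # "What do you predict for the color of the sky?" is internally
    # rewritten as this lookup; no actual predicting happens.
    return learned_labels[question]

def abstract(known, new):
    # Reuse the sky's answer for the ocean: abstraction without observation.
    learned_labels[new] = learned_labels[known]

abstract("color of the sky", "color of the ocean")
print(answer("color of the ocean"))   # "blue", without ever looking at it
```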
Would you say this behavior was primarily driven by other-approval-seeking (as opposed to achievement for achievement’s sake)?
I don’t know. It is hard for me to remember the driving reasons why. I don’t think approval was really the target so much as low stress was. I would rather be left alone than praised a whole bunch.
“Achievement” really doesn’t seem to describe my younger self well either. “Achievement” is an action without a matching scenario. As a description, it would be too vague to be of much use. Specifically, the action “Achieve a goal” is impossible to perform without more information.
I like this idea of the situation/action matrix. You seem to be complaining that it doesn’t support goal-directed behavior—or am I reading too much into your post?
The matrix is useful, but its entries aren’t valid predictors. You cannot test them; they only go one way. Predictors are needed to adjust the matrix, which raises the question, “Why the matrix?” If the answer is “it’s useful,” that’s fine, but “useful” doesn’t make the best system of beliefs.
I think he is complaining that it doesn’t support epistemically rational beliefs which are assumed to be intrinsically valued for ideological reasons. “Doesn’t support goal-directed behavior” would seem to be the answer to the next question, “Why should I, for practical purposes, care whether my beliefs match reality?”
Could you give an example of an abstract matrix?
It sounds like you were essentially codifying common sense as it applied to situations you encountered frequently, which makes sense from a willpower-conservation point of view because doing things you’ve planned in advance requires less willpower.
An abstract scenario would be the difference between, “my teacher is yelling at me” and “someone is yelling at me.” Further abstractions would include, “someone is upset with me” and “someone has an unfulfilled expectation of me.”
Sure. It doesn’t make sense as a belief system.
By abstract matrix I think I meant a matrix of abstract scenarios. Also, I think maybe “decision tree”, “flowchart”, or “rule set” is a more commonly used term for what you describe than “matrix”; am I understanding you correctly?
How did your matrix become a belief system?
Ohh...
Yeah, those terms work. “Matrix” fits better with how I visualize it in my head. I think “linked list” or “web” would be the most accurate. The problem I have with flowchart is that a flowchart is too organized. The actual process for taking a scenario and providing an action is much more organic. When a scenario presents itself, I run the high-level, abstract scenario through the system and respond appropriately. If the scenario doesn’t change or gets worse, I need more detail and drop into a different layer. While you could describe the whole thing as a flowchart, it probably wouldn’t be the most efficient translation of what I am talking about.
In addition, an actual real life event is going to have dozens of active scenarios. I need to be able to access the matrix in multiple places at once. The process that actually acts is separate from this system and merely accesses it.
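A toy sketch of the shape I mean, using the yelling example from earlier (the nodes and links are made up for illustration): each scenario node carries an action plus a link to a more detailed layer to drop into if the first action fails, and several scenarios can be queried at once.

```python
# Toy sketch of the "web": each scenario node carries an action plus a
# link to a more detailed node to drop into if the first action fails.

web = {
    "someone has an unfulfilled expectation of me": {
        "action": "find out what they expected",
        "detail": "someone is yelling at me",
    },
    "someone is yelling at me": {
        "action": "apologize",
        "detail": "my teacher is yelling at me",
    },
    "my teacher is yelling at me": {
        "action": "ask what I did wrong",
        "detail": None,
    },
}

def respond(scenario, failed=False):
    node = web[scenario]
    if failed and node["detail"]:
        return respond(node["detail"])   # drop into a more specific layer
    return node["action"]

# A real event activates several scenarios at once.
for s in ["someone is yelling at me",
          "someone has an unfulfilled expectation of me"]:
    print(s, "->", respond(s))

print(respond("someone is yelling at me", failed=True))  # one layer deeper
```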
I don’t know. My guess is that it started purely as a way to remember feedback loops that increased happiness and reduced stress. As I grew older and was taught about beliefs, ethics, decisions, and responsibilities, I just translated those terms into what I was using to govern my actions. When it came time to organize my beliefs and thoughts, I started categorizing things by scenario, keeping track of the actions, and drawing relationships between the various parts.
If someone asked me, “What do you believe about gravity?” I would look up scenarios and actions labeled “Gravity”. This would return facts about gravity in the form of answers and there would be soft relations in the matrix to scenarios dealing with falling, balance, and various areas of physics.
These relationships would be another way to describe what I was calling an abstract scenario. The relationships themselves could be abstracted with more relationships, labels, and commentary.
I thought of an example that may help clarify what I was trying to say. Someone who is completely blind is unable to recognize the difference between a $20 bill and a $5 bill. They will fold the bills in a pattern to remind themselves which bill is which, but they need someone to tell them what it was first. It is impossible for them to use their tactile senses to understand Reality.
Likewise, if someone who is blind knows that the grass is green, they can tell you what color the grass is. They will never, ever be able to verify it in any other way than just listening to people talk about the grass being green. They can only repeat what people expect from them and call that the right answer. It is the right answer, but it isn’t a belief attached to Reality.
The Situation/Action belief system is blind in a similar way. Once something tells it what Reality is, it can regurgitate the answer forever. It can take a $20 bill and remember it. But if the person is lied to, a $5 bill will be folded as if it were a $20 bill. There is no way to verify the belief through Reality; the connection just isn’t being made. If there is an entire society of people who think the grass is blue, the Situation/Action belief system will think the grass is blue and never once wonder what went wrong. It keeps getting the passwords right; what more is there to want?
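As a sketch, with hypothetical names: the only input channel is being told, so a lie at write time is undetectable at read time.

```python
# Sketch: the only input channel is being told. Nothing ever compares
# a stored fold pattern against the bill itself.

folds = {}   # fold pattern -> denomination, as told by someone else

def learn(fold, denomination_as_told):
    folds[fold] = denomination_as_told   # trusted, never verified

def spend(fold):
    return folds[fold]                   # regurgitated forever

learn("corner fold", "$20")   # but the teller lied: it is really a $5
print(spend("corner fold"))   # -> $20; the system has no way to notice
```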
So do you think there’s a human system which includes a closer approximation of reality? (whatever that means)
I think my question is related to the above: what’s wrong with stopping at red lights? I do it, and it keeps me alive.
(It literally just occurred to me that you might have been using ‘red lights’ as a metaphor for reactions from people you don’t like. So that you mean that you’ve learned to stop or change direction if you get these signals. Was this what you meant?)
There is nothing wrong with stopping at red lights. There is something wrong with believing you should stop at red lights simply because you think you should. The belief should be anchored somewhere.
A major clarification that may help: the matrix does not provide a reason to act in any given scenario. It just remembers how to act in a given scenario. There is no “updating” in the sense of the belief being accurate or inaccurate. A belief can change or grow, but it isn’t correct or wrong. And even though these beliefs are neither accurate nor inaccurate, right nor wrong, the system still considers them beliefs.
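One last toy contrast (hypothetical code, not a claim about any real implementation): the matrix supports change, but change means overwrite or extension, never correction against anything.

```python
# The only "update" the matrix supports is overwrite or extension.
matrix = {"red light": "stop"}
matrix["red light"] = "stop smoothly"   # the belief changed and grew...
matrix["blue light"] = "honk"           # ...and can grow arbitrarily.

# What the system lacks is any operation shaped like:
#   def accuracy(situation, action, reality): ...
# Beliefs are never scored against Reality, only replaced.
print(matrix)
```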
No. I meant traffic signals.
What do you mean by human system? I think The Simple Truth provides a much better system for beliefs.