I’m also confused and possibly missing the point. You’ve described the development of an apparently useful, functional algorithm for how to act in the world. I don’t see the problem with such a system; don’t we all have one?
I also don’t see what this has to do with beliefs. This is about how to act.
The system was defining situation/action pairs as beliefs. As in, “Given X, I should Y.” “Should,” in this case, holds all of the weight of believing in gravity. The wording sounds fine, but when you start applying the pattern to mundane tasks such as “I should pour milk after cereal,” you can spin off into a world that has nothing to do with Reality. “I should blork” is just as valid, because nothing requires these beliefs to satisfy any code of “proper beliefs.” If I can convince myself that blorking is going to make me happy, I will firmly believe that I should blork.
This idea of beliefs flies completely against the concepts promoted in The Simple Truth.
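To make the pattern concrete, here is a minimal sketch (purely my own illustration; the entries are made up) of situation/action pairs stored as beliefs, with nothing gating what gets in:

```python
# Situation/action pairs held as "beliefs": a bare lookup table.
beliefs = {}  # situation -> action

def learn(situation, action):
    # Nothing here asks whether the pairing tracks Reality; any pairing
    # that once seemed to work is stored with the full weight of a "should."
    beliefs[situation] = action

def act(situation):
    return beliefs.get(situation)

learn("poured cereal", "pour milk")
learn("want to be happy", "blork")  # installs just as firmly as the first entry

print(act("want to be happy"))  # -> blork
```

Nothing in the table distinguishes “pour milk” from “blork”; both are retrieved with equal confidence, which is exactly the problem.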
I think an important point missing from your post is that this is how many (most?) people model the world. ‘Causality’ doesn’t necessarily enter into most people’s computation of true and false. It would be nice to see this idea expanded with examples of how other people are using this model, why it gives them the opinions (output) that it does, and how we can begin to approach reasoning with people who model the world in this way.
Why do you think this? I am not disagreeing; I am just wondering whether you have any information I don’t. :)
The model you present seems to explain a lot of human behavior, though I admit it might just be broad enough to explain anything (which is why I was interested to see it applied and tested). There have been comments referencing the idea that many people don’t reason or think but just do, and the world appears magical to them. Your model does seem to explain how these people can get by in the world without much need for thinking: just green-go, red-stop. If you really just meant to model yourself, that is fine, but not as interesting to me as the more general idea.
I agree. This seems to give much more accurate predictions of most people’s actual actions than modeling them as consequentialists or deontologists. (The latter is close to this, but fails to account for how people fail to generalize rules across contexts.)
This model works extremely well for predicting other people’s actions. Your point about it being broad is true. People probably shortcut decisions into behavior patterns and habits after a while. I doubt a large number of them do it consciously.
I think the model is applicable to more than me. The underlying point was that some people (such as myself) use this as their belief system; I don’t know how common that is.
In other words, this model can explain and predict people’s actions well, but I don’t know how often it ends up absorbing the role of those people’s belief system.
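As a rough sketch of the shortcutting idea (my own illustration, borrowing the green-go/red-stop framing from above): deliberate once, cache the result, and let the cached pattern fire on every later match.

```python
habits = {}  # situation -> cached action

def deliberate(situation):
    # Stand-in for slow, conscious reasoning; a hypothetical rule for the example.
    return "stop" if "red" in situation else "go"

def respond(situation):
    # The first encounter deliberates; afterwards the habit fires with no
    # thinking at all, matching the "just do" behavior described above.
    if situation not in habits:
        habits[situation] = deliberate(situation)
    return habits[situation]

print(respond("red light"))  # deliberates once -> stop
print(respond("red light"))  # habit fires -> stop
```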
I agree with Blueberry. This reads like a reflective account of how I (and many others, I’d bet) have always learned and navigated the regularities in my life. Why would you have fused this kind of procedural knowledge with belief? Did you focus on it so hard that you forgot to think about truth? This is the part where I feel like I’m missing something. In my case, I developed efficient action systems in order to free up mental cycles, precisely so that I would have as many free cycles as possible to think about computer programming, reality, and truth.
No. The problem is that when I thought about truth, an action popped out. Truth only mattered when a scenario called for The Truth. Then I entered the Matrix looking for actions and passwords relating to The Truth. The Truth was a valid statement relative to a scenario or question: “The sky is blue” was true in the scenario of “Being asked for the color of the sky.”
This was abstracted to allow the color I saw in the sky to apply to other objects I saw in life. I could look at the sky, see the color, associate the Action “Label the color blue” with the Situation “I need to label the color of the sky,” and reuse the association for the Situation “I need to label the color of the ocean.”
This has nothing to do with Reality. If I had grown up in a world where the sky was never visible, I would still be happy as a clam calling the sky blue (or green), because that was the correct action. If you phrased the question in terms of a prediction (“What do you predict for the color of the sky?”), it would be internally translated into the Situation “I need a prediction for the color of the sky.” I would look up the right answer relative to your expectations and return the result. The answer would have nothing to do with me predicting the color of the sky; it would have everything to do with my expectation of your prediction.
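A minimal sketch of that lookup (illustrative only; all the keys are made up), including the reuse of a label across Situations:

```python
# Scenario -> correct action, learned from whatever answers were accepted.
responses = {"label the color of the sky": "blue"}

def abstract(known_situation, new_situation):
    # Reuse the action from a familiar Situation for a similar one; whether
    # the answer tracks Reality never enters the computation.
    responses[new_situation] = responses[known_situation]

def answer(situation):
    # Returns the expected "correct action" for the scenario. A prediction
    # request is just another scenario key, answered relative to the asker's
    # expectations, not by actually predicting anything.
    return responses.get(situation)

abstract("label the color of the sky", "label the color of the ocean")
print(answer("label the color of the ocean"))  # -> blue
```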
Would you say this behavior was primarily driven by other-approval-seeking (as opposed to achievement for achievement’s sake)?
I don’t know. It is hard for me to remember the driving reasons why. I don’t think approval was really the target so much as low stress was. I would rather be left alone than praised a whole bunch.
“Achievement” really doesn’t seem to describe my younger self well either. “Achievement” is an action without a matching scenario. As a description, it would be too vague to be of much use. Specifically, the action “Achieve a goal” is impossible to perform without more information.
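In the lookup terms above (again just a sketch, with made-up entries), “achieve a goal” is an action that no Situation points at, so the system can never select it:

```python
beliefs = {"asked for the sky's color": "say blue"}  # situation -> action

# No Situation key maps to "achieve a goal", so the lookup that drives
# all behavior never triggers it; the instruction is unusable as-is.
triggers = [s for s, a in beliefs.items() if a == "achieve a goal"]
print(triggers)  # -> [] : no scenario ever calls for it
```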