Welcome!
Short introduction to navigation: Clicking the “Discussion” link at the top of the page will show you (most of) the new articles. If you write comments there, you are most likely to receive replies.
If there is something called “Open Thread”, that pretty much means: feel free to ask or say anything (as long as it is at least somewhat relevant to this website, but even that is not always necessary). Also, posting in the most recent open thread will give you more visitors, and thus more replies, than posting in a three-month-old article. As of today, the most recent open thread is here, but tomorrow a new one will be started, and it may be strategic to wait.
humans will always choose to do the action which they think will bring them most pleasure/least pain. … and quite often we get it totally wrong.

Well, if you put it this way, it is almost impossible to find a counterexample, because for literally any situation where “a person X did Y”, you can say “that’s because X somehow believed Y would bring them the most pleasure / least pain”, and even if I say “but in this specific situation that doesn’t make any sense”, you can say “well, this is one of those situations where X was totally wrong”.
A better approach than “can you find a situation that my theory cannot explain?” is “can you find a situation that my theory cannot predict?” The difference between explanation and prediction is that an explanation is given after the fact, when you already know which outcome you need to explain, while a prediction is made before the fact. For example, if the Democrats win the next American elections, I can explain to you why. However, if the Republicans win, I can also explain why. But if you ask me to predict who will win, then I am in trouble, because there my verbal skills cannot save me.
Analogously, if we have the situation “Joe spends his afternoon reading Reddit”, it is easy to explain: Joe believed that reading Reddit would bring him the most pleasure. But if we have the situation “Joe decided not to read Reddit, and instead learned a new programming language”, it is also easy to explain: Joe believed that learning would bring him the most pleasure in the long term. The problem is when Joe is starting his computer right now, and your theory has to predict whether he will read Reddit (as he usually does, but not always), or whether he will learn a new programming language (which he has been procrastinating on for a long time, but today he feels slightly more motivated than usual). What will Joe do? That is the difficult question. But once he does something, it will be extremely easy to explain in hindsight why he chose this option instead of the other one.
More info here: Making beliefs pay rent. But the general idea is: if your theory can explain anything, but predict nothing, what exactly is the point of having such a theory?
Ahh, I see. Thank you! This is exactly what I was looking for! :) Back to thinking. :)
Hmm… I’ve given it some thought (more to come later, for sure), but there’s already one thing I’ve found this theory useful for. There have been times when I’ve caught myself doing/desiring things that I should not do/desire. I then asked myself the question: why do I do/desire this thing? What pleasure/pain motivates me here? The answers were not immediately available, but after some time doing introspection, I came up with them. After that it was a simple matter of changing these motivators to rid myself of the unwanted behavior.
So… yes, I think it can be used for predicting stuff (like, “if I change X, then behavior Y will also change”). Now, the information needed for these predictions is hard to come by (but not impossible!). Essentially you need to know/guess what a person is thinking/feeling. But once you have that, you can predict what they will do and how to influence them.
What’s your opinion on this?
After that it was a simple matter of changing these motivators to rid myself of the unwanted behavior.

What you describe as “simple” here is extremely difficult for me. (There are many possible explanations for why that is, and I am not sure which of them is the correct one.) Generally, what you described seems like a part of the correct explanation… but there are other parts, such as biology, environment, etc.
For example, if my goal is to exercise regularly, I should a) think about my goals, imagine the consequences, think about the costs, and resolve the internal conflicts… but also b) do some strategic activities, such as finding out where the nearest gym is, or maybe buying some exercise equipment for home, and c) check my health to make sure there is no biological problem, such as anemia, making me chronically tired.
An alternative explanation I can think of is the placebo effect. It’s possible that your behaviour Y changed after changing X because you believed that behaviour Y would change, especially as you wanted to change those behaviours in the first place.
Also, even if this was not due to the placebo effect, it is only evidence about how your mind works. Other people’s minds might work differently. (And I suspect it is also quite weak as evidence goes, though I can’t seem to articulate why I think so. At the very least, I think you’d need a very big sample of behaviour changes, without forgetting to also consider the failed attempts at changing your behaviour.)