The first thing anyone must do, before any other self-improvement is even logically possible, is to do something about self-deception; otherwise any self-improvement attempt degenerates into a form of wireheading: you will deceive yourself about what the self-improvement achieves, and you will end up improving only your ability to self-deceive.
I would suggest that the first step is dropping this belief in silver-bullet self-improvement, burying it, and putting a stake through its heart. If you look at the accomplishments of people who couldn't simply improve their subjective performance by improving their ability to self-deceive (in technical fields, for example), about the only type of self-improvement you see is training, on complicated problems, with tests.
Self-deception is a cognitive process that we are reward/punishment conditioned into, internally, when doing free-form thought that is not externally verified. E.g. a Christian feels anxiety when considering arguments against Christianity, and feels rewarded when coming up with arguments for why Christianity is right, and so gets conditioned to feel good about invalid approaches to reasoning and bad about valid ones. Quitting religion won't reverse this conditioning. The conditioning could perhaps be reversed by studying mathematics for a long time and doing the exercises (where self-deception gets punished, because it results in failures), or by some similar occupation where there is reliable external verification of correctness.
edit: Sorry, Christianity is only meant as an example. This applies to any other ill-founded belief, religious or otherwise. The same can also happen with forms of atheism that include belief in the validity of an invalid argument against the existence of God. Christianity is simply the world's most popular religion at the moment, and by far the most popular in developed countries, so it is an important case.
Your basic point about being careful about self-deception is made crappy by your rant about Christians and your weird veiled accusations.
Regarding Christians, that was a common example. There are many Christian deconverts here; don't you recall feeling a tingle of fear and anxiety as you explored the possibility that your previous life was wasted on a wrong idea? That is an example of negative reinforcement. If I can't bring up the #1 world religion as an example of religiousness, then what can I bring up?
He also has a strange obsession with “conditioning”, which he appears to think is the fundamental mechanism of the brain.
Whenever positive and negative feelings correlate with behaviour, you get conditioning. Every time. It really is that fundamental. There are many other equally fundamental mechanisms, of course, but they act in addition to conditioning, not as a replacement for it.
When someone builds a working model of that hypothesis as foundational psychology, in sufficient detail to refute alternative hypotheses (such as that people act to maximise utility, or that they act to achieve their purposes), I'll consider taking it seriously. I do not believe this has ever been done.
There have been a multitude of experiments, on humans and other animals, demonstrating that conditioning works. If you get a mild electric shock whenever you touch a specific odd-shaped object, it will become difficult for you to touch that object even when you are fully consciously aware that the shocking circuit is disconnected, and you will experience aversion to touching it (i.e. you will act as if picking it up carried an extra cost compared to other objects, even though you are fully aware you won't be shocked). This is a repeatable scientific finding with broad ramifications, and it is stable over a multitude of positive and negative reinforcements.
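To make the “extra cost” reading concrete, here is a minimal sketch of that kind of conditioning, using a Rescorla–Wagner-style update. The learning rates, trial counts, and the slower extinction rate are illustrative assumptions on my part, not values from any particular experiment; the point is only that the learned aversion behaves like a cost term that persists long after the shock stops.

```python
# Minimal Rescorla-Wagner-style sketch of aversive conditioning.
# All constants are illustrative assumptions, not experimental values.

def condition(trials_shocked, trials_unshocked,
              lr_acquire=0.3, lr_extinguish=0.05, shock_cost=1.0):
    """Return the learned 'extra cost' of touching the object over time."""
    aversion = 0.0
    history = []
    # Acquisition: each touch is paired with a mild shock.
    for _ in range(trials_shocked):
        aversion += lr_acquire * (shock_cost - aversion)
        history.append(aversion)
    # Extinction: the circuit is disconnected, so no shock follows.
    # The aversion decays, but (by assumption here) far more slowly
    # than it was acquired, mirroring the observed persistence.
    for _ in range(trials_unshocked):
        aversion += lr_extinguish * (0.0 - aversion)
        history.append(aversion)
    return history

history = condition(trials_shocked=10, trials_unshocked=10)
print(f"aversion right after the shocks: {history[9]:.2f}")
print(f"aversion 10 touches after the circuit is disconnected: {history[-1]:.2f}")
```

The conditioned value acts like a cost in the choice, independently of the conscious belief about whether the circuit is live.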
Regarding whether people act to “maximize utility”: that is trivially falsified by any experiment in which people demonstrably make the wrong choice (e.g. failing to switch in Monty Hall). People do not act so as to “maximize utility”, and that is why people need training to better achieve their goals. What you listed are not “alternative hypotheses”; they are normative statements about what people should do under particular moral philosophies.
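In case the Monty Hall claim sounds wrong, here is a quick simulation sketch (the function name and parameters are mine; it assumes the standard rules where the host opens all but one of the unchosen doors and never reveals the prize):

```python
import random

def monty_hall(n_doors=3, switch=True, trials=100_000):
    """Estimated win rate with one prize behind n_doors doors,
    where the host opens all but one of the unchosen doors (never the prize)."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(n_doors)
        pick = random.randrange(n_doors)
        # After the host's reveals, exactly one other door stays closed;
        # it hides the prize unless the first pick was already correct.
        if switch:
            wins += pick != prize   # switching wins iff the first pick was wrong
        else:
            wins += pick == prize   # staying wins iff the first pick was right
    return wins / trials

for n in (3, 10):
    print(f"{n} doors: stay={monty_hall(n, switch=False):.3f}, "
          f"switch={monty_hall(n, switch=True):.3f}")
# Expected: staying wins about 1/n of the time, switching about (n-1)/n.
```

With 3 doors switching wins about 2/3 of the time; the n-door generalization makes the asymmetry even starker.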
Thanks for mentioning the Monty Hall problem. I hadn’t heard of it before and I found it incredibly interesting.
When I was a professor, I ridiculed (over beer in a bar) graduate students who were telling me it made sense to switch. One student came up with a clever demonstration, using the digits in the serial number of a dollar bill as a random number generator, in which he asked me about switching in the 10-door generalization of the Monty Hall problem. With 10 doors and only one prize, it quickly became apparent that I had my head up my arse.
I learned something that day. Two things, if you count the Monty Hall problem. The other: if I am arrogant and obnoxious in my beliefs, I will motivate smart people who disagree with me to figure out how to convince me of my error. Of course, there are no karma points in bars (or at least they are not as obvious), so I did not learn how dangerous such an otherwise productive path is to your karma.
Agreed that the reputation costs of being seen as arrogant and obnoxious are not as immediately obvious in some communities as in others.
I don’t think his objection was that conditioning isn’t a real thing that’s really real, but that it’s not a basis for a fully-descriptive theory of psychological behaviour.
That said, I don’t think you were suggesting it was in the first place.
FWIW, I do think it isn’t a real thing that’s really real, but I’m not all that interested in a prolonged discussion on the matter.
Thank you for the clarification.
This is a repeatable scientific finding with broad ramifications

More like “suggestive experiments that people read far too much into”.
Regarding whether people act to “maximize utility”: that is trivially falsified by any experiment in which people demonstrably make the wrong choice

Talk to Tim Tyler about that. He seems to be as convinced of utility-maximising (as a description of what people actually do, not a norm) as you are of conditioning. There may be others here who believe the same and have better arguments than he does. I think they're all wrong, so I can't argue on their behalf, but I will point out the obvious refutation that they might offer, viz. that utility is subjective.
It's pretty amusing how everybody has their own favorite simplified model, which they overextend in an attempt to explain all human behavior.
Brains are hierarchical backpropagating neural networks! No, they’re Bayesian networks! No, they’re Goedelian pattern recognizers! No, the mind is an algorithm which optimizes for utility! No, it maximizes for reproductive fitness! No, it maximizes for perceptual control! No, it maximizes for status in the tribe!
And then casually applies insights from their own introspection about their own mind to other people, and assumes that everybody else is wrong rather than, perhaps, different.
I've made the most progress in “intervening in myself” since I stopped believing that there was some single, simple, fundamental rule underlying all psychology and behavior.
edit: I'm not trying to make fun of anyone in particular in this conversation; I was just ragging on the tendency of folks to confuse their map with their map of their map.
It's not based on self-observation of some kind, or on a simple fundamental rule underlying all behaviour (the “all” part is an obvious strawman brought in by RichardKennaway). However, conditioning does affect any behaviour, as far as experiments show.
If you are unable to see the difference between “gravity affects every massive object” and “gravitation is a fundamental rule explaining the entire universe”, then nothing can help you.
I'm not disagreeing with you. I'm merely pointing out that humans fall too much in love with their pet idea du jour.
Actually, I'm not entirely sure why I'm being downvoted; perhaps my comment came off as snarky.
edit: after rereading it, it looks like I was attacking you, when really I was just expressing frustration at an entirely different group of people who write books attempting to convince other people that they have the One True Secret of Life.
I'm not disagreeing with you. I'm merely pointing out that humans fall too much in love with their pet idea du jour.

I agree. BTW, I can't downvote anyone.
I'm not trying to explain everything with conditioning and conditioning alone, though; all I am saying is that we should expect self-deception to get reinforced, as it results in internal reward (and avoiding self-deception easily results in punishment). Regarding the voting, I was also down to −7 on this: http://lesswrong.com/lw/ai9/how_do_you_notice_when_youre_rationalizing/5y3w so I do not care a whole lot.
Why did you change usernames?
Well, I wanted to leave because this is generally a waste of time; still, not everyone here is stupid (the non-stupid list includes Yvain, Wei_Dai, Will Newsome even though he's nuts, and a few others). The relevant question is why I don't just delete this account.
But why did you stop posting under the other name?
Brains are hierarchical backpropagating neural networks! No, they’re Bayesian networks! No, they’re Goedelian pattern recognizers! No, the mind is an algorithm which optimizes for utility! No, it maximizes for reproductive fitness! No, it maximizes for perceptual control! No, it maximizes for status in the tribe!

So, you're saying all these explanations are Turing-complete?