Wow, you seem pretty satisfied with it. Now, I haven’t done nearly enough reading on any of those topics to dispute anything you’ve said, but, as a poster on LW I’m obligated to check that you haven’t entered an “affective death spiral” by asking the following:
Are there any non-phenomena that PCT can “explain”? That is, could you use PCT to “prove” why certain conceivable things happen, which don’t really happen? Could I e.g. use PCT to prove why thinking hard about whatever I’m procrastinating about will make me motivated to do it, when you already know that doesn’t work?
I’m obligated to check that you haven’t entered an “affective death spiral”
I have to admit, the first bit of PCT literature I read (a sampler of papers and chapters from various PCT books) was a bit off-putting, since most of the first papers seemed a little too self-congratulatory, as if the intended audience were already cult members. Later papers were more informative, enough to convince me to order a couple of the actual books.
Are there any non-phenomena that PCT can “explain”?
I can’t presently imagine how you could do it without distorting the theory. It’d be like trying to equate atheism and amorality. In a sense, PCT is just stimulus-response atheism.
Could I e.g. use PCT to prove why thinking hard about whatever I’m procrastinating about will make me motivated to do it, when you already know that doesn’t work?
It would depend on a far more specific definition of “thinking hard”, and an adequate specification of the other control systems involved in your individual brain. For certain such definitions and specifications, it would work.
To be precise, if “thinking hard” means that you are actually envisioning a specific outcome or action, linked to a desired reference value, and you do not have any systems that are trying to set common perceptions to match conflicting reference values, then “thinking hard” would work.
This is not the usual definition of “thinking hard”, however, and PCT makes some very specific predictions about inner conflict that essentially say you are 100% screwed unless you fix the conflicts, because we are control systems (i.e. thermostats) “all the way down”.
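To make the “conflicting reference values” idea concrete, here is a minimal toy sketch (my own illustration, not from the PCT literature): two proportional controllers share one perceived variable but hold opposite references, so their outputs cancel and the perception settles somewhere that satisfies neither system. The function name, gains, and the additive environment model are all simplifying assumptions.

```python
def step(perception, reference, gain=1.0):
    """One control-loop iteration: output proportional to error."""
    error = reference - perception
    return gain * error

perception = 5.0  # start off-center
for _ in range(100):
    # Two systems act on the same perceived variable with
    # conflicting references (+10 and -10).
    output_a = step(perception, reference=10.0)
    output_b = step(perception, reference=-10.0)
    # Toy assumption: the environment just sums their outputs.
    perception += 0.1 * (output_a + output_b)

# With symmetric gains the outputs cancel: the perception drifts to
# the midpoint (0), far from either reference -- the "stuck" state.
print(round(perception, 3))
```

Under this toy model, neither system ever reaches its reference; the deadlock only breaks if one of the conflicting references changes, which is roughly the “fix the conflicts” prediction described above.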
If it sounds like I’m saying it depends on the individual and some people are screwed, that’s only sort of the case. Everyone can identify when they’re conflicted, and resolve the conflicts in some fashion. Plenty of people have already noticed this and taught it; PCT simply gives a plausible, testable, physical, 100% reductionistic explanation of how our hardware might produce the results we see.