Basically, as far as I know, System 1 is more or less directly responsible for all actions.
You can predict what actions a person will take BEFORE they are mentally conscious of it at all. You can do this by measuring galvanic skin response or watching their brain activity.
The mentally conscious part happens second.
But like, I’m guessing that for some reason what I’m saying here is already obvious, and Abram just means something else, and I’m trying to figure out what.
I am confused; like, obviously my thoughts cause some changes in behavior. Maybe not immediately (though I am highly dubious of the whole “you can predict my actions before they are mentally conscious” bit), but definitely in the future (by causing some kind of back-propagation of updates that change my future actions).
The opposite would make no sense from an evolutionary-adaptiveness perspective (having a whole System-2-like thingy would be a giant waste of energy if it never caused any change in actions), doesn’t at all correspond to high-level planning of actions, isn’t what the whole literature on S1 and S2 says (which does indeed make the case that S2 determines many actions), and doesn’t correspond well to my internal experience.
Yeah I’m not implying that System 2 is useless or irrelevant for actions. Just that it seems more indirect or secondary.
Also, please note that overall I’m probably confused about something, as I mentioned. And my comments are not meant to open up conflict; rather, I’m requesting a clarification on this particular sentence and what frame / ontology it’s using:
If you’re experiencing “motivational issues”, then it stands to reason that it might be useful to keep an eye on which thoughts are leading to actions and which are not.
I would like to expand the words ‘thoughts’ and ‘useful’ here.
People seem to be responding to me as though I’m trying to start an argument, and this is really not what I’m going for. Sharing my POV is just to try to help close the inferential gap in the right direction.
(fwiw I agree something about the conversation felt a bit off/overly-argumentative to me, although it’s hard to place)
I acknowledge that it’s likely somehow because of how I worded things in my original comment. I wish I knew how to fix it.
I don’t know whether this is the true cause, but re-reading your original comment, the word “just” in this sentence gives me a very slight sense of triggeredness:
They’re just like… incidental post-hoc things.
I have a feeling which has the rough shape of something like… “people here [me included] are likely to value thoughts a lot and think of them as important in shaping behavior, and may be put on the defensive by wording which seems to be dismissive towards the importance of thoughts”.
Your second comment (“Basically, as far as I know, System 1 is more or less directly responsible for all actions”) feels to me like it might trigger a bit of the same, as it can be read to imply something like… “all of the stuff in the Sequences about figuring out biases and correcting for them on a System 2 level is basically useless, since System 1 drives people’s actions”.
I also feel that neither of these explanations is exactly right, and it’s actually something more subtle than that. Maybe something like “thoughts being the cause of actions is related to a central strategy of many people around here”.
It’s also weird that, like Raemon says, the feeling-of-the-conversation-being-off is subtle; it doesn’t feel like anybody is being explicitly aggressive and one could in principle interpret the conversation as everyone just sharing their models and confusion. Yet it feels like there is that argumentative vibe.
thoughts being the cause of actions is related to a central strategy of many people around here
(It’s a good reason to at least welcome arguments against this being the case. If your central strategy is built on a false premise, you should want to know. It might be pointless to expect any useful info in this direction, but I think it’s healthier to still emotionally want to see it, even when you decide that it’s not worth your time to seek it out.)
(Agreed, didn’t mean to imply otherwise.)
Thanks! This was helpful analysis.
I suspect my slight trigger (1/10) set off other people’s triggers. And I’m more triggered now as a result (but still only like 3/10).
I’d like to save this thread as an example of a broader pattern I think I see on LW, which makes having conversations here more unpleasant than is probably necessary? Not sure though.
Do you define thoughts as something relatively specific—that is, does the post make any more sense if you substitute “mental contents” for “thoughts”?
Hmmm. No.