i’ve recently started reading this book, but the search-inference framework seems obviously silly, neglecting simple concepts such as “system 1 does thinking”
what is up with this?
the search-inference framework seems obviously silly
The search-inference framework matches my introspective account of how I make most of my decisions. It also seems to match my professional experience in numerical optimization. For example, we have four trucks and fifty deliveries to make; which deliveries go in which truck, and what order should they be delivered in? We write out what a possibility looks like, what our goals are, how a program can go from one possibility to other (hopefully better) possibilities, and when it should stop looking and tell us what orders to give the drivers. Does it clash with your experience of decision-making?
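To make the optimization analogy concrete, here’s a minimal Python sketch of that kind of local search. Everything in it is invented for illustration (the random delivery locations, the distance-based cost, the move-one-delivery neighborhood), and for brevity it only searches over which truck takes each delivery, visiting stops in a fixed order rather than also optimizing the route order:

```python
import random

random.seed(0)  # reproducible illustration

NUM_TRUCKS = 4
NUM_DELIVERIES = 50

# Illustrative data: each delivery is a point on the unit square.
deliveries = [(random.random(), random.random()) for _ in range(NUM_DELIVERIES)]

def cost(assignment):
    """Our goal, expressed as a number to minimize: the total distance each
    truck travels between consecutive stops (visited in index order)."""
    total = 0.0
    for truck in range(NUM_TRUCKS):
        stops = [deliveries[i] for i, t in enumerate(assignment) if t == truck]
        for (x1, y1), (x2, y2) in zip(stops, stops[1:]):
            total += ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    return total

# A "possibility": assignment[i] is the truck that handles delivery i.
current = [random.randrange(NUM_TRUCKS) for _ in range(NUM_DELIVERIES)]
best_cost = cost(current)

# The search: try moving one delivery to a different truck, keep the change
# if it improves the goal, and revert otherwise. Stop after a fixed budget
# of attempts and report the best possibility found.
for _ in range(10_000):
    i = random.randrange(NUM_DELIVERIES)
    old_truck = current[i]
    current[i] = random.randrange(NUM_TRUCKS)
    new_cost = cost(current)
    if new_cost < best_cost:
        best_cost = new_cost      # better possibility; keep it
    else:
        current[i] = old_truck    # no improvement; keep looking elsewhere

print(f"best total distance found: {best_cost:.3f}")
```

The structure is the point: a current possibility, a goal it’s scored against, a way of generating neighboring possibilities, and a stopping rule, which is the same skeleton the search-inference framework describes.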
neglecting simple concepts such as “system 1 does thinking”
It’s not clear to me what you mean by “System 1 does thinking.” Could you unpack that for me?
so, it seems a decent model for system-2 decision making
however, most of our minds is system-1 and is nowhere near so spocky
most of our minds and our cognitive power is instantiated as subconscious system 1 mechanics, not anything as apparent as search-inference
for example, http://cogsci.stackexchange.com/questions/1/how-is-it-that-taking-a-break-from-a-problem-sometimes-allows-you-to-figure-out
or, it says things like “Naive theories are systems of beliefs that result from incomplete thinking.” and i think “uh sure but if you treat it as a binary then you’ll have to classify all theories as naive . i don’t think you have any idea what complete thinking would actually look like” and then it goes on to talk about the binary between naive and non-naive theories and gives commonplace examples of both
it’s like the book is describing meta concepts (models for human minds) purely by example (different specific wrong models about human minds) without even acknowledging that they’re meta-level
i am experiencing this as disgusting and i notice that i am confused
visible likely resolutions to this confusion are “i am badly misunderstanding the book” and “people on lesswrong are stupider than i thought”
Sorry about the delay in responding! I was much busier this holiday season than I expected to be.
however, most of our minds is system-1 and is nowhere near so spocky …
most of our minds and our cognitive power is instantiated as subconscious system 1 mechanics, not anything as apparent as search-inference
I’m not sure I would describe search-inference as spocky. I agree that having introspective access to it is spocky, but I don’t think that’s necessary for it to be search-inference, and I don’t think Baron is making the claim that the decision process is always accessible. Baron’s example on pages 10-11 seems to include both subconscious and conscious elements, in a way that his earlier description might not suggest, and I think that in this book Baron doesn’t really care whether thinking happens in System 1 or System 2.
A lot of the time, I suspect people don’t even realize that there’s a search going on, because the most available response comes to mind and they don’t think to look for more (see Your inner Google, availability bias, and so on), but it seems likely that System 1 did some searching before coming up with that most available response. Indeed, one heuristic that another decision book, Decisive, proposes is that whenever a serious issue is under consideration, there should be at least two alternatives on the table: rather than deciding between “A” and “not A,” search until you’re deciding between at least “A” and “B.”
or, it says things like “Naive theories are systems of beliefs that result from incomplete thinking.” and i think “uh sure but if you treat it as a binary then you’ll have to classify all theories as naive
Agreed that “naive theory” is not a very good category. In the book, Baron defines the binary as:
What makes them “naive” is that they are now superseded by better theories.
But calling the child’s belief that the Earth is flat “naive” because we know better is useless for determining which of our current beliefs are naive, and, as you rightly point out, if we interpret it as “this belief is naive if someone knows better,” then all beliefs must be suspected to be naive.
I think Baron began with naive theories because it makes it easy to give many examples of different mental models of the same phenomena, to highlight that mental models do not have to be concordant with reality, and to show that they can be fluid and changeable (and, implicitly, to suggest we should worry about changing our models too little). It sets up the concept of understanding, which is more important and which I remember thinking was sensible.
it’s like the book is describing meta concepts (models for human minds) purely by example (different specific wrong models about human minds) without even acknowledging that they’re meta-level
The start of the Knowledge, thinking, and understanding section is:
Thinking leads to knowledge. This section reviews some ideas about knowledge from cognitive psychology. These ideas are important as background to what follows.
I can read that as an acknowledgement that the concepts are meta-level, but I’m not sure that’s what was intended.
visible likely resolutions to this confusion are “i am badly misunderstanding the book” and “people on lesswrong are stupider than i thought”
I’m unlikely to endorse the second resolution! :P My suspicion is that the primary differences in our reactions stem from our different backgrounds, different underlying models of cognition, and different ways of handling unclear statements. Typically, when I come across a sentence that seems wrong, I default to asking “is there a weak interpretation of this sentence that I could agree with?” Sometimes the author has just started off on the wrong foot, and it later turns out they meant something I could agree with; other times, no, they really do appear to be wrong. If you get to the end of the Understanding section (basically, finishing chapter 1) and still don’t think that Baron is coming from a reasonable place, that seems like it’s worth an extended discussion.