So we have been looking at the cognitive science of intelligence, and in particular at the seminal work of Newell and Simon. We have seen how they tried to create a plausible construct of intelligence, drawing many different ideas together into the idea of ‘intelligence as the capacity to be a general problem solver’. They then did a fantastic job of applying the naturalistic imperative, which helps us avoid the homuncular fallacy: by trying to analyze, formalize, and mechanize our explanation of intelligence, we ultimately explain the mind in a non-circular fashion, in non-mental terms.
This will also hopefully give us a way of resituating the mind within the scientific worldview. We saw that at the core of their construct was the realization, via the formalization and attempted mechanization, of the combinatorially explosive nature of the problem space, and therefore of how crucial relevance realization is: somehow you zero in on the relevant information. They proposed a solution to this with far-reaching implications for our understanding of meaning cultivation and of rationality: the distinction between heuristic and algorithmic processing, and the fact that most of our processing has to be heuristic in nature.
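The heuristic/algorithmic distinction can be made concrete with a toy count. Below is a minimal sketch (the move alphabet and the "never undo your last move" heuristic are my own illustrative choices, not anything from the lecture): an algorithmic search must consider every move sequence, growing as b^d, while even one crude pruning heuristic shrinks the space substantially, at the cost of the guarantee of completeness.

```python
import itertools

# Toy search space: sequences of moves from a small alphabet.
# An exhaustive (algorithmic) search considers every candidate, so the
# number of sequences grows exponentially with depth: b ** d.
MOVES = ["up", "down", "left", "right"]

def exhaustive_count(depth: int) -> int:
    """Number of move sequences an algorithmic search must consider."""
    return len(MOVES) ** depth

# A heuristic prunes: e.g. never immediately undo the previous move.
# This trades away certainty (it could miss solutions) for tractability.
OPPOSITE = {"up": "down", "down": "up", "left": "right", "right": "left"}

def heuristic_count(depth: int) -> int:
    """Sequences surviving the 'never undo the last move' heuristic."""
    count = 0
    for seq in itertools.product(MOVES, repeat=depth):
        if all(seq[i + 1] != OPPOSITE[seq[i]] for i in range(depth - 1)):
            count += 1
    return count

print(exhaustive_count(8))  # 4**8 = 65536
print(heuristic_count(8))   # 4 * 3**7 = 8748
```

Even this one weak heuristic cuts the space by a factor of about 7.5 at depth 8, and the gap widens with depth; that is the adaptive payoff the lecture describes, and the pruned-away sequences are exactly where the bias lives.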
Our cognition can’t pursue certainty; it can’t be algorithmic, can’t be Cartesian in that fashion, and that also means it is susceptible to bias. The very processes that make us intelligently adaptive, that help us ignore the combinatorially explosive amount of options and information available to us, are the same ones that prejudice and bias us, so that we can become self-deceptively misled.
They deserve to be seminal figures: they exemplify how we should be trying to do cognitive science, and they exemplify the power of the naturalistic imperative. But there were serious shortcomings in Newell and Simon’s work. They themselves fell prey to a cognitive heuristic that biased them (and this is something we should remember, even as scientists: the scientific method is a psychotechnology designed to help us deal with our proclivities toward self-deception). They were making use of the essentialist heuristic, which is crucial to our adaptive intelligence: it helps us find those classes that do share an essence, and therefore allows us to make powerful predictions and generalizations.
Of course, the problem with essentialism is precisely that it is a heuristic: because it is adaptive, we are tempted to overuse it, and that makes us mis-see, since many categories do not possess an essence, as Wittgenstein famously pointed out with the category of ‘game’ or ‘chair’ or ‘table’. Newell and Simon thought that all problems were essentially the same, and that how you formulate a problem is therefore a rather trivial matter. Because of that, they were blinded to the fact that problems are not all essentially the same, that there are essential differences between types of problems, and that problem formulation is therefore actually very important.
This is the distinction between well-defined problems and ill-defined problems. I made the point that most real-world problems are ill-defined; what’s missing in an ill-defined problem is precisely the relevance realization that you get through a good problem formulation. We then went into work in which Simon himself participated, the work of Kaplan and Simon, to show that this self-same relevance realization through problem formulation is at work in addressing combinatorial explosion. We took a look at the problem of the mutilated chessboard: if you formulate it with a covering strategy, you get into a combinatorially explosive search, whereas if you formulate it with a parity strategy, making salient the fact that the two removed squares are the same color, the solution becomes obvious and very simple. Problem formulation helps you avoid combinatorial explosion and helps you deal with ill-definedness, and this process by which you move from a poor problem formulation to a good problem formulation is the process of insight.
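The parity formulation can be written out in a few lines, which is itself the point: once the problem is reformulated, there is no search left to do. A minimal sketch (the coordinate convention and the choice of which corners to remove are my own):

```python
# Mutilated chessboard, parity formulation: every domino covers one white
# and one black square, so a tiling of 31 dominoes needs 31 squares of each
# color. Removing two same-colored corners makes that impossible.

def color(row: int, col: int) -> str:
    """Standard checkerboard coloring by coordinate parity."""
    return "white" if (row + col) % 2 == 0 else "black"

# 8x8 board with the opposite corners (0,0) and (7,7) removed;
# both have even coordinate sums, i.e. both are the same color.
removed = {(0, 0), (7, 7)}
squares = [(r, c) for r in range(8) for c in range(8) if (r, c) not in removed]

whites = sum(1 for sq in squares if color(*sq) == "white")
blacks = len(squares) - whites

print(whites, blacks)    # 30 32
print(whites == blacks)  # False -> no domino tiling exists
```

Compare that to the covering formulation, which would enumerate domino placements over 62 squares; the parity check replaces an exponential search with one subtraction.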
This is why we have seen throughout that insight is so crucial to being a rational cognitive agent, and that means that, in addition to logic being essential for our rationality, those psychotechnologies that enhance our capacity for insight are also crucially important (indispensable!).
I can start to read into a pattern where, in the theoretical sections, he can refer to concepts as previously known because they are “sprinkled in beforehand”: their previous appearances are justified, but only weakly, standing out as odd inclusions. I guess it helps with salience, but it also feels like a manipulative technique, as it makes things seem artificially profound: the first instances are pretty trivial, and then they are reused at critical junctions. Like in the movie Inception, it is a revelation planned out by an outside force in order to achieve a goal state of that outside agent.
Just brute-forcing a big search space feels to my brain more like frustration than suicide, more of a null operation than something actively harmful. Sure, it is an unwise move, but part of the threat is that it doesn’t “time out”; it doesn’t tell you itself that it is a bad idea (whereas something like a sword in your stomach would probably make it pretty salient that this choice might not be the most conducive to biological prosperity).
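One way to make this concrete: a raw brute-force search fails *silently* by never returning, but wrapping it in an explicit budget turns that silence into a loud error. A minimal sketch, with all names (`BudgetExceeded`, `brute_force`, the password toy problem) being my own illustrations:

```python
import itertools

class BudgetExceeded(Exception):
    """Raised when a search gives up instead of grinding on forever."""

def brute_force(candidates, is_solution, budget: int):
    """Try candidates one by one, but fail loudly after `budget` attempts."""
    for tried, candidate in enumerate(candidates):
        if tried >= budget:
            # This is the "sword in the stomach" signal a bare loop lacks:
            # the search itself says this approach is not working.
            raise BudgetExceeded(f"gave up after {budget} attempts")
        if is_solution(candidate):
            return candidate
    return None

# Guessing a 20-character password by enumeration: hopeless (26**20
# candidates), and with a budget the search *says so*.
space = itertools.product("abcdefghijklmnopqrstuvwxyz", repeat=20)
try:
    brute_force(space, lambda s: "".join(s) == "x" * 20, budget=100_000)
except BudgetExceeded as e:
    print(e)  # gave up after 100000 attempts
```

The budget is a crude stand-in for the self-monitoring the note says brute force lacks: without it, the unwisdom of the move never becomes salient from inside the procedure.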
I don’t know how much of this is baked into the idea of heuristics, but if you are stuck using only preselected heuristics, then the lack of flexibility and the blind spots are obvious. What one would ideally want to do is come up with the heuristics on the fly, and I guess that is part of what relevance realization is going to be about.
Having watched some things out of order, I can see him struggle to keep the narrative in check, with slips where how he is thinking about a topic conflicts with how the narrative is progressing.
It was weird that, at the part where “people usually frame it as a covering problem”, my brain predicted “it is actually a parity problem”. But that impulse did not make the trick itself obvious. At the mention of the colors of the removed squares, I predicted the “a domino stands on differently colored squares” property and that it would be helpful and important. I was in this weird state of vaguely having a hint of what the trick was about without it being obvious, without making all the connections.
I was thinking that part of problem formulation is that, instead of seeing the problem as “ground-level moves”, brute-forcing through the heuristics would be less combinatorially explosive. In this kind of search, “just exhaust all options” would rather quickly be categorised as “doesn’t solve, at least not fast”. And this process is likely recursive, in that one could come up with strategies for which order to try heuristics in. In the other direction, “searching all the options” is a stepping back of the kind “the thing I am doing doesn’t work (errors), let’s do something else”. Frustration at this level would mean repeating the action and the error verbatim. This seems connected to “madness is doing the same thing and expecting different results” and the skill of saying oops.
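The “search over heuristics instead of ground-level moves” idea above can be sketched as a small dispatcher. Everything here is illustrative (the divisor toy problem, the helper names): the point is only that the outer loop searches a handful of strategies rather than the ground-level space, and that moving on after a failure is the “saying oops” step.

```python
# Meta-level search: try heuristics in order rather than enumerating
# ground-level moves. Each heuristic either solves the problem or fails.

def try_heuristics(problem, heuristics):
    """Return (result, heuristic_name), stepping back on each failure."""
    for h in heuristics:
        result = h(problem)
        if result is not None:
            return result, h.__name__
        # "Oops": this heuristic failed, so step back and try the next one
        # instead of repeating the same failing action verbatim.
    return None, None

# Toy problem: find a divisor of n greater than 1.
def try_small_primes(n):
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return p
    return None

def try_square_root(n):
    r = round(n ** 0.5)
    return r if r * r == n and r > 1 else None

result, used = try_heuristics(91, [try_small_primes, try_square_root])
print(result, used)  # 7 try_small_primes
```

The recursion the note gestures at would be a further layer that reorders or generates the `heuristics` list itself, which is where the on-the-fly heuristic creation from the earlier paragraph would come in.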
A lot of LessWrongian values seem to be referenced here, with a feeling of discovering them from a different angle. Here their importance in how they upkeep other systems is more pronounced; with my previous exposure on LessWrong, it was more in the flavour of “here is a thing that you can acquire and it is cool”.
Episode 27: Problem Formulation
s/Kim Stein/Wittgenstein
Fixed, thanks!