More intricate models of cognition traditionally vary according to the sets of behaviour they seek to explain; in general, this forum seems to concern itself with the wider sets of human behaviour, with a strange affinity for statistical analysis. It also seems as if most of the people here associate agency with intelligence, though this should be regarded as unsubstantiated anecdote; I have little interest in what people believe, but those beliefs can have interesting consequences. In general, good models of cognition that yield a sense of agency have to be able to explain how a mushy organic collection of cells might become capable of generating a sense of identity. For this reason, our discussion of cognition will treat intelligence as a confluence of passive processes that lead to an approximation of agency.
Who are we? What is intelligence? To answer these, or any natural-language questions, we first search for stored solutions to whatever we perceive as the problem, even as we generate our conception of the question as a set of abstract problems from interactions between memories. When no recognized pattern triggers a stored solution, a new solution is generated by processes of association and abstraction. This process may be central to the generation of every rational and irrational thought a human will ever have. I would argue that the phenomenon of agency approximates an answer to the question “who am I?”, and that any discussion of consciousness should at least acknowledge how critical natural language use is to universal agreement on any matter. I will gladly discuss this further and in greater detail if asked.
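To make that two-stage model concrete, here is a minimal sketch in Python. Every name in it (`stored_solutions`, `overlaps`, `abstract`) is a hypothetical illustration of the lookup-then-generate structure, not a claim about how brains actually implement it:

```python
# A minimal sketch of the two-stage model described above. All names
# here are hypothetical illustrations, not claims about neural reality.

stored_solutions = {}  # patterns we have already resolved

def respond(problem, memories):
    # Stage 1: pattern-match against stored solutions.
    if problem in stored_solutions:
        return stored_solutions[problem]
    # Stage 2: no match, so generate a new solution by association
    # (linking related memories) and abstraction (generalizing over
    # what they share).
    related = [m for m in memories if overlaps(problem, m)]
    solution = abstract(related)
    stored_solutions[problem] = solution  # cache for next time
    return solution

def overlaps(problem, memory):
    # Crude stand-in for associative recall: shared words.
    return bool(set(problem.split()) & set(memory.split()))

def abstract(related_memories):
    # Crude stand-in for abstraction: keep what the associated
    # memories have in common.
    if not related_memories:
        return None
    common = set.intersection(*(set(m.split()) for m in related_memories))
    return " ".join(sorted(common)) or None
```

The point of the sketch is only the control flow: recognition short-circuits generation, and each generated answer becomes a stored solution for the future.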
At this point, I feel compelled to mention that my initial motivation for pursuing this line of reasoning stems from the realization that this community discusses rationality in a way that differs somewhat from my past encounters with the word.
Out there, it is commonly believed that rationality develops (in hindsight) to explain the subjective experience of cognition;
here we assert a fundamental difference between rationality and this other concept called rationalization. I do not see the utility of this distinction, nor have I found a satisfying explanation of how it operates within accepted models of human learning without assuming an a priori method of sorting the values which determine what counts as “rational”. Thus we find a general dearth of generative models of rational cognition alongside a plethora of techniques for spotting irrational or biased methods of thinking.
I see a lot of discussion on these forums concerned with objective predictions of the future, in which rationality (often of a highly probabilistic nature) is expected to bridge the gap between the worlds we can imagine to be possible and our many somewhat subjective realities. And the force keeping these discussions from splintering off into unproductive pissing about is a constant search for bias.
I know I’m not going to be the first among us to suggest that the search for bias is not truly synonymous with rationality, but I would like to clarify before concluding. Searching for bias in cognitive processes can be a very productive way to spend one’s waking hours, and it is a critical element in structuring the subjective world of cognition so that abstraction can yield the kind of useful rules that comprise rationality. But it is not, at its core, a generative process.
Let us consider the cognitive process of association (when beliefs, memories, stimuli or concepts become connected to form more complex structures). Without that period of extremely associative and biased cognition experienced during early childhood, we might never learn to attribute the perceived cause of a burn to a hot stove. Without concepts like better and worse to shape our young minds, I imagine many of us would simply lack the attention span to learn about ethics. And what about all the biases that make parables an effective way of conveying information? After all, the strength of a rhetorical argument lies in its appeal to the interpretive biases of its intended audience, not in the relative consistency of that argument’s conceptual foundations.
We need to shift discussions of bias towards models of cognition more complex than one in which bias is simply an obstacle to rationality. In my conception of reality, recognizing the existence of bias plays a critical role in the development of more complex methods of abstraction; indeed, biases are an intrinsic side effect of the generative grouping of observations that is the core of Bayesian reasoning.
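To illustrate that last claim: in Bayes’ rule, the prior is exactly such a built-in bias, and no update can proceed without one. A toy calculation (the hypotheses and numbers are mine, chosen only for illustration):

```python
# Toy Bayesian update: the prior acts as a built-in bias that any
# Bayesian reasoner must carry into the calculation. Hypotheses and
# numbers are illustrative only.
prior = {"hot_stove_burns": 0.9, "coincidence": 0.1}
likelihood = {"hot_stove_burns": 0.8, "coincidence": 0.8}  # evidence fits both equally

evidence_prob = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence_prob for h in prior}
print(posterior)  # {'hot_stove_burns': 0.9, 'coincidence': 0.1}

# With equal likelihoods the posterior simply reproduces the prior:
# the evidence alone decides nothing, and the initial bias is what
# lets the reasoner prefer one grouping of observations over another.
```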
In short, biases are not generative processes. Discussions of bias are not necessarily useful, rational or intelligent. A deeper understanding of the nature of intelligence requires conceptualizations that embrace the organic truths at the core of sentience; we must be able to describe our concept of intelligence, our “rationality”, such that it can emerge organically from the generative processes at the core of cognition.
The Idea
I’d be interested to hear some thoughts about how we might grow to recognize our own biases as necessary to the formative stages of abstraction, even as we learn to collectively search for and eliminate biases from our decision-making processes. The human mind is limited, and while most discussions in natural language never come close to pressing us to those limits, our limitations can still be relevant to those discussions as well as to discussions of artificial intelligences. The way I see things, a bias-free machine possessing a model of our own cognition would either have to have stored solutions for every situation it could encounter or methods of generating stored solutions for all future perceived problems (both of which sound like descriptions of oracles to me, though the latter seems more viable from a programmer’s perspective); a sketch of both designs follows.
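Here is a minimal sketch of those two designs, purely to spell out why both read like oracles. Every class name and interface in it is hypothetical:

```python
# Two hypothetical designs for a "bias-free" machine, as described
# above. Both interfaces are illustrative only.

class LookupOracle:
    """Option 1: a stored solution for every situation it could encounter."""
    def __init__(self, complete_table):
        # The table must already cover every possible situation,
        # which is exactly what makes this an oracle.
        self.table = complete_table

    def decide(self, situation):
        return self.table[situation]  # fails on anything unforeseen

class GenerativeOracle:
    """Option 2: generates a stored solution for any future perceived problem."""
    def __init__(self):
        self.table = {}

    def decide(self, situation):
        if situation not in self.table:
            self.table[situation] = self.derive(situation)
        return self.table[situation]

    def derive(self, situation):
        # The hard part: producing a correct answer with no prior
        # weighting of hypotheses, i.e. no bias. Nothing tells an
        # unbiased generator which candidate answer to prefer.
        raise NotImplementedError("requires an unbiased generator")
```

Note that `GenerativeOracle` is just the stored-solution sketch from earlier with the generative step left unimplemented; the claim of the paragraph above is that no bias-free `derive` exists.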
A machine capable of making the kinds of decisions considered “easy” for humans might need biases at some point during its journey to the complex and self-consistent methods of decision making associated with rationality. This is a rhetorically complex community, but at the risk of my reach exceeding my grasp, I would be interested in seeing an examination of the Affect Heuristic in human decision making as an allegory for the historic utility of fuzzy values in chess AI.
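For what I mean by “fuzzy values”: classic chess engines decide by way of a hand-tuned evaluation function, a bundle of numeric biases standing in for judgment much as affect stands in for deliberation. A minimal sketch, with weights that are my own toy choices rather than those of any particular engine:

```python
# A toy chess evaluation in the classic style: hand-tuned "fuzzy"
# weights encode the programmer's biases about what matters. Piece
# values and bonuses are illustrative, not from any real engine.

PIECE_VALUES = {"P": 1.0, "N": 3.0, "B": 3.1, "R": 5.0, "Q": 9.0}

def evaluate(board):
    """Score a position for White: positive favors White.

    `board` maps squares to pieces, uppercase for White,
    lowercase for Black, e.g. {"e4": "P", "d5": "p"}.
    """
    score = 0.0
    for square, piece in board.items():
        value = PIECE_VALUES.get(piece.upper(), 0.0)
        score += value if piece.isupper() else -value
    # A crude positional bias: central pawns are worth a bit more.
    for square in ("d4", "e4", "d5", "e5"):
        piece = board.get(square, "")
        if piece.upper() == "P":
            score += 0.2 if piece.isupper() else -0.2
    return score

position = {"e4": "P", "d5": "p", "g1": "N", "b8": "n"}
print(evaluate(position))  # 0.0: material and central bonuses cancel
```

The engine never “knows” why a knight is worth three pawns; the weight is a productive bias that makes search tractable, which is the allegory I have in mind.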
Thank you for your time, and I look forward to what I can only hope will be challenging and thoughtful responses.