Not so much from the reading, or even from any specific comments in the forum—though I learned a lot from the links people were kind enough to provide.
But I did, through a kind of osmosis, remind myself that not everyone has the same thing in mind when they think of AI, AGI, human-level AI, or, still less, mere “intelligence.”
Despite the distinction drawn, in words, between GOFAI and the spectrum of approaches being investigated and pursued today, I have realized by reading between the lines that GOFAI is still alive and well. Maybe it is not the primitive “production system” stuff of the Simon and Newell era, or programs written in LISP or Prolog (both of which I coded in, once upon a time), but there are still a lot of people who don’t much care about what I would call “real consciousness,” and are still taking a Turing-esque, purely operationalistic, essentially logical-positivistic approach to “intelligence.”
I am passionately pro-AI. But for me, that means I want more than anything to create a real conscious entity: one that feels, has ideas, passions, drives, emotions, loyalties, ideals.
Even neurology has largely moved beyond the positivistic stance of “there is only behavior, and we don’t talk about consciousness,” to actively investigating the function, substrate, neural realization, and evolutionary contribution of consciousness itself, as opposed to just the evolutionary contribution of non-conscious information processing to organismic success.
Look at Damasio’s work, showing that emotion is necessary for full-spectrum cognitive skill manifestation.
The thinking-feeling dichotomy is rapidly falling out of the working worldview, and I have been arguing for years, on other grounds, that these are fallacious categories we have been using.
This is not to say that non-conscious “intelligent” systems are not already here, evolving, and potentially dangerous; automated program trading on the financial markets is one example.
So there is still great utility in being sensitive to possible existential risks from non-conscious intelligent systems.
They need not be willfully malevolent to pose a risk to us.
But as to my original point, I have learned that much of AI is still (more sophisticated) GOFAI, with better hardware and algorithms.
I am pro-AI, as I say, but I want to create “conscious” machines in the interesting, natural sense of ‘conscious’ now admitted by neurology, most of cognitive science, much of theoretical neurobiology, and philosophy of mind, a sense in which positions like Dennett’s “intentional stance,” which seek to do away with real sentience and admit only behavior, are now recognized to have been a wasted thirty years.
This realization that operationalism is alive and well in AI is good for me in particular, because I am preparing to create a YouTube channel or two presenting both the history of AI and the parallel intellectual history of philosophy of mind and cognitive science—showing why the positivistic atmosphere grew up from ontological drift emanating from philosophy of science’s delay in digesting the Newtonian-to-quantum ontology change.
Then, ultimately, I’ll be laying some fresh groundwork for a series of new ideas I want to present on how we can advance the goal of artificial sentience, and how and why this is the only way to make superintelligence that has a chance of being safe, let alone beneficial and a partner to mankind.
So I have learned, indirectly and, as I say, by a kind of osmosis, rather than from anything anyone has said (more from what has not been said, perhaps), that much of AI is lagging behind neurology, cognitive science, and many other fields in mounting a head-on attack on the “problem of consciousness.”
Not only do I want to create conscious machines, but I think solving the mind-body problem in the biological case and doing “my” brand of successful AI are complementary: so complementary that solving either would probably point the way to solving the other. I have thought that ever since I wrote my undergrad honors thesis.
So that is what I have tentatively introjected so far, albeit indirectly. And it will help me in my YouTube videos (not up yet), which are directed at the AI community and intended to be a helpful resource, especially for those who don’t have a clue what kind of intellectual climate made the positivistic “Turing test” an almost inevitable outgrowth.
But the intellectual soil from which it grew is no longer considered valid (understanding this requires digesting the lessons of quantum theory in a new and rigorous way, along with several other issues).
But it’s time to shed the suffocating influence of the Turing test, and the gravitational drag of the defective intellectual history it inevitably grew out of (along with logical behaviorism, eliminative materialism, etc.). It was all based on a certain understanding of Newtonian physics that has been known to be fundamentally false for over a hundred years.
Some of us are still trying to fit AI into an ontology that was never correct to begin with.
But we know enough now to get it right this time, if we methodically go back and root out the bad ideas. We need a little top-down thinking to supplement all the bottom-up thinking in engineering.
Look at Damasio’s work, showing that emotion is necessary for full-spectrum cognitive skill manifestation.
There is a way to arrive at this through Damasio’s early work, which I don’t think is brought out by saying that emotion is needed for human-level skill. His work in the 1980s was on “convergence zones”: hypothetical areas in the brain that are auto-associative networks (think of a Hopfield network) with bidirectional connections to upstream sensory areas. His notion is that different sensory (and motor? I don’t remember now) areas recognize sense-specific patterns (e.g., the sound of a dog barking, the image of a dog, the word “dog”, the sound of the word “dog”, the movement one would make against an attacking dog), and the pattern these jointly create in the convergence zone represents the concept “dog”.
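To make that picture a bit more concrete, here is a rough toy sketch in Python (my own construction for illustration, not Damasio’s actual model): the “convergence zone” is a small Hopfield-style auto-associative network whose units are split into slices for a few hypothetical modalities, the joint pattern across all slices stands in for a concept, and the symmetric weights play the role of the bidirectional connections, so a cue in one modality’s slice tends to complete the whole pattern.

```python
# A toy "convergence zone": a Hopfield-style auto-associative network whose
# units are split into slices for a few hypothetical sense-specific areas.
# (Illustrative sketch only; the slice sizes, modalities, and learning rule
# are my assumptions, not anything taken from Damasio.)
import numpy as np

rng = np.random.default_rng(0)
N_PER_MODALITY = 20                       # units per sense-specific slice (toy size)
MODALITIES = ["sound", "image", "word"]   # hypothetical upstream areas
N = N_PER_MODALITY * len(MODALITIES)      # total units in the convergence zone

def random_pattern(n):
    """A random +/-1 pattern standing in for a sense-specific representation."""
    return rng.choice([-1, 1], size=n)

# One joint, multi-modal pattern per concept, e.g. "dog" and "cat"
concepts = {name: random_pattern(N) for name in ["dog", "cat"]}

# Hebbian learning: symmetric ("bidirectional") weights, no self-connections
W = np.zeros((N, N))
for p in concepts.values():
    W += np.outer(p, p)
np.fill_diagonal(W, 0)
W /= N

def settle(cue, steps=10):
    """Let the auto-associative network settle from a partial or noisy cue."""
    s = cue.astype(float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

# Cue the network with only the "sound" slice of "dog"; the rest is noise.
cue = random_pattern(N)
cue[:N_PER_MODALITY] = concepts["dog"][:N_PER_MODALITY]

recovered = settle(cue)
matches = int(np.sum(recovered == concepts["dog"]))
print(f"units matching the stored 'dog' pattern: {matches}/{N}")
```

Run it and a cue built from only the “sound” slice of “dog” will usually settle into the full stored “dog” pattern, which is the pattern-completion behavior the convergence-zone story relies on.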
This makes a lot of sense and has a lot of support from studies, but a consequence is that humans don’t use logic. A convergence zone sits there, in one physical hunk of brain, with no way to move its activation pattern around in the brain. That means the brain’s representations do not use variables the way logic does. A pattern in a CZ might be described by the variable X, and could take on different values such as the pattern for “dog”, but you can’t move that X around in equations or formulae. You would most likely have a hard-wired set of basic logic rules, and the concept “dog” as used on the left-hand side of a rule would be a different concept from the concept “dog” used on the right-hand side of the same rule.
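A quick toy contrast (again my own illustration, not anything from Damasio) of what a genuine logical variable buys you versus role-bound, location-fixed patterns:

```python
# Toy contrast: a rule with a genuine variable versus role-specific,
# location-bound representations. (Illustrative sketch only.)

# Logic style: the variable bindings made in the premise are the very same
# bindings reused, in different positions, in the conclusion.
def apply_rule(x, y):
    premise = ("Chases", x, y)
    conclusion = ("Fears", y, x)      # same x and y, moved to new slots
    return premise, conclusion

print(apply_rule("dog", "cat"))       # (('Chases', 'dog', 'cat'), ('Fears', 'cat', 'dog'))

# Convergence-zone style, as described above: if rules are hard-wired, the
# "dog" used in the chaser role and the "dog" used in the feared role live in
# different chunks of network, so they are physically distinct representations
# rather than one rebindable variable.
chaser_role = {"dog": "pattern in CZ slice A"}
feared_role = {"dog": "pattern in CZ slice B"}    # distinct, not shared
print(chaser_role["dog"], "vs", feared_role["dog"])
```

The binding of x made in the premise is the very thing reused in the conclusion; the hard-wired version has no such shared, movable token.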
Hence, emotions are important for humans, but this says nothing about whether emotions would be needed for an agent that could use logic.