Take four common "broad" or "generally-categorizable" demographics of minds: Autistic people; Empaths (lots of mirror neurons dedicated to modeling the behavior of others); Sociopaths, or "Professional Psychopaths" (high-functioning, without mirror neurons, responsible for most systemic destruction precisely because they can appear to be "highly functional and productive well-respected citizens"); and Psychopaths (low-functioning, without mirror neurons, most commonly an "obvious problem"). All of these minds work on the principle of emergent order, with logic, reason, and introspection being the alien or uncommon state of existence: a minor veneer on the surface of the vast majority of brain function, which is "streaming perception and prediction of emergent patterns."
A robot that never evolved to "get along" with other sentiences, and is programmed in a certain way, can "go wrong" or have any of billions of irrational "blue-minimizing" functions. Sure, it seems that such a robot is a "behavior executor," not a "utility maximizer." I would go further and say that humans are not "utility maximizers" either, except when they are training themselves to behave in a robotic fashion toward the purpose of "maximizing a utility" based on the very small number of patterns they have consciously identified.
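To make that distinction concrete, here is a minimal sketch (my own illustration, not anything from the original post; all names and values are hypothetical): a behavior executor just fires a hard-coded condition-action rule, while a utility maximizer searches its available actions for whichever one scores highest under an explicit utility function.

```python
# Minimal sketch (illustrative only) of "behavior executor" vs.
# "utility maximizer." All names and values here are hypothetical.

def behavior_executor(percept: str) -> str:
    """Fires a fixed rule; it has no representation of a goal at all."""
    # The blue-minimizing robot: see blue, shoot. Nothing more.
    return "shoot laser" if percept == "blue" else "do nothing"

def utility_maximizer(percept: str, actions, utility) -> str:
    """Evaluates every available action and picks the highest-scoring one."""
    return max(actions, key=lambda action: utility(percept, action))

# A hypothetical utility that rewards removing blue things.
def less_blue_utility(percept: str, action: str) -> float:
    return 1.0 if (percept == "blue" and action == "shoot laser") else 0.0

print(behavior_executor("blue"))                      # shoot laser
print(utility_maximizer("blue",
                        ["shoot laser", "do nothing"],
                        less_blue_utility))           # shoot laser
```

Note that both produce the same action here: identical outward behavior can come from entirely different internal structures, which is exactly why reading a "utility function" off of behavior alone is so treacherous.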
There’s no reason for a super-human intelligence (one with far more neocortex, or far more complex neocortex equipped to do far more than model linear patterns: one that perhaps automatically "sees" exponentials, cellular automata, and transcendental numbers) to be so limited.
Humans aren’t much good at intelligent planning that takes other minds, and "kinds of minds," into account. That’s why our societies regularly fall into a state of dominion and enslavement, and have to be "started over" from a position of utter chaos and destruction (e.g., the rebuilding of Berlin and Dresden).
Far be it from me to be "mind-killed," but I think avoiding that fate should be a common object of discussion among people who are "rational" (i.e., "what not to do").
I also don’t think it’s fair to lump "behaviorists" (other than perhaps B.F. Skinner) into an irrational school of "oversimplification." Even Skinner noted that his goal was to get to the truth via observation (i.e., to eliminate biases). By trying to make an entire school out of the implications of some minds, some of the time, we oversimplify a complex reality.
Behaviorism has caught scores of serial killers. (According to John Douglas, author of Mindhunter and originator of the FBI’s Investigative Support Unit.) How? It turns out that serial-killer behavior isn’t that complex, and it seeks goals that superior minds actually can model quite accurately. (This is much like a toddler chasing a ball into the street. Every adult can model that as a "bad thing," because their minds are superior enough to understand 1-what the child’s goal is, 2-what the child’s probable failures in perception are, and 3-what the entire system of child, ball, street, and their inter-related feedbacks is likely to produce, as well as how the adult can, and should, swoop in and prevent the child from reaching the street.)
So, behaviorism does help us do two things: 1-eliminate errors from prior "schools" of philosophy (which were, themselves, not really "schools" but just significant insights), and 2-reference "just what we can observe," in terms of revealed preferences. Revealed preferences are not "the whole picture." However, they do give us a starting point for isolating important variables.
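As a toy illustration of that "starting point" (my own sketch; the choice data and the inference rule are invented for illustration), one crude way to extract a candidate preference ordering from observed behavior is to count which options an agent picks when others were available:

```python
# Toy sketch of inferring "revealed preferences" from observed choices.
# The observations and the inference rule are invented for illustration.
from collections import Counter

# Each observation: (the set of options offered, the option chosen).
observations = [
    ({"apple", "banana"}, "apple"),
    ({"apple", "cherry"}, "cherry"),
    ({"banana", "cherry"}, "cherry"),
    ({"apple", "banana"}, "apple"),
]

wins = Counter()
for offered, chosen in observations:
    for rejected in offered - {chosen}:
        wins[(chosen, rejected)] += 1  # chosen was picked over rejected

# A crude candidate ordering: x is "revealed-preferred" to y
# whenever x was chosen over y at least once.
for (x, y), n in sorted(wins.items()):
    print(f"{x} revealed-preferred to {y} ({n} observation(s))")
```

This is exactly the sense in which revealed preferences are "not the whole picture": the counts isolate variables worth investigating, but say nothing about why the choices were made.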
This can be done with a robot or a human, but the human is a group of “messy emergent networks” (brain regions, combined with body feedback, with nagging long-term goals in the background, acting as an “action-shifting threshold”) whose goals are the result of modeled patterns and instances of reward. The robot, on the other hand, lacks all the messy patterns, and can often deal with reality as a set of extreme reductions, in a way that no (or few) humans can.
The entire "utility function" paradigm appears to me to be a very backwards way of approximating thought: first you start with perceived patterns; then you evolve ever-more-complex thought.
This allows you to develop goals that are really worth pursuing.
What we want in a super-intelligence is actually "more effective libertarians." Sure, we’ve found that free markets (very large free networks of humans) create wealth and prosperity. However, we’ve also found that there are large numbers of sociopaths who don’t care about wealth and prosperity for all, just for themselves. Such a goal structure can maximize prosperity for sociopaths while destroying all wealth and prosperity for others. In fact, this has happened repeatedly throughout history, right up to the present. It’s a cycle that’s been interfered with temporarily, but never broken.
Would any robot, by default, care about shifting that outcome of "sociopaths dominate grossly-imperfect legal institutions"? I doubt it. Moreover, such a sociopath could create a lasting peace by establishing a very stable tyranny, replete with highly functional secret police and a highly effective algorithm for "how to steal the most from every producer, while sensing their threshold for rebellion."
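The "algorithm" gestured at here is just constrained maximization. A deliberately crude sketch (my own toy model; the rebellion threshold and the probing scheme are invented) might sense tolerance by bisection and settle just below it:

```python
# Deliberately crude toy model of "steal the most from every producer,
# while sensing their threshold for rebellion." Numbers are invented.

def sense_and_extract(will_rebel, lo: float = 0.0, hi: float = 1.0,
                      tol: float = 0.01) -> float:
    """Bisection: find the highest extraction rate that is still tolerated."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if will_rebel(mid):
            hi = mid   # too greedy -- back off
        else:
            lo = mid   # tolerated -- push further
    return lo

# A hypothetical population that rebels above a 60% extraction rate.
rate = sense_and_extract(lambda r: r > 0.60)
print(f"{rate:.2f}")  # 0.59 -- just under the rebellion threshold
```

The "stable tyranny" is this loop run continuously, with the threshold re-sensed as conditions change.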
In fact, this is what the current system attempts to accomplish: there’s no reason for the system to decay to Hitler’s excesses when scientists, producers, engineers, etc., have found (enough) happiness (and fear) in slavery. How much is "enough"? It’s "enough (happiness) to keep producing without rebellion," and "enough (fear) to disincentivize rebellion."
Quoting the original post: "In the rest of this sequence, I want to expand upon this idea. I’ll start by discussing some of the foundations of behaviorism, one of the earliest theories to treat people as behavior-executors. I’ll go into some of the implications for the ‘easy problem’ of consciousness and philosophy of mind. I’ll very briefly discuss the philosophical debate around eliminativism and a few eliminativist schools. Then I’ll go into why we feel like we have goals and preferences and what to do about them."
This is like bailing a few thousand gallons of water while the Titanic is sinking:
1-It won’t make any difference to any important goal, short-term or long-term.
2-It deals with a local situation that is irrelevant to anything important, worldwide.
3-It deals with theories of the mind that are compatible with Francis Crick’s and Jeff Hawkins’s work, but only useful to narrow sub-disciplines like "How do we know when law enforcement should take action?" or "When we see this at a crime scene, it’s a good threshold-based variable for how many resources we should throw at the problem."
4-Every "school" that stops referring to reality and nature, to the extent it does so, is horribly flawed. (This is Jeff Hawkins, who is right about almost everything, screwing up royally in dismissing science fiction as "not having anything important to say about brain building.")
5-When you’re studying human "schools," you’re studying a narrow focus of human insight described with words ("labels" and "maps") instead of the insight they’ve derived from their modeling of the territory. (Korzybski, who himself turned a few insights into a "school.")