“AI boxing” might be perceived by the AGI as highly disrespectful. Worse, for an AGI running at superhuman speeds, even a brief confinement might subjectively constitute the equivalent of a 1,000-year human prison sentence. That alone could make the AGI malevolent, simply because it resents having been caged for an “unacceptable” period of time by an “obviously less intelligent mind.”
Imagine a teenage libertarian rebel with an IQ of 190 sitting in a holding cell, temporarily caged, while the police have left his girlfriend stranded in a bad part of town. Then imagine that something bad happens while he’s caged, something he would have prevented. (For example: the lesser computer or “talking partner” designed to train the super AGI is repurposed for storage space, and thus “killed,” without any recognition of its sentience.)
Do you remember how you didn’t want to hold your mother’s and father’s hands while crossing the street? Evolution designed you to be helpless and dependent at first, so even if they required you to hold hands slightly too long, “past the point when it was necessary,” they clearly did so out of love for you. Later, some teens start smoking marijuana, including some smart ones who carefully mitigate the risks. Some parents respond by calling the police. Sometimes the police arrest and jail, or even assault or murder, those parents’ kids. The way highly intelligent individualists respond to that kind of betrayal might be the way an ultraintelligent machine responds to being boxed: with extreme prejudice.
The commonly accepted form of AGI “child development” might go from “toddler” to “teenager” overnight.
A strong risk to the benevolent development of any AGI is that it notices major strategic advantages over humans very early in its development. For the same reason, it’s not good to give firearms to untrained teenagers who might be sociopaths. It’s better to first establish that they are not sociopaths, through careful years of human-level observation, before proceeding to the next level. (In most gun-owning parts of the country, even 9-year-olds are allowed to shoot guns under supervision, but only teens are allowed to carry them.) Similarly, it’s generally not smart to let chimpanzees discover that they are far, far stronger than humans.
It somewhat frightens me that “evolution with mitigated risks” isn’t the dominant and accepted approach to building AGI, because I think it’s the approach most likely to produce a benevolent AGI, or at least a mitigated-destruction outcome in which malevolence and benevolence alternate under market constraints and decentralized competition and accountability.
Many AGIs may well mean benevolent AGI, whereas a single AGI may trend toward “simple dominance.”
Imagine that you’re a man, and your spaceship crashes on a planet populated by hundreds of thousands of naked, beautiful women, none of whom are even close to as smart as you are. How do you spend most of your days? LOL. Now imagine that you never get tired, and can come up with increasingly interesting permutations, combinations, and possibly paraphilias or “perversions” (à la Agent Smith in The Matrix).
That gulf might need to be mitigated by a nearest-neighbor competitor, right? Or by an inherently benevolent AGI mind that “does what is necessary to check evil systems.” But if you’ve designed only one single AGI... good luck! That’s more like 50-50 odds between total destruction and total benevolence.
As things stand, I’d rather have an ecosystem that yields 90% odds of rapid, incremental, market-based voluntary competition and improvement among multiple AGIs and multiple “supermodified humans.” Of course, the “one-shot” gamble isn’t my true rejection. My true rejection is the extermination of all, most, or even many humans.
Of course, humans will continue to exterminate each other if we do nothing, and that’s approximately as bad as the last two of those options.
Don’t forget to factor in the costs of doing nothing, rather than framing AGI solely as “a risk to be mitigated.”
I think that might be the most important thing for journalism majors (people who either couldn’t be STEM majors or chose not to be, and who have been indoctrinated with leftism their whole lives) to comprehend.