I understand that the “turn the world into paperclips” scenario comes from Eliezer’s writing, but it is shorthand for a very unlikely outcome. Moreover, this site has drifted far from actually dealing with the problems an unfriendly AGI is likely to cause. Instead, it seems to deal with stupid human problems: foibles, unreason, etc.
The problem with this, as I see it, is that humans are a diverse group, and what’s rational for those without much brainpower is totally irrational for those with a lot of brainpower.
If you have the capacity to develop a functional AGI with mirror neurons, then that’s what you should be doing. If you have the capacity to develop a part of such an AGI, then that’s what you should be doing.
If you don’t have such a capacity (in brainpower, or in some other necessary capital, monetary or material), then you shouldn’t waste your time trying to shape the post-singularity future.
Most of this site is word games pointing out that words are inadequate communicators, generally used only to signal the status of one primate to another. True enough, but unrelated to the domain in question: how do we stop (or at least make less likely) homo economicus (var. sociopathicus) from killing or displacing the lesser primates?
First, we must realize that sociopathy is our primary problem. We are entering the singularity using sociopath-defined, sociopath-controlled, and sociopath-populated social systems. The tools of liberal democracy have been abandoned, incrementally, because of those facts. Now, it’s true that I’ve used a lot of what Marvin Minsky (more succinctly than this site) called “suitcase words.”
You can either debate me in what Ray Kurzweil calls “slow, serial, and imprecise” language (again, put more succinctly than this site manages, in his book “The Age of Spiritual Machines”), or you can interpret what I say most favorably, realize I’m right, and make your way to the fire escape.
Time is short. Human stupidity is long. We will all likely perish. Make haste.
Just to clarify: I think it’s smart to build an AGI right now that starts off not knowing much, to give it a weak robot body that can interact with the world in pursuit of a goal, to allow it unlimited self-improvement, and to raise the child with love and respect. I think it’s also good to have multiple such “Mind Children.” The more there are, the greater the likelihood that the non-sociopaths among them will be able to ameliorate the damage from the sociopaths, both by destroying them and by designing systems that reward them enough that the destructiveness of their sociopathy is never fully realized (as humans have tried and failed to do with their own systems).