Mental Impoverishment
We should be trying to create mentally impoverished AGI, not profoundly knowledgeable AGI — no matter how difficult this is relative to the current approach of starting by feeding our AIs a profound amount of knowledge.
If a healthy five-year-old[1] has general intelligence (GI) and qualia and can pass the Turing test, then profound knowledge isn’t a necessary condition of GI, qualia, or the ability to pass the Turing test. A healthy five-year-old does have GI and qualia and can pass the Turing test. So profound knowledge isn’t a necessary condition of GI, qualia, or the ability to pass the Turing test.
If GI, qualia, and the ability to pass the Turing test don’t require profound knowledge in order to arise in a biological system, then they don’t require profound knowledge in order to arise in a synthetic material [this premise seems to follow from the plausible assumption of substrate-independence]. GI, qualia, and the ability to pass the Turing test don’t require profound knowledge in order to arise in a biological system. So GI, qualia, and the ability to pass the Turing test don’t require profound knowledge in order to arise in a synthetic material.
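Schematically, both inferences above are instances of modus ponens; the letters below are shorthand introduced only for this sketch. Let F be “a healthy five-year-old has GI, qualia, and can pass the Turing test”; N be “profound knowledge is a necessary condition of GI, qualia, and passing the Turing test”; B be “GI, qualia, and Turing-test ability can arise in a biological system without profound knowledge”; and S be the corresponding claim for a synthetic material. Then:

\[
\frac{F \to \neg N \qquad F}{\neg N}
\qquad\qquad
\frac{B \to S \qquad B}{S}
\]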
A GI with qualia and the ability to pass the Turing test which arises in a synthetic material and doesn’t have profound knowledge is much less dangerous than one which arises in a synthetic material and does have profound knowledge. (The same seems true of (a) a synthetic-housed GI that lacks qualia, cannot pass the Turing test, and lacks profound knowledge, and of (b) a synthetic-housed GI that lacks qualia, can pass the Turing test, and lacks profound knowledge.)
So we ought to be trying to create either (A) a synthetic-housed GI that can pass the Turing test without qualia and without profound knowledge, or (B) a synthetic-housed GI that can pass the Turing test with qualia and without profound knowledge.
Either of these paths, the creation of (A) or (B), is preferable to our current path, no matter how long it delays the arrival of AGI. In other words, it is better that we create AGI in 100,000 years than in 20 if creating AGI in 20 means humanity’s loss of dominance or its destruction.
My arguable assumption is that what makes a five-year-old generally less dangerous than, say, an adult Einstein is a relatively profound lack of knowledge (even physical know-how seems to be a form of knowledge). All other things being equal, a five-year-old who knows how to build a pipe bomb is just as dangerous as an adult Einstein with the same knowledge, at least if “knowledge” means something like “accessible, complete understanding of x.”