Before certain MIRI papers, I came up with a steelman in which transparently written AI could never happen due to logical impossibility. After all, humans do not seem transparently written. One could imagine that the complexity necessary to approximate “intelligence” grows much faster than the intelligence’s ability to grasp complexity—at least if we mean the kind of understanding that would let you improve yourself with high probability.
This scenario seemed unlikely even at the time, and less likely now that MIRI’s proven some counterexamples to closely related claims.
I’m not sure I understand the logic of your argument. I suspect I do not understand what you mean by “transparently written.”
What it sounds like: a person created by artificial insemination is technically a Strong AI. But she can’t automatically improve her intelligence and go FOOM, because nobody designed the human brain with the intention of letting human brains understand it. She can probably grasp some of its theoretical flaws, but that doesn’t mean she can look at her neurons and figure out what they’re doing or how to fix them.
The distinction between AI and NI (Natural Intelligence) is, well, almost an artificial one. There are plenty of reasons to believe that our brains, NI as they are, are improvable by us. The broad outlines of this have existed in cyberpunk sci-fi for many years. The technology is slowly coming along, arguably no more slowly than the technology for autonomous AI is coming along.
A person created by artificial insemination is technically a strong AI? What is artificial about the human species developing additional mechanisms to get male and female germ plasm together in environments where it can grow into an adult organism? Are you confused by the fact that we, the animals doing it, have expropriated the word “artificial” to describe this new innovation in fucking that our species has come up with as part of its evolution?
I’m comfortable reserving the term AI for a thinking machine whose design deviates significantly from any natural design. Robin Hanson’s ems are different enough: in principle we don’t have to understand completely how they work, but we do have to understand quite a bit in order to port them to a different substrate. If it is going to be an organic brain based on neurons, then it should not re-use any systems more advanced than, say, the visual cortex and still get called artificial. If you are just copying the neocortex using DNA into neurons, you are just building a natural intelligence.