Not necessarily. If it takes us 15 years to kludge something together that’s twice as smart as a single human, I don’t think it’ll be capable of an intelligence explosion on any sort of time scale that could outmaneuver us. Even if the human-level AI can make something better in a tenth the time, we still have more than a year to react before even worrying about superhuman AI, never mind the sort of AI that’s so far superhuman that it actually poses a threat to the established order. An AI explosion will have to happen in hardware, and hardware can’t explode in capability so fast that it outstrips the ability of humans to notice it’s happening.
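To put rough numbers on that timeline argument (just a back-of-the-envelope sketch; the 15-year and 10x figures are the hypothetical inputs from above, not predictions), here's the arithmetic in Python:

    # Back-of-the-envelope timeline for the "tenth the time" scenario.
    # Hypothetical inputs: 15 years for humans to build the first ~human-level AI,
    # and each generation designing its successor 10x faster than the last.
    first_gen_years = 15.0
    speedup = 10.0

    # Time before the second generation exists: 15 / 10 = 1.5 years to react.
    second_gen_years = first_gen_years / speedup

    # Even if the speedup repeats forever, the follow-on generations sum to a
    # geometric series: 15 * (1/10 + 1/100 + ...) = 15 / (10 - 1) ~= 1.7 years.
    total_follow_on_years = first_gen_years / (speedup - 1.0)

    print(f"Years until generation 2: {second_gen_years:.1f}")
    print(f"Years for all later generations combined: {total_follow_on_years:.2f}")

Even if the recursion keeps going indefinitely, the later generations sum to under two years on those assumptions, which is the sense in which "more than a year to react" holds.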
One machine that’s about as smart as a human and takes millions of dollars worth of hardware to produce is not high stakes. It’ll bugger up the legal system something fierce as we try to figure out what to do about it, but it’s lower stakes than any of a hundred ordinary problems of politics. It would take an AI that is significantly smarter than a human, and that can upgrade itself quickly, to pose a threat we can’t easily handle. I suspect at least 4.9 of that 5% is similarly low-risk AI. Just because the laws of physics allow for something doesn’t mean we’re on the cusp of doing it in the real world.
You substantially overrate the legal system’s concern with simple sentient rights and basic dignity. The legal system will have no problem determining what to do with such a machine. It will be the property of whoever happens to own it under the same rules as any other computer hardware and software.
Now mind you, I’m not saying that’s the right answer (for more than one definition of right) but it is the answer the legal system will give.
It’ll be the default, certainly. But I suspect there’s going to be enough room for lawyers to play that it’ll stay wrapped up in red tape for many years. (Interestingly, I think that might actually make it more dangerous in some ways—if we really do leapfrog humans on intelligence, giving it years while we wait on lawyers might be a dangerous thing to do. OTOH, nobody is going to be shipping truckloads of silicon chips into the middle of a legal dispute like that, so it might slow things down too.)
5% is pretty high considering the purported stakes.
No doubt!