Fifteen years, plus (more importantly) everyone besides Google, is too much possibility width to use the term “very unlikely”.
I think I’d put something like 5% on AI in the next 15 years. Your estimate is higher, I imagine.
EDIT: On further reflection, my “Huh?” doesn’t square with the higher probabilities I’ve been giving lately to global (as opposed to basement) default-FOOMs, since that’s a substantial chunk of probability mass and you can see more globalish FOOMs coming from further off. The 5%-in-15-years figure would make sense given a 1/4 chance of a not-seen-coming-15-years-off basement FOOM sometime in the next 75 years. That still seems a bit low relative to my own estimate, which might be more like 40% for a FOOM sometime in the next 75 years that we can’t see coming any better than this from, say, 15 years off… but then again, half of the next 15 years are only 7.5 years off. Okay, this number makes more sense now that I’ve thought about it further. I still think I’d go higher than 5%, but anything within a factor of 2 is pretty good agreement for asspull numbers.
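For what it’s worth, here is the back-of-envelope arithmetic as I’m running it — a minimal sketch, assuming the 75-year total is just spread evenly across five 15-year windows, which is my own simplification rather than a real forecast model:

```python
# Back-of-envelope sketch: spread a total "basement FOOM we can't see coming
# 15 years off" probability evenly over the five 15-year windows in 75 years.
# The even spread is an illustrative assumption, not a real model.
windows = 75 / 15                    # five 15-year windows

per_window_low  = 0.25 / windows     # the 1/4-in-75-years figure  -> 0.05
per_window_mine = 0.40 / windows     # my ~40%-in-75-years figure  -> 0.08

print(per_window_low, per_window_mine)   # 0.05 0.08 -- within a factor of 2 of 5%
```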
This made me LOL. I hadn’t heard that term before.
I don’t understand where you’re getting that from. It obviously isn’t an even distribution over AI arriving at any point in the next 300 years. This implies your probability distribution is much more concentrated than mine, i.e., compared to me you think we have much better data about the absence of AI over the next 15 years specifically than about the 15 years after that. Why is that?
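To make the comparison concrete — a minimal sketch, assuming (purely for illustration, since neither of us actually holds this prior) a flat distribution over the next 300 years:

```python
# Baseline for comparison only: a flat prior over AI arriving sometime in the
# next 300 years gives the first 15-year window 15/300 = 5%. Assigning only 5%
# to the first window while expecting AI well before 300 years means later
# windows get much more than 5% each -- that is the sense in which the
# distribution is "concentrated" away from the near term. (My reading, stated
# as an assumption, not the other commenter's model.)
flat_first_window = 15 / 300
print(flat_first_window)   # 0.05
```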
You guys have had a discussion like this here on LW before, and you mention your disagreement with Carl Shulman in your foom economics paper. This is a complex subject and I don’t expect you all to come to agreement, or even perfect understanding of each other’s positions, in a short period of time, but it seems like you know surprisingly little about these other positions. Given its importance to your mission, I’m surprised you haven’t set aside a day for the three of you and whoever else you think might be needed to at least come to understand each other’s estimates on when foom might happen.
We spent quite a while on this once, but that was a couple of years ago and apparently things got out of date since then (also I think this was pre-Luke). It does seem like we need to all get together again and redo this, though I find that sort of thing very difficult and indeed outright painful when there’s not an immediate policy question in play to ground everything.
5% is pretty high considering the purported stakes.
No doubt!
Not necessarily. If it takes us 15 years to kludge something together that’s twice as smart as a single human, I don’t think it’ll be capable of an intelligence explosion on any sort of time scale that could outmaneuver us. Even if the human-level AI can make something better in a tenth the time, we still have more than a year to react before even worrying about superhuman AI, never mind the sort of AI that’s so far superhuman that it actually poses a threat to the established order. An AI explosion will have to happen in hardware, and hardware can’t explode in capability so fast that it outstrips the ability of humans to notice it’s happening.
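To put rough numbers on that — a toy sketch, under my own simplifying assumption that every generation after the first also builds its successor in a tenth of the previous build time (the comment above only asserts the first step):

```python
# Toy timeline: humans take 15 years to build a roughly human-level AI, and
# (by assumption here) each generation builds the next in 1/10 the time.
first_build = 15.0            # years of human effort for the first AI
speedup = 0.10                # each generation takes 1/10 of the previous time

first_self_improvement = first_build * speedup            # 1.5 years of warning
whole_cascade = first_build * speedup / (1 - speedup)     # geometric series: ~1.67 years

print(first_self_improvement, round(whole_cascade, 2))    # 1.5 1.67
```

In this toy model the warning window is set almost entirely by that first 1.5-year step; how alarming the ~1.67-year total looks is, of course, exactly the point under dispute.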
One machine that’s about as smart as a human and takes millions of dollars’ worth of hardware to produce is not high stakes. It’ll bugger up the legal system something fierce as we try to figure out what to do about it, but it’s lower stakes than any of a hundred ordinary problems of politics. It takes an AI that is significantly smarter than a human, and that has the capability of upgrading itself quickly, to pose a threat that we can’t easily handle. I suspect at least 4.9 points of that 5% is similarly low-risk AI. Just because the laws of physics allow for something doesn’t mean we’re on the cusp of doing it in the real world.
You substantially overrate the legal system’s concern with simple sentient rights and basic dignity. The legal system will have no problem determining what to do with such a machine. It will be the property of whoever happens to own it under the same rules as any other computer hardware and software.
Now mind you, I’m not saying that’s the right answer (for more than one definition of right) but it is the answer the legal system will give.
It’ll be the default, certainly. But I suspect there’s going to be enough room for lawyers to play that it’ll stay wrapped up in red tape for many years. (Interestingly, I think that might actually make it more dangerous in some ways: if the AI really does leapfrog humans on intelligence, giving it years while we wait on lawyers might be a dangerous thing to do. OTOH, there generally aren’t truckloads of silicon chips going into the middle of a legal dispute like that, so it might slow things down too.)
I think P(Google will develop HLAI in the next 15 years | anyone does) is within one or two orders of magnitude of 1.