What is the point of this argument? Is it the time-scale of the singularity, or the need for friendliness in AI? I was under the impression that it was the latter, but we've drifted well afield of it. Robin addresses one of the less pivotal elements of Eliezer's claims (1 week for 20 orders of magnitude) rather than the need for friendliness in AI. If it took 2 years to do 3 orders of magnitude instead, would we be any better able to resist? The only difference is that such an AI would have to play its cards a little closer to its vest in the early stages.
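For a sense of scale, here is a rough back-of-the-envelope sketch (my own illustration, assuming smooth exponential growth over the stated interval, which nobody in the debate literally proposed) of what the two rates being contrasted would look like per day:

```python
# Back-of-the-envelope comparison of the two growth rates mentioned above,
# assuming capability grows exponentially and evenly over the stated interval.
import math

def growth_profile(orders_of_magnitude: float, days: float) -> tuple[float, float]:
    """Return (daily growth factor, doubling time in days) for a total gain of
    `orders_of_magnitude` spread evenly over `days` days."""
    daily_factor = 10 ** (orders_of_magnitude / days)
    doubling_days = days * math.log10(2) / orders_of_magnitude
    return daily_factor, doubling_days

fast = growth_profile(20, 7)        # "1 week for 20 orders of magnitude"
slow = growth_profile(3, 2 * 365)   # "2 years to do 3 orders of magnitude"

print(f"Fast scenario: x{fast[0]:.0f} per day, doubling every {fast[1] * 24:.1f} hours")
print(f"Slow scenario: x{slow[0]:.3f} per day, doubling every {slow[1]:.0f} days")
```

Even the "slow" scenario doubles capability roughly every couple of months; the difference between the two is tactical rather than strategic, which is why the slower AI merely has to play closer to its vest early on.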
Seriously, does Robin think that we'd be OK if an AI emerged that was the equivalent of an IQ-250 human, but completely tireless and free of distractions, could be copied and distributed, and whose copies could cooperate perfectly because they all shared the same utility function and knew it, making them effectively one AI… and it wasn't friendly?
We’d be in a lot of trouble, even without any sort of intelligence explosion at all.
I take it Robin would reply that this would indeed be quite bad, but neither so bad nor so likely that we shouldn't pursue AI research fairly aggressively, given that AI research can lead to (for example) medical breakthroughs that can save or improve many lives, etc.
Or at any rate, Robin's point seems to be that the arguments that AI emergence is likely to go badly weren't very good as of 2008 (I don't know whether these arguments have been improved in the meantime).
Yeah, that last point is the other thing: I come at this not remembering which of these arguments were made before 2008 and which came after.
What do you think is the strongest presentation of the argument thus far?
Unfortunately, I'm not keeping track of which arguments count as 'thus far' and which don't, which is more or less what I just said. Unless you mean 'thus far' as in up to the end of 2012, in which case… hmm. I also haven't been keeping track of where these arguments are collected in general.
As far as I'm concerned, humans do enough obvious cognitive time-wasting (and worse) that the so-called 'low-hanging fruit' alone would be enough to take AI well beyond us, even without colossal speedups (though those would very likely follow soon), polynomial-time solutions to NP-complete problems, or molecular nanotech (and I'm not ruling any of those out). We would soon be so useless that trade is not something we could count on to save us.