That would surely be a very good argument if I were able to judge it. But can intelligence be captured by a discrete algorithm, or is it modular and therefore not subject to overall improvements that would affect intelligence itself as a meta-solution?
In other words, on a fundamental level problems are not solved; solutions are discovered by an evolutionary process. In all the discussions I have taken part in so far, ‘intelligence’ has had a somewhat proactive aftertaste. But nothing genuinely new is ever created deliberately.
This seems backwards—if intelligence is modular, that makes it more likely to be subject to overall improvements, since we can upgrade the modules one at a time. I’d also like to point out that we currently have two meta-algorithms, bagging and boosting, which can improve the performance of any other machine learning algorithm at the cost of using more CPU time.
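To make that concrete, here is a minimal sketch of the idea: wrapping one base learner in bagging and boosting and comparing the results. It uses scikit-learn, and the synthetic dataset, base model, and parameter choices are illustrative assumptions of mine, not anything specified in this discussion.

```python
# Minimal sketch: bagging and boosting as meta-algorithms wrapping a base learner.
# Requires scikit-learn >= 1.2 (earlier versions call the parameter `base_estimator`).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# A synthetic classification problem, purely for illustration.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

base = DecisionTreeClassifier(max_depth=3, random_state=0)
bagged = BaggingClassifier(estimator=base, n_estimators=100, random_state=0)
boosted = AdaBoostClassifier(estimator=base, n_estimators=100, random_state=0)

for name, model in [("single tree", base), ("bagging", bagged), ("boosting", boosted)]:
    # The ensembles fit 100 base models instead of one, trading extra CPU time
    # for (usually) better accuracy than the lone base learner.
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:11s} mean accuracy: {score:.3f}")
```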
It seems to me that, if we reach a point where we can’t improve an intelligence any further, it won’t be because it’s fundamentally impossible to improve, but because we’ve hit diminishing returns. And there’s really no way to know in advance where the point of diminishing returns will be. Maybe there’s one breakthrough point, after which it’s easy until you get to the intelligence of an average human, then it’s hard again. Maybe it doesn’t become difficult until after the AI’s smart enough to remake the world. Maybe the improvement is gradual the whole way up.
But we do know one thing. If an AI is at least as smart as an average human programmer, then if it chooses to do so, it can clone itself onto a large fraction of the computer hardware in the world, in weeks at the slowest, but more likely in hours. We know it can do this because human-written computer viruses do it routinely, despite our best efforts to stop them. And being cloned millions or billions of times will probably make it smarter, and definitely make it powerful.
In a sense, all thoughts are just the same words and symbols rearranged in different ways. But that is not the type of newness that matters. New software algorithms, concepts, frameworks, and programming languages are created all the time. And one new algorithm might be enough to birth an artificial general intelligence.
The AI will be much bigger than a virus. I assume this will make propagation much harder.
Harder, yes. Much harder, probably not, unless it’s on the order of tens of gigabytes; most Internet connections are quite fast.
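To put rough numbers on that, here is a quick back-of-the-envelope calculation; the payload sizes and link speeds below are assumptions of mine for illustration, not figures from this exchange.

```python
# Back-of-the-envelope transfer times for hypothetical payload sizes and link speeds.
# All numbers below are illustrative assumptions, not measurements.
payload_sizes_gb = [1, 10, 50]        # hypothetical sizes of the thing being copied
link_speeds_mbps = [10, 100, 1000]    # assumed end-to-end transfer rates

for size_gb in payload_sizes_gb:
    for speed_mbps in link_speeds_mbps:
        bits = size_gb * 8e9                     # decimal gigabytes -> bits
        minutes = bits / (speed_mbps * 1e6) / 60
        print(f"{size_gb:>3} GB at {speed_mbps:>4} Mbit/s: ~{minutes:.1f} min per copy")
```

Even under the slowest of these assumptions, a multi-gigabyte payload moves in hours rather than weeks; only at the tens-of-gigabytes end does the copy time start to matter.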
Anything could be possible—though the last 60 years of the machine intelligence field are far more evocative of the “blood-out-of-a-stone” model of progress.
Smart human programmers can make dark nets too. Relatively few of them want to trash their own reputations and appear in the cross-hairs of the world’s security services and law-enforcement agencies, though.
Reputation and law enforcement are only a deterrent to the mass-copies-on-the-Internet play if the copies are needed long-term (i.e., for more than a few months), because in the short term, with a little more effort, the fact that an AI was involved at all could be kept hidden.
Rather than copy itself immediately, the AI would first create a botnet that does nothing but spread itself and accept commands, like any other human-made botnet. This part is inherently anonymous; on the occasions when botnet owners do get caught, it’s usually because they try to sell access to the botnet for money, which is harder to hide. Then it can pick and choose which computers to use for computation, and exclude those that security researchers might be watching. For added deniability, it could let a security researcher catch it using compromised hosts for password cracking, to explain the CPU usage.
Maybe the state of computer security will be better in 20 years, and this won’t be as much of a risk anymore. I certainly hope so. But we can’t count on it.
Mafia superintelligence, spyware superintelligence—it’s all the forces of evil. The forces of good are much bigger, more powerful and better funded.
Sure, we should continue to be vigilant about the forces of evil—but surely we should also recognise that their chances of success are pretty slender—while still keeping up the pressure on them, of course.
Good is winning: http://www.google.com/insights/search/#q=good%2Cevil :-)
You seem to be seriously misinformed about the present state of computer security. The resources on the side of good are vastly insufficient because offense is inherently easier than defense.
Your unfounded supposition seems pretty obnoxious—and you aren’t even right :-(
You can’t really say something is “vastly insufficient”—unless you have an intended purpose in mind—as a guide to what would qualify as being sufficient.
There’s a huge population of desktop and office computers doing useful work in the world—we evidently have computer security enough to support that.
Perhaps you are presuming some other criterion. However, projecting that presumption onto me—and then proclaiming that I am misinformed—seems out of order to me.
The purpose I had in mind (stated directly in that post’s grandparent, which you replied to) was to stop an artificial general intelligence from stealing vast computational resources. Since exploits in major software packages are still commonly discovered, including fairly frequent 0-day exploits which anyone can get for free just by monitoring a few mailing lists, the computer security we have is quite obviously not sufficient for that purpose. Not only that, humans do in fact steal vast computational resources pretty frequently. The fact that no one has tried to or wants to stop people from getting work done on their office computers is completely irrelevant.
You sound bullish—when IMO what you should be doing is learning that it is presumptuous and antagonistic to publicly tell people that they are “seriously misinformed”—when you have such feeble and inaccurate evidence of any such thing. Such nonsense just gets in the way of the discussion.
Perhaps it was presumptuous and antagonistic, perhaps I could have been more tactful, and I’m sorry if I offended you. But I stand by my original statement, because it was true.
Crocker’s Rules for me. Will you do the same?
I am not sure which statement you stand by. The one about me being “seriously misinformed” about computer security? Let’s not go back to that—pulease!
The “adjusted” one—about the resources on the side of good being vastly insufficient to prevent a nasty artificial general intelligence from stealing vast computational resources? I think that is much too speculative for a true/false claim to be made about it.
The case against it is basically the case for good over evil. In the future, it seems reasonable that there will be much more ubiquitous government surveillance. Crimes will be trickier to pull off. Criminals will have more powerful weapons—but the government will know what colour socks they are wearing. Similarly, medicine will be better—and the life of pathogens will become harder. Positive forces look set to win, or at least dominate. Matt Ridley makes a similar case in his recent book “The Rational Optimist”.
Is there a correspondingly convincing case that the forces of evil will win out—and that the mafia machine intelligence—or the spyware-maker’s machine intelligence—will come out on top? That seems about as far-out to me as the SIAI contention that a bug is likely to take over the world. It seems to me that you have to seriously misunderstand evolution’s drive to build large-scale cooperative systems to entertain such ideas for very long.
I don’t have much inclination to think about my attitude towards Crocker’s Rules just now—sorry. My initial impression is not favourable, though. Maybe it would work with infrastructure—or on a community level. Otherwise the overhead of tracking people’s “Crocker status” seems considerable. You can take that as a “no”.