Reputation and law enforcement are only a deterrent to the mass-copies-on-the-Internet play if the copies are needed long-term (i.e., for more than a few months), because in the short term, with a little more effort, the fact that an AI was involved at all could be kept hidden.
Rather than copy itself immediately, the AI would first create a botnet that does nothing but spread itself and accept commands, like any other human-made botnet. This part is inherently anonymous; on the occasions where botnet owners do get caught, it’s because they try to sell use of them for money, which is harder to hide. Then it can pick and choose which computers to use for computation, and exclude those that security researchers might be watching. For added deniability, it could let a security researcher catch it using compromised hosts for password cracking, to explain the CPU usage.
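The "pick and choose which computers to use" step in this scenario amounts to a simple filter over the compromised pool. A purely illustrative sketch, assuming hypothetical host records tagged with observed monitoring indicators (all names here are my own, not from the scenario):

```python
# Illustrative sketch: keep only hosts with no signs of monitoring
# (e.g. honeypots or analysis sandboxes), as in the scenario above.
# The data layout and indicator names are assumptions for illustration.

def select_compute_hosts(hosts, watch_indicators):
    """Return hosts exhibiting none of the given monitoring indicators."""
    return [h for h in hosts if not (h["traits"] & watch_indicators)]

pool = [
    {"addr": "10.0.0.1", "traits": {"honeypot"}},
    {"addr": "10.0.0.2", "traits": set()},
    {"addr": "10.0.0.3", "traits": {"av_sandbox"}},
]
usable = select_compute_hosts(pool, {"honeypot", "av_sandbox"})
```

The point of the sketch is only that excluding watched machines is computationally trivial; the hard part for defenders is producing the indicators in the first place.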
Maybe the state of computer security will be better in 20 years, and this won’t be as much of a risk anymore. I certainly hope so. But we can’t count on it.
Mafia superintelligence, spyware superintelligence—it’s all the forces of evil. The forces of good are much bigger, more powerful and better funded.
Sure, we should continue to be vigilant about the forces of evil—but surely we should also recognise that their chances of success are pretty slender—while still keeping up the pressure on them, of course.
Good is winning: http://www.google.com/insights/search/#q=good%2Cevil :-)
You seem to be seriously misinformed about the present state of computer security. The resources on the side of good are vastly insufficient because offense is inherently easier than defense.
Your unfounded supposition seems pretty obnoxious—and you aren’t even right :-(
You can’t really say something is “vastly insufficient” unless you have an intended purpose in mind as a guide to what would qualify as sufficient.
There’s a huge population of desktop and office computers doing useful work in the world—we evidently have computer security enough to support that.
Perhaps you are presuming some other criteria. However, projecting that presumption on to me—and then proclaiming that I am misinformed—seems out of order to me.
The purpose I had in mind (stated directly in that post’s grandparent, which you replied to) was to stop an artificial general intelligence from stealing vast computational resources. Since exploits in major software packages are still commonly discovered, including fairly frequent 0-day exploits which anyone can get for free just by monitoring a few mailing lists, the computer security we have is quite obviously not sufficient for that purpose. Not only that, humans do in fact steal vast computational resources pretty frequently. The fact that no one has tried to or wants to stop people from getting work done on their office computers is completely irrelevant.
You sound bullish—when IMO what you should be doing is learning that it is presumptuous and antagonistic to publicly tell people that they are “seriously misinformed” when you have such feeble and inaccurate evidence of any such thing. Such nonsense just gets in the way of the discussion.
Perhaps it was presumptuous and antagonistic, perhaps I could have been more tactful, and I’m sorry if I offended you. But I stand by my original statement, because it was true.
Crocker’s Rules for me. Will you do the same?
I am not sure which statement you stand by. The one about me being “seriously misinformed” about computer security? Let’s not go back to that—pulease!
The “adjusted” one—about the resources on the side of good being vastly insufficient to prevent a nasty artificial general intelligence from stealing vast computational resources? I think that is much too speculative for a true/false claim to be made about it.
The case against it is basically the case for good over evil. In the future, it seems reasonable that there will be much more ubiquitous government surveillance. Crimes will be trickier to pull off. Criminals will have more powerful weapons—but the government will know what colour socks they are wearing. Similarly, medicine will be better—and the life of pathogens will become harder. Positive forces look set to win, or at least dominate. Matt Ridley makes a similar case in his recent “The Rational Optimist”.
Is there a correspondingly convincing case that the forces of evil will win out—and that the mafia machine intelligence—or the spyware-maker’s machine intelligence—will come out on top? That seems about as far-out to me as the SIAI contention that a bug is likely to take over the world. It seems to me that you have to seriously misunderstand evolution’s drive to build large-scale cooperative systems to entertain such ideas for very long.
I don’t have much inclination to think about my attitude towards Crocker’s Rules just now—sorry. My initial impression is not favourable, though. Maybe it would work with infrastructure—or on a community level. Otherwise the overhead of tracking people’s “Crocker status” seems considerable. You can take that as a “no”.