I’m not sure you’re looking at the probability of other extinction risks with the proper weighting.
That might be true. But most of them share one solution that demands research in many areas: space colonization. It is true that intelligent systems, if achievable in due time, play a significant role here. But not an exceptional role, if you disregard the possibility of an intelligence explosion, of which I am very skeptical. Further, it appears to me that donating to the SIAI would rather impede research on such systems, given their position that such systems themselves pose an existential risk. Therefore, at the moment, the possibility of risks from AI is partially outweighed: the SIAI should be supported, yet it doesn’t hold an exceptional position that would necessarily make it the one charity with the highest expected impact per donation. I am unable to pinpoint another charity at the moment, e.g. space elevator projects, because I haven’t looked into them. Nor do I know of any comparison analysis: although you and many other people claim to have calculated it, nobody has ever published their efforts. As you know, I am unable to do such an analysis myself at this point, as I am still learning the math. But I am eager to get the best information by means of feedback anyhow. Not intended as an excuse, of course.
Once you’ve got a starting point, any algorithm that can be called ‘intelligent’ at all, you’ve got a huge leap toward mathematical improvement. Algorithms have been getting faster at a higher rate than Moore’s Law has been speeding up computer chips.
That would surely be a very good argument if I were able to judge it. But can intelligence be captured by a discrete algorithm, or is it modular and therefore not subject to overall improvements that would affect intelligence itself as a meta-solution? Also, can algorithms that could be employed in real-world scenarios be sped up enough to have an effect that would warrant superhuman power? Take photosynthesis: could that particular algorithm be improved considerably, to an extent that it would be vastly better than the evolutionary one? Further, will such improvements be accomplished fast enough to outpace human progress or the adoption of the given results?
My problem is that I do not believe that intelligence is fathomable as a solution that can be applied to itself effectively. I see a fundamental dependency on unintelligent processes. Intelligence merely recapitulates prior discoveries; it alters what is already known by means of natural methods. If ‘intelligence’ is shorthand for ‘problem-solving’, then it is also the solution, which would mean that there was no problem left to be solved. This can’t be true: we still have to solve problems, and we are only able to do so more effectively when we are dealing with similar problems that can be handled by known, merely altered solutions. In other words, on a fundamental level problems are not solved; solutions are discovered by an evolutionary process. In all the discussions I have taken part in so far, ‘intelligence’ has had a somewhat proactive aftertaste. But nothing genuinely new is ever created deliberately.
Nonetheless I believe your reply was very helpful as an impulse to look at it from a different perspective. Although I might not be able to judge it in detail at this point, I’ll have to incorporate it.
That would surely be a very good argument if I were able to judge it. But can intelligence be captured by a discrete algorithm, or is it modular and therefore not subject to overall improvements that would affect intelligence itself as a meta-solution?
This seems backwards—if intelligence is modular, that makes it more likely to be subject to overall improvements, since we can upgrade the modules one at a time. I’d also like to point out that we currently have two meta-algorithms, bagging and boosting, which can improve the performance of any other machine learning algorithm at the cost of using more CPU time.
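For concreteness, here is a minimal sketch of that point in Python using scikit-learn; the library, dataset, and hyperparameters are my own illustrative choices, not something from the discussion.

```python
# Minimal sketch: bagging and boosting are meta-algorithms that wrap an arbitrary
# base learner and usually improve it, at the cost of extra CPU time.
# All specific choices below (dataset, tree depth, 50 estimators) are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

base = DecisionTreeClassifier(max_depth=3, random_state=0)            # the "other" learning algorithm
bagged = BaggingClassifier(base, n_estimators=50, random_state=0)     # bootstrap-averaged copies of it
boosted = AdaBoostClassifier(base, n_estimators=50, random_state=0)   # sequentially reweighted copies of it

for name, model in [("base", base), ("bagged", bagged), ("boosted", boosted)]:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:8s} mean CV accuracy: {score:.3f}")   # ensembles typically score higher, at ~50x the training cost
```

The point is only the shape of the trade-off: the meta-algorithms are agnostic about the base learner and buy their improvement with extra compute.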
It seems to me that, if we reach a point where we can’t improve an intelligence any further, it won’t be because it’s fundamentally impossible to improve, but because we’ve hit diminishing returns. And there’s really no way to know in advance where the point of diminishing returns will be. Maybe there’s one breakthrough point, after which it’s easy until you get to the intelligence of an average human, then it’s hard again. Maybe it doesn’t become difficult until after the AI’s smart enough to remake the world. Maybe the improvement is gradual the whole way up.
But we do know one thing. If an AI is at least as smart as an average human programmer, then if it chooses to do so, it can clone itself onto a large fraction of the computer hardware in the world, in weeks at the slowest, but more likely in hours. We know it can do this because human-written computer viruses do it routinely, despite our best efforts to stop them. And being cloned millions or billions of times will probably make it smarter, and definitely make it powerful.
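As a toy back-of-the-envelope model of why the timescale here is hours rather than weeks (the doubling time and host count below are my made-up assumptions, and the model ignores saturation, bandwidth limits, and defenders):

```python
import math

# Toy exponential-spread model with invented parameters.
initial_copies = 1
doubling_time_minutes = 15      # assumption: each copy compromises one more host every 15 minutes
target_hosts = 100_000_000      # assumption: "a large fraction" of reachable machines

doublings = math.log2(target_hosts / initial_copies)
hours = doublings * doubling_time_minutes / 60
print(f"{doublings:.1f} doublings, roughly {hours:.1f} hours")   # about 26.6 doublings, about 6.6 hours
```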
In other words, on a fundamental level problems are not solved; solutions are discovered by an evolutionary process. In all the discussions I have taken part in so far, ‘intelligence’ has had a somewhat proactive aftertaste. But nothing genuinely new is ever created deliberately.
In a sense, all thoughts are just the same words and symbols rearranged in different ways. But that is not the type of newness that matters. New software algorithms, concepts, frameworks, and programming languages are created all the time. And one new algorithm might be enough to birth an artificial general intelligence.
But we do know one thing. If an AI is at least as smart as an average human programmer, then if it chooses to do so, it can clone itself onto a large fraction of the computer hardware in the world, in weeks at the slowest, but more likely in hours. We know it can do this because human-written computer viruses do it routinely, despite our best efforts to stop them. And being cloned millions or billions of times will probably make it smarter, and definitely make it powerful.
The AI will be much bigger than a virus. I assume this will make propagation much harder.
Harder, yes. Much harder, probably not, unless it’s on the order of tens of gigabytes; most Internet connections are quite fast.
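As a rough check on that claim (the payload size and link speed below are assumptions of mine, not figures from the thread):

```python
# Rough transfer-time arithmetic; both numbers are invented for illustration.
payload_gigabytes = 20              # assume an AI image of roughly 20 GB
link_megabits_per_second = 100      # assume a fairly ordinary broadband or server uplink

payload_megabits = payload_gigabytes * 8 * 1000      # GB -> gigabits -> megabits (decimal units)
minutes = payload_megabits / link_megabits_per_second / 60
print(f"about {minutes:.0f} minutes per copy at {link_megabits_per_second} Mbit/s")   # about 27 minutes
```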
And one new algorithm might be enough to birth an artificial general intelligence.
Anything could be possible—though the last 60 years of the machine intelligence field are far more evocative of the “blood-out-of-a-stone” model of progress.
If an AI is at least as smart as an average human programmer, then if it chooses to do so, it can clone itself onto a large fraction of the computer hardware in the world, in weeks at the slowest, but more likely in hours. We know it can do this because human-written computer viruses do it routinely, despite our best efforts to stop them. And being cloned millions or billions of times will probably make it smarter, and definitely make it powerful.
Smart human programmers can make dark nets too. Relatively few of them want to trash their own reputations and appear in the cross-hairs of the world’s security services and law-enforcement agencies, though.
Reputation and law enforcement are only a deterrent to the mass-copies-on-the-Internet play if the copies are needed long-term (i.e., for more than a few months), because in the short term, with a little more effort, the fact that an AI was involved at all could be kept hidden.
Rather than copy itself immediately, the AI would first create a botnet that does nothing but spread itself and accept commands, like any other human-made botnet. This part is inherently anonymous; on the occasions where botnet owners do get caught, it’s because they try to sell use of them for money, which is harder to hide. Then it can pick and choose which computers to use for computation, and exclude those that security researchers might be watching. For added deniability, it could let a security researcher catch it using compromised hosts for password cracking, to explain the CPU usage.
Maybe the state of computer security will be better in 20 years, and this won’t be as much of a risk anymore. I certainly hope so. But we can’t count on it.
Mafia superintelligence, spyware superintelligence—it’s all the forces of evil. The forces of good are much bigger, more powerful and better funded.
Sure, we should continue to be vigilant about the forces of evil—but surely we should also recognise that their chances of success are pretty slender—while still keeping up the pressure on them, of course.
Good is winning: http://www.google.com/insights/search/#q=good%2Cevil :-)
You seem to be seriously misinformed about the present state of computer security. The resources on the side of good are vastly insufficient because offense is inherently easier than defense.
Your unfounded supposition seems pretty obnoxious—and you aren’t even right :-(
You can’t really say something is “vastly insufficient”—unless you have an intended purpose in mind—as a guide to what would qualify as being sufficient.
There’s a huge population of desktop and office computers doing useful work in the world—we evidently have computer security enough to support that.
Perhaps you are presuming some other criteria. However, projecting that presumption onto me—and then proclaiming that I am misinformed—seems out of order to me.
You can’t really say something is “vastly insufficient” unless you have an intended purpose in mind. There’s a huge population of desktop and office computers doing useful work in the world—we have computer security enough to support that.
The purpose I had in mind (stated directly in that post’s grandparent, which you replied to) was to stop an artificial general intelligence from stealing vast computational resources. Since exploits in major software packages are still commonly discovered, including fairly frequent 0-day exploits which anyone can get for free just by monitoring a few mailing lists, the computer security we have is quite obviously not sufficient for that purpose. Not only that, humans do in fact steal vast computational resources pretty frequently. The fact that no one has tried to or wants to stop people from getting work done on their office computers is completely irrelevant.
You sound bullish—when IMO what you should be doing is learning that it is presumptuous and antagonistic to publicly tell people that they are “seriously misinformed”—when you have such feeble and inaccurate evidence of any such thing. Such nonsense just gets in the way of the discussion.
IMO what you should be doing is learning that it is presumptuous and antagonistic to publicly tell people that they are “seriously misinformed”—when you have such feeble and inaccurate evidence of any such thing. Such nonsense just gets in the way of the discussion.
Perhaps it was presumptuous and antagonistic, perhaps I could have been more tactful, and I’m sorry if I offended you. But I stand by my original statement, because it was true.
Crocker’s Rules for me. Will you do the same?
I am not sure which statement you stand by. The one about me being “seriously misinformed” about computer security? Let’s not go back to that—pulease!
The “adjusted” one—about the resources on the side of good being vastly insufficient to prevent a nasty artificial general intelligence from stealing vast computational resources? I think that is much too speculative for a true/false claim to be made about it.
The case against it is basically the case for good over evil. In the future, it seems reasonable to expect much more ubiquitous government surveillance. Crimes will be trickier to pull off. Criminals will have more powerful weapons—but the government will know what colour socks they are wearing. Similarly, medicine will be better—and the life of pathogens will become harder. Positive forces look set to win, or at least dominate. Matt Ridley makes a similar case in his recent book, “The Rational Optimist”.
Is there a correspondingly convincing case that the forces of evil will win out—and that the mafia machine intelligence—or the spyware-maker’s machine intelligence—will come out on top? That seems about as far-out to me as the SIAI contention that a bug is likely to take over the world. It seems to me that you have to seriously misunderstand evolution’s drive to build large-scale cooperative systems to entertain such ideas for very long.
I don’t have much inclination to think about my attitude towards Crocker’s Rules just now—sorry. My initial impression is not favourable, though. Maybe it would work with infrastructure—or on a community level. Otherwise the overhead of tracking people’s “Crocker status” seems considerable. You can take that as a “no”.
I believe your reply was very helpful as an impulse to look at it from a different perspective. Although I might not be able to judge it in detail at this point, I’ll have to incorporate it.
Thank you for continuing to engage my point of view, and offering your own.
I do not believe that intelligence is fathomable as a solution that can be applied to itself effectively.
That’s an interesting hypothesis which easily fits into my estimated 90+ percent bucket of failure modes. I’ve got all kinds of such events in there, including things such as: there’s no way to understand intelligence, there’s no way to implement intelligence in computers, friendliness isn’t meaningful, CEV is impossible, they don’t have the right team to achieve it, hardware will never be fast enough, powerful corporations or governments will get there first, etc. My favorite is: no matter whether it’s possible or not, we won’t get there in time; basically, that it will take too long to be useful. I don’t believe any of them, but I do think they have solid probabilities which add up to a great amount of difficulty.
But the future isn’t set; these are just probabilities, and we can change them. I think we need to explore this as much as possible, to see what the real math looks like, to see how long it takes, to see how hard it really is. Because the payoffs, or the costs of failure, are in that same realm of ‘astronomical’.
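As the very crudest sketch of “what the real math looks like”, here is how a handful of individually modest failure probabilities compound into a 90-ish percent bucket; every number below is invented for illustration, and independence is assumed purely for simplicity.

```python
# Invented, illustrative probabilities for a few of the failure modes listed above;
# independence is assumed only to keep the arithmetic trivial.
failure_modes = {
    "can't understand intelligence": 0.30,
    "can't implement it in computers": 0.20,
    "friendliness isn't meaningful": 0.25,
    "outpaced by corporations or governments": 0.40,
    "takes too long to be useful": 0.50,
}

p_success = 1.0
for p_fail in failure_modes.values():
    p_success *= 1.0 - p_fail            # success requires dodging every failure mode

print(f"P(overall success) ~ {p_success:.2f}")      # ~ 0.13
print(f"P(overall failure) ~ {1 - p_success:.2f}")  # ~ 0.87, i.e. a 90-ish percent bucket
```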