You sound bullish. IMO what you should be doing is learning that it is presumptuous and antagonistic to publicly tell people that they are “seriously misinformed” when you have such feeble and inaccurate evidence of any such thing. Such nonsense just gets in the way of the discussion.
Perhaps it was presumptuous and antagonistic, perhaps I could have been more tactful, and I’m sorry if I offended you. But I stand by my original statement, because it was true.
Crocker’s Rules for me. Will you do the same?
I am not sure which statement you stand by. The one about me being “seriously misinformed” about computer security? Let’s not go back to that, puh-lease!
The “adjusted” one—about the resources on the side of good being vastly insufficient to prevent a nasty artificial general intelligence from stealing vast computational resources? I think that is much too speculative for a true/false claim to be made about it.
The case against it is basically the case for good over evil. In the future, it seems reasonable to expect much more ubiquitous government surveillance. Crimes will be trickier to pull off. Criminals will have more powerful weapons, but the government will know what colour socks they are wearing. Similarly, medicine will be better, and the life of pathogens will become harder. Positive forces look set to win, or at least dominate. Matt Ridley makes a similar case in his recent “The Rational Optimist”.
Is there a correspondingly convincing case that the forces of evil will win out, and that the mafia’s machine intelligence (or the spyware-maker’s) will come out on top? That seems about as far-out to me as the SIAI contention that a bug is likely to take over the world. It seems to me that you have to seriously misunderstand evolution’s drive to build large-scale cooperative systems to entertain such ideas for very long.
I don’t have much inclination to think about my attitude towards Crocker’s Rules just now, sorry. My initial impression is not favourable, though. Maybe it would work with supporting infrastructure, or at a community level. Otherwise the overhead of tracking people’s “Crocker status” seems considerable. You can take that as a “no”.