Both types of software are powerful tools. Powerful tools are dangerous in the wrong hands, because they amplify the power of their users. That is the gist of the analogy.
I expect EMACS has been used for all kinds of evil purposes, from writing viruses, trojans, and worms to tax evasion and fraud.
I note that Anders Sandberg recently included:
“Otherwise the terrorists will win!”
…in his list of signs that you might be looking at a weak moral argument.
That seems rather dubious as a general motto, but in this case, I am inclined to agree. In the case of intelligent machines, the positives of openness substantially outweigh its negatives, IMO.
Budding machine intelligence builders badly need to signal that they are not going to screw everyone over. How else are other people to know that they are not planning to screw everyone over?
Such signals should be expensive and difficult to fake. In this case, about the only credible signal is maximum transparency. I am not going to screw you over, and look, here is the proof: what’s mine is yours.
If you don’t understand something I’ve written, please ask for clarification. Don’t guess what I said and respond to that instead; that’s obnoxious. Your comparison of my argument to
“Otherwise the terrorists will win!”
leads me to believe that you didn’t understand what I said at all. How is destroying the world by accident like terrorism?
Er, characterising someone who disagrees with you on a technical point as “obnoxious” is not terribly great manners in itself! I never compared destroying the world by accident with terrorism—you appear to be projecting. However, I am not especially interested in the conversation being dragged into the gutter in this way.
If you did have a good argument favouring closed-source software and reduced transparency, I think there has been a reasonable chance to present it. However, if you can’t even be civil, perhaps you should consider waiting until you can.
I gave an argument that open-sourcing AI would increase the risk of the world being destroyed by accident. You said
I note that Anders Sandberg recently included: “Otherwise the terrorists will win!” …in his list of signs that you might be looking at a weak moral argument.
I presented the mismatch between this statement and my argument as evidence that you had misunderstood what I was saying. In your reply,
I never compared destroying the world by accident with terrorism—you appear to be projecting.
You are misunderstanding me again. I think I’ve already said all that needs to be said, but I can’t clear up confusion if you keep attacking straw men rather than asking questions.