The good guys want to form a large cooperative network with each other, to help ensure they reach the goal first. Sharing is one of the primary ways they have of signalling to each other that they are good guys. Signalling must be expensive to be credible, and sharing is a nice, relevant, expensive signal. Being secretive, and failing to share, marks you as a selfish bad guy in the eyes of the sharers.
It is not an advantage to be recognised by the good guys as a probable bad guy. For one thing, it most likely means you get no technical support.
A large cooperative good-guy network is a major win in terms of risk, compared with the scenario where everyone is secretive. The bad guys get some shared source code, but that in no way makes up for how much worse their position is overall.
To get ahead, the bad guys have to pretend to be good guys. To convince others of this, in the face of innate human lie-detector abilities, they may even need to convince themselves that they are good guys...
We discussed this very recently.
You never did address the issue I raised in the linked comment. As far as I can tell, it’s a showstopper for open-access development models of AI.
You gave some disadvantages of openness, and I responded with a list of advantages of openness. It is not clear to me why you concluded that this was not responsive.
Conventional wisdom about open source and security is that openness helps; see, for example, Bruce Schneier on the topic.
Personally, I think the benefits of openness win out in this case too.
That is especially true for the “inductive inference” side of things, which I estimate to be about 80% of the technical problem of machine intelligence. Keeping that secret is just a fantasy: versions of it are going to be embedded in every library on every mobile computing device on the planet, doing input prediction, compression, and pattern completion. It is core infrastructure. You can’t hide things like that.
Essentially, you will have to learn to live with the possibility of bad guys using machine intelligence to help themselves. You can’t really stop that, so don’t think that you can; instead, look at what you can change: reducing the opportunities for them to win, limiting the resulting damage, and so on.
What linked comment?
The first comment here, I believe.