I disagree with most of that analysis. I assume machine intelligence will catalyse its own creation. I fully expect that some organisations will stick with secret source code. How could the probability of that possibly be as low as 0.8!?!
I figure that the use of open source software is more likely to lead to a more even balance of power, and less likely to leave a corrupt organisation in charge of the planet’s most advanced machine intelligence efforts. That assessment is mostly based on the software industry to date, where many of the worst abuses appear to me to have occurred at the hands of proprietary software vendors.
If you have an unethical open source project, people can just fork it and make an ethical version. With a closed source project, people don’t have that option; they often have to go with whatever they are given by those in charge of the project.
Nor am I assuming that no team will ever win. If there is to be a winner, we want the best possible lead-up. The “trust us” model is not it, not by a long shot.
There are two problems with this reasoning. First, you have the causality backwards: makers of open-source software are less abusive than makers of closed-source software not because open-source is such a good safeguard, but because the sorts of organizations that would be abusive don’t open source in the first place.
And second, if there is an unethical AI running somewhere, then forking the code will not save humanity. Forking is a defense against not having good software to use yourself; it is not a defense against other people running software that does bad things to you.
“you have the causality backwards: makers of open-source software are less abusive than makers of closed-source software not because open-source is such a good safeguard, but because the sorts of organizations that would be abusive don’t open source in the first place.”
Really? I just provided an example of a mechanism that helps keep open source software projects ethical: if the makers attempt to exploit their customers, it is much easier for the customers to switch to a more ethical fork, because, unlike with a proprietary project, creating such a fork does not violate copyright law. Though you said you were pointing out problems with my reasoning, you didn’t actually point out any problems with that reasoning.
We saw an example of this kind of thing very recently with LibreOffice. The OpenOffice.org developers became worried that their adopted custodian, Oracle, was going to screw the customers of their project, so, to protect their customers and themselves, they forked it and went their own way.
“if there is an unethical AI running somewhere, then forking the code will not save humanity. Forking is a defense against not having good software to use yourself; it is not a defense against other people running software that does bad things to you.”
If other people are running software that does bad things to you, then running good quality software yourself most certainly is a kind of defense. It means you are better able to construct defenses, better able to anticipate their attacks, and so on. Better brains make you more powerful.
Compare with the closed-source alternative: if other people are running software that does bad things to you, and you have no way to run such software yourself, because it runs on their servers from secret source code that is also protected by copyright law, then you are probably pretty screwed.