I vote that criminal activity shouldn’t be endorsed in general.
On first reading, I read your name as “jailbot,” which seemed pretty appropriate for this comment.
Discussions of illicit drugs or ways of getting copyrighted material without the consent of the copyright holder aren’t unprecedented on LW.
With the difference that many people think it may have been a mistake to make those things illegal to begin with. People considering industrial sabotage to stop UFAI probably don’t think that industrial sabotage should be legal in general.
I see this question as analogous to the discussion in the Brain Preservation Foundation thread about whether not donating reveals preferences or exposes belief in belief. Why is asking about non-lethal sabotage too qualitatively different to get at the same question?
Because it’s bad tactics to endorse it in the open, or because sabotaging unfriendly AI research is a case of “not even if it’s the right thing to do”?
I assume you’d slow down or put the kibosh on a not-proved-to-be-friendly AGI project if you had the authority to do so. But you wouldn’t interfere if you didn’t have legitimate authority over the project? There are plenty of legal but still ethical-norm-breaking opportunities for sabotage (deny the people on the project tenure if you’re at a university, hire the best researchers away, etc.).
Do you think this shouldn’t be discussed out of respect for the law, out of respect for the autonomy of researchers, or a mix of both?
If we were certain a uFAI were going online in a matter of days, it would be everyone’s responsibility to stop it by any means possible. Imminent threat to humanity and all that.
However, there’s a very low probability that it’ll ever get to that point, and talking about and endorsing (hypothetical) unethical activity will impose social costs in the meantime. So it’s a net negative to discuss it.
What specifically do you consider low probability? That a uFAI will ever be launched, or that there will be an advance, high-credibility warning?
I’d argue the latter. It’s hard to imagine how you could know in advance that a uFAI has a high chance of working, rather than being one of thousands of ambitious AGI projects that simply fail.
(Douglas Lenat comes to you, saying that he’s finished a powerful fully general self-modifying AI program called Eurisko, which has done very impressive things in its early trials, so he’s about to run it on some real-world problems on a supercomputer with Internet access; and by the way, he’ll be alone all tomorrow fiddling with it, would you like to come over...)
Sorry, I was imprecise. I consider it likely that eventually we’ll be able to make uFAI, but unlikely that any particular project will make uFAI. Moreover, we probably won’t get appreciable warning of uFAI, because if researchers knew they were making a uFAI, they wouldn’t make one.
Thus, we have to adopt a general strategy that can’t target any specific research group. Sabotage does not scale well, and would only drive research underground while imposing social costs on us in the meantime. The best bet, then, is to promote awareness of uFAI risks and try to have friendliness theory completed by the time the first AGI goes online. Not surprisingly, this seems to be what SIAI is already doing. Discussion of sabotage just harms that strategy.
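(To make the “unlikely for any particular project, likely eventually” point concrete, with purely illustrative numbers: suppose there were N = 1000 independent AGI projects, each with a p = 0.003 chance of success. Then the probability that at least one succeeds is 1 − (1 − p)^N = 1 − 0.997^1000 ≈ 95%, even though any single project is 99.7% likely to fail. A strategy aimed at specific projects would have to guess which of the thousand matters; a general strategy covers all of them.)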
It might damage LW’s credibility among decision-makers and public opinion. Of course, it might improve LW’s credibility among certain other groupings. PR is a tricky balancing act. PETA is a good example in both cases.
Indeed. Comments like those increase my belief that LW is home to some crazy, dangerous doomsday cult that maintains weapons caches and prepares terror attacks.
(I still don’t assign a high probability to that belief, but it’s higher than for most communities I know.)
I came here from OB, and I lurked a bit before posting precisely because I didn’t like this kind of undertone. If that attitude becomes more prevalent, I will probably go away to avoid any association.
I was going to say: “Well, on the bright side, at least your username is not really Googleable.” Then I Googled it just for fun, and found you on the first page of the results (゜レ゜)
Intelligence does not imply benevolence. Surely there are already people who will try to sabotage unfriendly projects.
I don’t think you quite understand the hammer that will come down if anything comes of your questions. Nothing of what you have built will be left. I don’t think many non-illegal sabotage avenues are open to this community. You can’t easily influence the tenure process, and hiring the best researchers away is notoriously difficult, even for very good universities/labs.
Re: OP, I think you are worried over nothing.
That’s why I asked whether Less Wrongers would prefer SI to devote more of its time to slowing down other people’s unfriendly AI relative to how much time it spends constructing FAI. I agree that SI staff shouldn’t answer.
I think any sequence of events that leads to anyone at all, in any way associated with either Less Wrong or SI, doing anything to hinder any research would be a catastrophe for this community. At best, you will get a crank label (more than now, that is); at worst, the FBI will get involved.
I think you may be a bit late.
Yes. It’s much better to tile the universe with paperclips than to have this community looked on poorly. How ever could he have gotten his priorities so crossed?
If there is a big enough AI project out there, especially one that will be released as freeware, others won’t work on competing projects: doing so would be high-risk and yield a low return on investment.
Three ideas to prevent unfriendly AGI (Scroll to “Help good guys beat the arms race”)
Also, I think my other two risky-AGI-deterring ideas are doable simultaneously. Not sure how many people it would take to get those moving on a large enough scale, but it’s probably nowhere near as many as making a friendly AGI would require.
Three legal ideas to prevent risky AGI projects
Sabotage would probably backfire: Why sabotaging unfriendly AGI wouldn’t work