I don’t think you quite understand the hammer that will come down if anything comes of your questions. Nothing of what you built will be left. I don’t think many non-illegal sabotage avenues are open to this community. You can’t easily influence the tenure process, and hiring the best researchers is notoriously difficult, even for very good universities/labs.
Re: OP, I think you are worried over nothing.
That’s why I asked whether Less Wrongers would prefer SI to devote more of its time to slowing down other people’s unfriendly AI relative to how much time it spends constructing FAI. I agree, SI staff shouldn’t answer.
I think any sequence of events in which anyone associated with either LessWrong or SI does anything to hinder any research would be a catastrophe for this community. At best, you will get a crank label (more than now, that is); at worst, the FBI will get involved.
I think you may be a bit late.
Yes. It’s much better to tile the universe with paperclips than to have this community looked on poorly. How ever could he have gotten his priorities so crossed?
If there is a big enough AI project out there, especially one that will be released as freeware, others won’t work on competing projects. Doing so would be high-risk and yield a low return on investment.
Three ideas to prevent unfriendly AGI (Scroll to “Help good guys beat the arms race”)
Also, I think my other two ideas for deterring risky AGI are doable simultaneously. I’m not sure how many people it would take to get those moving on a large enough scale, but it’s probably nowhere near as many as making a friendly AGI would require.
Three legal ideas to prevent risky AGI projects
Sabotage would probably backfire: Why sabotaging unfriendly AGI wouldn’t work