I’m sure that people as smart as those who visit LW and work for the SIAI would be capable of carrying out targeted killings of rogue researchers. Given that uFAI is portrayed as the biggest risk humanity faces, I don’t think it is a distraction to ask directly why we are not talking about preemptive action against uFAI rather than trying to come up with a conceptual framework for FAI. Even if the SIAI succeeds in creating a mathematically strict model that guarantees FAI, who says that the people working on AGI would implement it? Given the stakes, the option of taking active measures against anyone carelessly working on AGI should be seriously considered.
1.) We have no guarantee that FAI is possible.
2.) We don’t know if the SIAI will succeed before the first AGI is built.
3.) We don’t know whether, even if the SIAI succeeds, others will listen.
Under these circumstances, we should consider preemptive measures against anyone who might succeed in creating AGI.
Someone asked me to delete this comment. I do not think deletion is necessary in cases like this. It’s enough for the readers to downvote it into oblivion. Note that after it hits −3 most readers will stop seeing it, so don’t panic if it only gets downvoted to −3 instead of −40.
You simply aren’t living up to your own rationality here. I can understand why you have to deny it in public, of course, since it is illegal.
I think it is just ridiculous that people think about taking out terrorists and nuclear facilities, but not about AI researchers who, given your view that AI can go FOOM, could destroy the universe.
Why don’t we at least talk about contacting those people and telling them how dangerous their work is, or maybe even trying to ensure that they don’t get any funding?
If someone does think about it, do you think they would do it in public and we would ever hear about it? If someone’s doing it, I hope they have the good sense to do it covertly instead of discussing all the violent and illegal things they’re planning on an online forum.
I deleted all my other comments regarding this topic. I just wanted to figure out whether you’re preaching the imminent rise of sea levels while at the same time purchasing ocean-front property. Your comment convinced me.
I guess it was obvious, but too interesting to ignore. Others will come up with this idea sooner or later, and as the idea of AI going FOOM becomes mainstream, people are going to act on it.
Thank you for deleting the comments; I realize that it’s an interesting idea to play with, but it’s just not something you can talk about in a public forum. Nothing good will come of it.
As usual, my lack of self-control and my failure to think things through made me act like an idiot. I guess someone like me is an even bigger risk :-(
I’ve even got a written list of rules I should follow but sometimes fail to heed: Think before talking to people or writing stuff in public; Be careful about what you say and write; Rather write and say less, or nothing at all, if you’re not sure it isn’t stupid to do so; Be humble; You think you don’t know much, but you actually don’t know nearly as much as you think; Other people won’t perceive what you say the way you intended it; Other people may take things really seriously; You often fail to perceive that matters actually are serious, so be careful…
A little bit of knowledge is a dangerous thing. It can convince you that an argument this idiotic and this sloppy is actually profound. It can convince you to publicly make a raging jackass out of yourself, by rambling on and on, based on a stupid misunderstanding of a simplified, informal, intuitive description of something complex. — The Danger When You Don’t Know What You Don’t Know
Okay, I’m convinced. Let’s add paramilitary.lesswrong.com to the subreddit proposal.