The best way to limit the impact of a rogue AI is to limit the production of autonomous (intelligent) lethal weapons. More details in this post and its comments:
https://www.lesswrong.com/posts/b2d3yBzzik4hajGni/limit-intelligent-weapons
I weak-downvoted this: in general I think it is informative for people to just state their opinion, but in this case the opinion had very little to do with the content of the post and was not argued for. The linked post also did not engage with any of the existing arguments around TAI risk.
(Not that I disagree with “limiting the spread of autonomous weapons is going to lead to fewer human deaths in expectation”, but I don’t think it is the best strategy for limiting that kind of impact.)
Which part of my statement does not make sense, and how so?
My statement is relevant to the post. The beginning of the article partially defined hard alignment as preventing AI from destroying everything of value to us. The most likely way a rogue AI would do that is by gaining unauthorized access to weapons with built-in intelligence.
I don’t think the most likely way is gaining access to autonomous weapons designed to kill. An AI smarter than all humans has many different options for taking over, including making its own autonomous weapons.
Let’s agree that the first step towards AI alignment is to refrain from building intelligent machines that are designed to kill people.
I don’t get it; why would ‘refraining from designing intelligent machines to kill people’ help prevent AI from killing everyone? That’s a really bold and outlandish claim that I think you have to actually defend and not just tell people to agree with… Like, from my perspective, you’re just assuming the hard parts of the problem don’t exist, and replacing all the hard parts with an easier problem (‘avoid designing AIs to kill people’). It’s the hard parts of the problem that seem on track to kill us; solving the easier problem doesn’t seem to help.
Yes, we need to solve the harder alignment problems as well. I suggested limiting intelligent weapons as the first step, because these are the most obviously misanthropic AI being developed, and the clearest vector of attack for any rogue AI. Why don’t we focus on that first, before we turn to the more subtle vectors?
The end of the post you linked said, basically, “we need a plan”. Do you have a better one?