Which part of my statement does not make sense, and how so?
My statement is relevant to the post. The beginning of the article partially defined hard alignment as preventing AI from destroying everything of value to us. The most likely way a rogue AI would do that is by gaining unauthorized access to weapons with built-in intelligence.
I don't think the most likely way is gaining access to autonomous weapons designed to kill. An AI smarter than all humans has many different options for taking over, including making its own autonomous weapons.