“Advocacy pushes you down a path of simplifying ideas rather than clearly articulating what’s true, and pushing for consensus for the sake of coordination regardless of whether you’ve actually found the right thing to coordinate on.”
Simplifying (abstracting) ideas allows us to use them efficiently.
Coordination allows us to combine our talents to achieve a common goal.
The right thing is the one which best helps us achieve our cause.
Our cause, in terms of alignment, is making intelligent machines that help us.
The first step towards helping us is not killing us.
Intelligent weapons are machines with built-in intelligence capabilities specialized for the task of killing humans.
Yes, a rogue AI could try to kill us in other ways: bioweapons, power-grid sabotage, communications sabotage, etc. Limiting the development of novel microorganisms, especially AI-assisted development, would also be a very good step. However, bioweapons research requires human action, and very few humans are both capable of and willing to cause human extinction. Sabotage of civilian infrastructure could cause a lot of damage, especially to the power grid, which may be vulnerable to cyberattack: https://www.gao.gov/blog/securing-u.s.-electricity-grid-cyberattacks
Human mercenaries causing a societal collapse? That would mean a large number of individuals who are willing to take orders from a machine to actively harm their communities. Very unlikely.
The more human action an AI requires to function, the more likely it is that a human will notice and eliminate a rogue AI. Unfortunately, the development of weapons that require less human action is proceeding rapidly.
Suppose an LLM or other reasoning model entered a bad loop, perhaps as the result of a joke, in which it sought to destroy humanity. Suppose it wrote a program which, when installed by an unsuspecting user, created a much smaller model, and that model used other machines to communicate with autonomous weapons, instructing them to destroy key targets. The damage in this scenario would be proportional to the power and intelligence of the autonomous weapons. Hence the need to stop developing them immediately.
“Human mercenaries causing a societal collapse? That would mean a large number of individuals who are willing to take orders from a machine to actively harm their communities. Very unlikely.”
I’m wondering how you can hold that position, given all the recent social disorder we’ve seen around the world in which social-media-driven outrage cycles have been a significant accelerating factor. People are absolutely willing to “take orders from a machine” (i.e. participate in collective action based on memes from social media) in order to “harm their communities” (i.e. cause violence and property destruction).
These memes have been magnified by the words of politicians and media. We need our leaders to discuss things more reasonably.
That said, restricting social media could also make sense. A requirement for in-person verification and limitation to a single account per site could be helpful.