I also believe that even if alignment is possible, we need more time to solve it.
The “Do Not Build Uncontrollable AI” area is meant for anyone who shares this concern to join.
The purpose of this area is to contribute to restricting corporations from recklessly scaling the training and uses of ML models.
I want the area to be open for contributors who think that:
1. we’re not on track to solving safe control of AGI; and/or
2. there are fundamental limits to the controllability of AGI, and unfortunately AGI cannot be kept safe over the long term; and/or
3. corporations are causing increasing harms in how they scale uses of AI models.
After thinking about this for over three years, I now think points 1–3 are all true.
I would love more people who hold any of these views to collaborate thoughtfully across the board!