The letter feels rushed and leaves me with a bunch of questions.
1. “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
Where is the evidence of this “out-of-control race”? Where is the argument that future systems could be dangerous?
2. “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.”
These are very different concerns, and lumping them together waters down the problem the letter is trying to address. Most of them are questions about deployment rather than development.
3. I like the idea of a six-month collaboration between actors. I also like the policy asks they include.
4. The main impact of this letter would obviously come from getting the main actors (OpenAI, Anthropic, DeepMind, Meta AI, Google) to halt development. Yet those actors seem not to have been involved in the letter and, as far as I know, haven't publicly commented on it. This seems like a failure.
5. Not making it possible to verify the names is a pretty big mistake.
6. In my perception, the letter mostly comes across as alarmist at the current time, especially since it doesn't include an argument for why future systems would be dangerous. It might just end up burning political capital.
GPT-4 was rushed, and so was the OpenAI plugin store. Things are moving far too fast for comfort, so I think we can forgive this response for being rushed too. It's good to have some significant opposition working the brakes on the runaway existential-catastrophe train we've all been put on.
Update: I think it doesn't make much sense to interpret the letter literally. Instead, it can be seen as an attempt to show that a range of people think slowing down progress would be good, and I think it does an okay job at that (though I still think the wording could be much better, and the letter should present arguments for why we should decelerate).