What is the current popular (or ideally wise) wisdom regarding publishing demos of scary/spooky AI capabilities? I've heard the argument that moderately scary demos drive capability development into secrecy. Maybe it all comes down to the details of who you show what, when, and what you say. But has someone written a good post about this question?
The way it is now, when one lab has an insight, the insight will probably spread quickly to all the other labs. If we could somehow “drive capability development into secrecy,” that would drastically slow down capability development.