I’m going to say some critical stuff about this post. I hope I can do it without giving offense. This is how it seemed to one reader. I’m offering this criticism exactly because this post is, in important ways, good, and I’d like to see the author get better.
This is a long, careful post that boils down to “Someone will have to do something.” Okay, but what? It operates at a very high level of abstraction, dipping down into the concrete only for a few sentences about chair construction. It was ultimately unsatisfying to me; it wrote some checks and left them for other people to cash. That said, the notion of a sociotechnical system, and the need for an all-of-society response to AI, struck me as novel and potentially important. I look forward to seeing how the author develops them.
The post attempts to recapitulate the history of the AI risk discussion in a few aphoristic paragraphs, for somebody who has never heard it before. Who is the imagined audience for this piece? Certainly not the habitual Less Wrong reader, who has already read “List of Lethalities” or its equivalent. But it is equally unsuited to the AI novice, who needs the alarming facts spelled out more slowly and carefully. I suspect it would help if the author clarified in their own mind who they imagine is reading it.
The post has the outward structure of a logical proof, with definitions, axioms, and a proposition. But none of the points follow from one another with the rigor that would demand such a setup. When I read a math paper, I need all of those things spelled out, because I might spend fifteen minutes on a five-line definition, or need to refer back repeatedly to a theorem from several pages earlier. But this is an essay, with an essay’s lower standard of logical rigor and greater need for readability. The formal apparatus is just LARPing mathematics; it doesn’t make the argument more convincing.
Thanks very much, Carl. Your feedback is super-useful, and much appreciated. I’ll take it on board along with other comments and will work on a follow-up that gives more examples of what sort of controls might be deployed in the wider system.