I write academic papers in healthcare, psychology, and epidemiology for peer review. I don’t write blog posts every day, so thank you for your patience with this particular style, which was devised for guidelines and frameworks.
Thank you for sharing your thoughts on AI alignment, AI safety, and imminent threats. I posted this essay to demonstrate how public health guidelines and systems thinking can be useful in preventing harm and inequality, and in avoiding unforeseen negative outcomes more generally. I wanted the LessWrong audience to gain perspectives from other fields that have been addressing rapidly emerging innovations, along with their benefits and harms, for centuries, with the aim of minimising risk and maximising benefit for the wider public.
I am aware of the narrative around the ‘paperclip maximiser’ threat. However, I believe it is important to recognise that the risks AI brings should not be reduced to a single threat, a single bias, or a single path to extinction. AI is a complex system deployed within another complex system: the social structure. It should be studied with due rigour, with a focus on understanding that complexity.
If you can suggest literature on AGI alignment that recognises the complexity of the issue and applies systems thinking to the problem, I would be grateful.