I’ve read this paper and find it fascinating. I think it’s very relevant to LessWrong’s interests: not just because it’s about AI, but also because it asks hard moral and philosophical questions.
There are many interesting excerpts. For example:
The drug midazolam (also known as ‘versed,’ short for ‘versatile sedative’) is often used in procedures like endoscopy and colonoscopy… surveyed doctors in Germany who indicated that during endoscopies using midazolam, patients would ‘moan aloud because of pain’ and sometimes scream. Most of the endoscopists reported ‘fierce defense movements with midazolam or the need to hold the patient down on the examination couch.’ And yet, because midazolam blocks memory formation, most patients didn’t remember this: ‘the potent amnestic effect of midazolam conceals pain actually suffered during the endoscopic procedure’. While midazolam does prevent the hippocampus from forming memories, the patient remains conscious, and dopaminergic reinforcement-learning continues to function as normal.
The author is associated with the Foundational Research Institute, which has a variety of interests closely connected to those of LessWrong, yet some casual searches suggest they haven’t been mentioned here before.
Briefly, they seem to be focused on reducing suffering, approached from several angles, including effective altruism outreach, animal suffering, and AI risk as a potential cause of great suffering.
Do Artificial Reinforcement-Learning Agents Matter Morally?