I’m all for thinking about brain-computer interfaces—what forms might they take, how likely are they, how desirable are they? I would actually lump this into the category of AGI safety research, not just because I draw the category pretty broadly, but because it’s best done and likeliest to be done by the same people who are doing other types of thinking about AGI safety. It’s possible that Robin has something narrower in mind when he talks about “AI risk”, so maybe there’s some common ground where we both think that brain-computer interface scenarios deserve more careful analysis? Not sure.
There does seem to be a disconnect, where people like Ray Kurzweil and Elon Musk say that brain-computer interfaces are critical for AGI safety (e.g. WaitButWhy), while most AGI safety researchers (e.g. at MIRI and OpenAI) don’t seem to talk about brain-computer interfaces at all. I think the latter group has concluded that brain-computer interfaces are unhelpful, not just unlikely, but I haven’t seen a good articulation of that argument. It could also just be oversight / specialization. (Bostrom’s Superintelligence has just a couple of paragraphs about brain-computer interfaces, generally skeptical but noncommittal, if memory serves.) It’s on my list of things to think more carefully about and write up, if I ever get around to it and no one else does it first.
Just a quick note on where you say “I find this confusing. What are the other 19 chapter titles?”: the book he’s referring to is “Global Catastrophic Risks”, edited by Bostrom and Ćirković. There are 22 chapters, with the first few being intro and background, hence ~20 chapters on big risks, only one of which is about AI.