There’s a lot of “Neuralink will make it easier to solve the alignment problem” stuff going around the mainstream internet right now in response to Neuralink’s recent demo.
I’m inclined to agree with Eliezer that this seems wrong: either AGI will be aligned, in which case it will make its own neuralink and won’t need ours, or it will be unaligned, and you really wouldn’t want to connect with it. You can’t make horses competitive with cars by giving them exoskeletons.
But is there much reason to push back against this?
Providing humans with cognitive augmentation probably would help to solve the alignment problem, in a bunch of indirect ways.
It doesn’t seem like a dangerous error at all. It feeds a public desire to understand how AGI might work. Neuralink itself is a great project for medical science. Wrong beliefs generally cause bad consequences, but I’m having difficulty seeing what the bad consequences would be here.
The obvious bad consequence is a false sense of security leading people to just get BCIs instead of trying harder to shape (e.g. delay) AI development.
“You can’t make horses competitive with cars by giving them exoskeletons.” <-- this reads to me like a separate argument, rather than a restatement of the one that came before.
I agree that BCI seems unlikely to be a good permanent/long-term solution, unless it helps us solve alignment, which I think it could. It could also just defuse a conflict between AIs and humans, leading us to gracefully give up our control over the future light cone instead of fighting a (probably losing) battle to retain it.
...Your post made me think more about my own (and others’) reasons for rejecting Neuralink as a bad idea… I think there’s a sense of “we’re the experts and Elon is a n00b”. This, coupled with feeling a bit burned by Elon first starting his own AI safety org and then ditching it for this… overall doesn’t feel great.
I’ve never been mad at Elon for not having decision-theoretic alignmentism. I wonder, should I be? Should I be mad about the fact that he has never talked to Eliezer (Eliezer mentioned that in passing a year or two ago on Twitter), even though he could whenever he wanted?
Also, what happened at OpenAI? He appointed some people to solve the alignment problem; I think we can infer that they told him, “you’ve misunderstood something, and the approach you’re advocating (proliferate the technology?) wouldn’t really be all that helpful”, and he responded badly to that? They never reached mutual understanding?