The obvious bad consequence is a false sense of security leading people to just get BCIs instead of trying harder to shape (e.g. delay) AI development.
“You can’t make horses competitive with cars by giving them exoskeletons.” <-- this reads to me like a separate argument, rather than a restatement of the one that came before.
I agree that BCI seems unlikely to be a good permanent/long-term solution, unless it helps us solve alignment, which I think it could. It could also just defuse a conflict between AIs and humans, leading us to gracefully give up our control over the future light cone instead of fighting a (probably losing) battle to retain it.
...Your post made me think more about my own (and others’) reasons for dismissing Neuralink as a bad idea… I think there’s a sense of “we’re the experts and Elon is a n00b”. This, coupled with feeling a bit burned by Elon first starting his own AI safety org and then ditching it for this, overall doesn’t feel great.
I’ve never been mad at Elon for not having decision-theoretic alignmentism. I wonder: should I be mad? Should I be mad about the fact that he has never talked to Eliezer (Eliezer mentioned that in passing a year or two ago on Twitter), even though he totally could whenever he wanted?
Also, what happened at OpenAI? He appointed some people to solve the alignment problem; I think we can infer that they told him, “you’ve misunderstood something, and the approach you’re advocating (proliferate the technology?) wouldn’t really be all that helpful”, and he responded badly to that? They did not reach mutual understanding?