This is probably the wrong place if you were looking for optimism about that approach. Improving purely-artificial intelligence has a much faster feedback loop, and thus a much larger exponent, than improving our own intelligence.
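To see why a faster feedback loop means a larger exponent, here is a minimal toy model (my own illustrative sketch, not something claimed above; the gain factor g and cycle time τ are assumed parameters): if each improvement cycle multiplies capability by a fixed factor g > 1 and takes wall-clock time τ, then capability grows exponentially with rate ln(g)/τ, so shortening the cycle directly inflates the exponent.

```latex
% Toy model (illustrative assumption, not from the original comment):
% each self-improvement cycle multiplies capability I by a fixed
% factor g > 1 and takes wall-clock time \tau per iteration.
I(t) = I_0 \, g^{t/\tau} = I_0 \, e^{k t},
\qquad k = \frac{\ln g}{\tau}.
% A faster loop (smaller \tau) gives a larger exponent k: AI-on-AI
% iteration plausibly cycles in hours or days, whereas BCI-mediated
% human enhancement would plausibly iterate over months or years.
```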
Of course, there's currently not much of a feedback loop of AI improving AI (though see Codex etc.), but I think we're closer to that than we are to hooking the human brain into a computer and having it actually do something useful.
If this sort of strategy is where your mind first went, I’d recommend looking into the related-in-spirit strategy of improving external assistants—i.e. amplifying our own capabilities by training AI to assist us or to help us work on AI alignment problems, with no BCI involved.
Even if the outlook is pessimistic, it is invaluable to know that an idea is unlikely to succeed before you invest your only shot in it.
Thanks for the pointers; I'll research them and reformulate my plan.
Some reading recommendations might be Learning The Prior and the AI Alignment Dataset project.