I also have concerns with this plan (mainly about timing; see my comment elsewhere in this thread). However, I disagree with your concerns. I think a CogEm as described here has much better interpretability than a human brain: we can read its connections and weights completely. Based on my neuroscience background, I think human brains are already more interpretable and controllable than black-box ML models. I think the other problems you mention are greatly mitigated by the fact that we'd have edit access to the CogEm's weights and connections, and thus would be able to redirect it much more easily than a human. Having full edit access to the weights and connections of a human brain would make that human quite controllable! Especially in combination with being able to wipe its memory and restore it to a previous state, rerun it over test scenarios many thousands of times with different parameters, etc.