Thinking that evolution is smart on the timescales we care about is probably a worse heuristic, though. Evolution can’t look ahead, which is fine when it’s possible to construct useful intermediate adaptations, but poses a serious problem when there are no useful intermediates. Infosec is as all-or-nothing as it gets: a single mistake exposes the whole system to attack by adversaries. Here, the attack could destroy the mind of the person using their neural connection.
Consider it from this perspective: a single deleterious mutation to the part of the genome encoding the security system opens the person up to having their mind poisoned in serious and sudden ways. Consider literal toxins, including the wide variety of organophosphates and other chemicals that inhibit acetylcholinesterase and cause seizures (this is how many pesticides work). But also consider memetic attacks that can cause the person to act against their own interests (yes, language permits these attacks too, but far less efficiently than being able to directly update someone’s beliefs/memories/heuristics/thoughts, which becomes entirely possible once there is a direct, physical connection to someone’s brain from outside their skull; eyes are bad enough, from this perspective!).
A secure system would not only have to be secure for the individual it evolved in, but also robust to the variety of mutations it will encounter in that individual’s descendants. And the intermediate stage, in which some individuals have secure neural communication while others can have their minds ravaged by adversaries (or unwitting friends), would prevent any widespread adoption of the genes involved.
Over millions upon millions of years, it’s possible that evolution could devise an ingenious system that gets around all of this, but my guess is that direct neural communication would only noticeably help language-bearing humans, who have existed for only ~100K years. Simpler organisms can just exchange chemicals or other simple signals. I don’t think 100K years is nearly enough time to evolve a robust-to-mutations security system for a process that can directly update the contents of someone’s mind.
This post is a bad idea and it would be better if it were taken down. It’s “penny-wise, pound-foolish” applied to epistemology and I would be utterly shocked if this post had a net positive effect.
I wrote a big critique outlining why I think it’s bad, but I couldn’t keep it civil and don’t want to spend another hour editing it until it is, so I’ll keep it brief and to the point: LessWrong has been a great source of info and discussion on COVID-19 in the past couple of weeks, much better than most mainstream sources. But as usual, I don’t recommend the site to friends or family, because I know posts like this always pop up and I don’t want to expose people to this obvious info hazard, or be put in the position of defending why I recommended a community that posts info hazards like this.
As a mostly-lurker, I’m really just raising my hand here and saying “posts like this make me extremely uncomfortable and unwilling to recommend this community to others.” Obviously not everyone wants this community to become mainstream, and I’m really not trying to make anyone feel bad, but I think it’s worth mentioning since, other than David Manheim, I don’t see my opinion represented in the comments yet, and it looks like it’s a minority one.
(Obviously it’s up to the author whether or not to remove the post—I’m not requesting anything here, just expressing my preferences.)