We want to emphasize just two points of intersection between AI in healthcare and AI safety. First, medical AI is aimed at the preservation of human lives, whereas, for example, military AI is generally focused on human destruction. If we assume that an AI preserves the values of its creators, medical AI should be comparatively harmless.
Second, the development of medical AI technologies such as neuroimplants will accelerate the emergence of AI in the form of a distributed social network of self-upgrading people. Here, again, the values of such an intelligent neuroweb would be defined by the values of its participant “nodes,” which should make it relatively safer than other routes to AI. In addition, AI based on human uploads may be less likely to undergo rapid, unlimited self-improvement because of its complex and opaque structure.
Interestingly, a few years ago I wrote an article, “Artificial intelligence in life extension,” in which I concluded that medical AI is a possible path to AI safety.