As I understand it, the idea is that the solutions to the problems listed in the article are supposed to be fundamental design principles of the AI, rather than add-ons to patch loopholes.
Augmenting ourselves is probably a good idea to pursue *in addition* to AI safety research, but I think it’s dangerous to do it *instead* of AI safety research. It’s far from impossible that artificial intelligence could, at some point, gain intelligence much faster than we can augment the rather messy human brain, and at that point the AI *needs* to have been designed in a safe way.
I’d say we should start augmenting the human brain until it’s completely replaced by a post-biological counterpart, at which point rapid improvements can begin; but unless we start early, I doubt we’ll be able to catch up with AI. I agree that this needs to happen in tandem with AI safety work.