There’s an important difference between thinking mathematically and only thinking mathematically.
I agree. I am not so sure I agree that cobbled-together AI can “quite conceivably be made safe by piecemeal engineering solutions”, and I’m pretty sure that historically at least MIRI has thought it very unlikely that they can. It does seem plausible that any potentially-dangerous AI could be made at least a bit safer by such things, and I hope MIRI aren’t advocating that no such things be done.
By 1900, the basic principles of aerodynamics in terms of lift and drag had been known for almost a century—the basic math of flight. Two problems remained: power and control. Powered heavier-than-air flight requires an efficient engine with a sufficient power-to-weight ratio. Combustion engine tech developed along a sigmoid, and by 1900 that tech was ready.
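To make “the basic math of flight” concrete, here are the standard lift and drag relations, known in essentially this form well before 1900 (the coefficients themselves had to be measured empirically; the Wrights later built their own wind tunnel for exactly that):

$$L = \tfrac{1}{2}\rho v^2 S C_L \qquad D = \tfrac{1}{2}\rho v^2 S C_D$$

where $\rho$ is air density, $v$ is airspeed, $S$ is wing area, and $C_L$, $C_D$ are the lift and drag coefficients.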
The remaining problem was control. Most of the flight pioneers either didn’t understand the importance of this problem or assumed that aircraft could be steered like boats are—with a simple rudder mechanism. The Wright Brothers—two unknown engineers—realized that steering in 3D was more complex. They solved the problem through careful observation of bird flight: birds turn by banking their whole body (thus leveraging the entire wing airfoil), a roll induced by subtle manipulation of the trailing edge of the wing. They copied this wing-warping mechanism directly in their first flying machines. Of course—they weren’t the only ones to realize this, and ailerons are functionally equivalent but more practical for fixed-wing aircraft.
Flight was achieved by technological evolution, i.e. experimental engineering, taking some inspiration from biology. Pretty much all technology is created through this kind of steady experimental/evolutionary engineering, and machine learning is on a very similar track to produce AGI in the near term.
But this is all rather reminiscent of computer security, where there are crude piecemeal things you can do that help a bit, but if you want really tight security…
Ahh, and that’s part of the problem. The first AGIs will be of sub-human, then human-level, intelligence, and Moore’s Law is about to end or has already ended, so the risk of some super-rapid superintelligence explosion in the near term is low. Most of the world doesn’t care about tight security. AGI just needs to be as safe as or safer than humans. Tight security is probably impossible regardless—you can’t prove tight bounds on any system of extreme complexity (like the real world). Tight math bounds always require ultra-simplified models.