Well, thank the benevolence of the Friendly AI that this intelligence didn’t see a helium balloon first. Just imagine the kinds of theories it might produce then!
If you see one object falling in a particular way, you might infer that all objects fall that way, but it is an extremely weak inference: the strength of a single observation is spread over the entirety of “all things”. We were confident in Newton’s formulation for so long because we had a vast store of observations, and because we were aware of the confounding influences that masked the underlying pattern: things like air resistance and buoyancy. The understanding that all things fall at the same rate was a strong and reliable inference because we observed it to hold across many, many things. Once we knew that, we could show that such behavior was consistent with Newton’s hypothesized force. More importantly, we had already determined through observation that the objects in the Solar System move in elliptical orbits, but we didn’t know why. We were able to show that Newton’s hypothesized force would produce exactly such orbits, and so we concluded that his description was correct.
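To put a rough number on that intuition, here is a minimal sketch (my own illustration, not anything from the original argument) of just how weak a single observation is, using Laplace's rule of succession: with a uniform prior over the underlying frequency, the probability that the next object falls the same way after n confirming observations is (n + 1) / (n + 2).

```python
# Laplace's rule of succession as a toy model of "all things fall this way".
# Start from a uniform Beta(1, 1) prior over the fraction of objects that
# fall the observed way; after n confirming observations, the posterior
# predictive probability that the next object also does is (n + 1) / (n + 2).

def prob_next_falls_same(n_observed: int) -> float:
    """Posterior predictive probability under a uniform prior."""
    return (n_observed + 1) / (n_observed + 2)

for n in (1, 10, 1000):
    print(f"after {n:>4} confirming observations: {prob_next_falls_same(n):.4f}")

# after    1 confirming observations: 0.6667
# after   10 confirming observations: 0.9167
# after 1000 confirming observations: 0.9990
```

One observation barely moves you past a coin flip; a vast store of observations is what turns the guess into a law.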
Eliezer is almost certainly wrong about what a hyper-rational AI could determine from a limited set of observations. It probably would notice the implications of Maxwell’s equations that require relativity to fully explain (something real physicists missed for a generation), because those implications follow directly from the mathematics. But actually producing the equations in the first place requires a great deal of data about electricity and magnetism.
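For reference, the implication in question (a standard textbook sketch, not anything from the post itself): in vacuum, Maxwell's equations combine into a wave equation whose propagation speed is fixed entirely by two measured constants, with no mention of the observer's frame.

```latex
% In vacuum, Maxwell's equations yield a wave equation for the electric field,
% with a propagation speed set by the electric and magnetic constants alone:
\[
  \nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \,\frac{\partial^2 \mathbf{E}}{\partial t^2},
  \qquad
  c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 3 \times 10^{8}~\mathrm{m/s}
\]
% Nothing here says which frame the speed c is measured in; that tension is
% what special relativity resolved. Deriving this from the equations is pure
% mathematics, but arriving at the equations took decades of experimental data.
```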
His projected super-intelligence would very quickly outleap its data and rush to all sorts of unsupportable inferences. If it confused those inferences with conclusions, it would fall into error faster than we could possibly correct it, and if it lacked the long, slow, tedious process of checking and re-checking data that science uses, it would be unlikely to ever correct those errors.