Is it necessary that we understand how intelligence works in order to know how to build it? This may almost be a philosophical question. A guy who builds race car engines almost certainly knows nothing about the periodic table of elements or the quantum effects behind electron orbitals that explain some of the mechanical properties of the metals used in those engines. Very likely he does not know much thermodynamics, and does not appreciate the interplay between energy and entropy required to make a heat engine produce mechanical power. Possibly he knows very little of the chemistry behind the design of the lubricants, or of the chemistry involved in storing energy in hydrocarbons and releasing it by oxidation.
But I’d sure rather drive a car with an engine he designed than one with an engine designed by a room full of chemists and physicists.
My point being, we may well develop a set of black boxes that can be linked together to produce AI systems for various tasks. The builders of these AIs will know quite a lot about how to put the boxes together and what to expect from certain configurations. But they may not know much about how the eagle-eye-vision core works or how the alpha-chimp-emotional core works, just how they go together and a sense of what to expect as they get hooked up.
Maybe we never have much sense of what goes on inside some of those black boxes, just as it is hard to picture what the universe looked like before the big bang or at the center of a black hole. Maybe not.
This is definitely an empirical question. I hope it will be settled “relatively soon” in the affirmative by brain emulation.
An empirical question? Most people I know define understanding something as being able to build it. It’s not a bad definition: it limits you to the subset of maps that have demonstrated utility for building things.
I don’t think it is an empirical question; empirically, I think it is a tautology.