If the brain is efficient, and it is, then you shouldn’t try to cargo-cult copy the brain, any more than we cargo-culted feathery wings to make airplanes.
The Wright brothers copied wings for lift and wing-warping for 3D control, both from birds. Only the forward propulsion was different.
make an engine based on a clear theory of which natural forces govern the phenomenon in question—here, thought.
We already have that—it’s called a computer. AGI is much more specific and anthropocentric because it is relative to our specific society/culture/economy. It requires predicting and modelling human minds—and the structure of efficient software that can predict a human mind is itself a human mind.
“the structure of efficient software that can predict a human mind is itself a human mind.”—I doubt that. Why do you think this is the case? I think there are already many examples where simple statistical models (e.g. linear regression) can do a better job of predicting some things about a human than an expert human can.
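(As an editorial aside, here is a minimal sketch of the kind of simple statistical model being referred to: ordinary least-squares regression predicting a human outcome from a few numeric features. The features, numbers, and outcome below are entirely invented for illustration; the point is only that such a predictor has nothing mind-like in its structure.)

```python
# Minimal sketch (illustrative only): ordinary least squares predicting a
# human outcome from a few numeric features. All numbers are invented.
import numpy as np

# Hypothetical per-person features: [hours_studied, prior_score, sleep_hours]
X = np.array([
    [2.0, 55.0, 6.0],
    [8.0, 70.0, 7.5],
    [5.0, 62.0, 8.0],
    [1.0, 40.0, 5.0],
    [6.0, 68.0, 6.5],
])
y = np.array([58.0, 85.0, 74.0, 41.0, 79.0])  # observed exam scores (invented)

# Append a constant column so the fit includes an intercept, then solve
# the least-squares problem directly.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

new_person = np.array([4.0, 60.0, 7.0, 1.0])  # new features plus intercept term
print("predicted score:", new_person @ coef)
```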
Also, although I don’t think there is “one true definition” of AGI, I think there is a meaningful one which is not particularly anthropocentric, see Chapter 1 of Shane Legg’s thesis: http://www.vetta.org/documents/Machine_Super_Intelligence.pdf.

“Intelligence measures an agent’s ability to achieve goals in a wide range of environments.”
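(For reference, the verbal definition quoted above is formalized in Legg’s work as a “universal intelligence” measure; roughly, and in my own notation, it sums an agent’s performance over all computable environments, weighted by each environment’s simplicity:)

$$\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V^{\pi}_{\mu}$$

(Here $E$ is the set of computable reward-generating environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V^{\pi}_{\mu}$ is the expected total reward that agent $\pi$ obtains in $\mu$.)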
So, arguably that should include environments with humans in them. But to succeed, an AI would not necessarily have to predict or model human minds; it could instead, e.g. kill all humans, and/or create safeguards that would prevent its own destruction by any existing technology.

What? No.
A computer is a bicycle for the mind. Logic is purified thought; computers are logic engines. General intelligence can be implemented by a computer, but it is much more anthrospecific.
With respect, no, it’s just thought with all the interesting bits cut away to leave something so stripped-down it’s completely deterministic.
computers are logic engines
Sorta-kinda. They’re also arithmetic engines, floating-point engines, recording engines. They can be made into probability engines, which is the beginnings of how you implement intelligence on a computer.
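(To make the “probability engine” point concrete, here is a minimal sketch, with toy numbers, of Bayesian updating built out of nothing but deterministic arithmetic: a uniform prior over possible coin biases is updated as flips are observed. The hypothesis space and data are invented for illustration.)

```python
# Minimal sketch of a "probability engine": Bayesian updating over a small
# hypothesis space, using only deterministic arithmetic. Toy values throughout.

# Hypotheses: possible biases of a coin (probability of heads).
hypotheses = [0.1, 0.3, 0.5, 0.7, 0.9]
prior = {h: 1 / len(hypotheses) for h in hypotheses}  # uniform prior

def update(belief, observation):
    """One Bayes step: observation is 'H' or 'T'."""
    posterior = {}
    for h, p in belief.items():
        likelihood = h if observation == "H" else 1 - h
        posterior[h] = p * likelihood
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

belief = prior
for flip in "HHTHHHTH":  # observed flip sequence (invented)
    belief = update(belief, flip)

for h, p in belief.items():
    print(f"P(bias={h}) = {p:.3f}")
```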