How should someone familiar with past work in AI use that knowledge to judge how much work is left to be done before reaching human-level AI, or human-level ability at a particular kind of task?
One way to apply such knowledge might be in differentiating between approaches that are indefinitely extendable or expandable and those that, despite impressive beginnings, tend to max out at a certain point. (Think of Joseph Weizenbaum’s ELIZA as an example of the second.)
Do you have any examples of approaches that are indefinitely extendable?
Whole Brain Emulation might be such an example, at least insofar as nothing in the approach itself seems to imply that it would be prone to get stuck in some local optimum before its ultimate goal (AGI) is achieved.
However, Whole Brain Emulation is likely to be much more resource-intensive than other approaches, and if so it will probably be no more than a transitional form of AGI.