Compare four situations: 1) Space flight in 1950, 2) Heavier-than-air flight in 1900, 3) Heavier-than-air flight in 1200, 4) Drexler nanotech today. Since the science was lacking in each case (no one had ever built the relevant technology), we have to approach the question indirectly. The best questions to ask are:
A) Does something like X already exist?
For 2) and 3) yes (birds flying using mechanical force), for 4) yes (enzymes doing similar roles). For 1), no.
B) If something like X exists, do we understand it?
For 2), yes; for 4), partially; for 3), not at all.
C) Do current technologies exist that can approach X without new conceptual ideas?
1) yes (rockets), 2) yes (lifting surfaces and models of gliders), 3) no, 4) probably (larger-scale nano work and medical manipulation of enzymes, proteins and retroviruses).
So I’d put Drexler’s nanotechnology in with flight in 1900 - the signs are that something like what he describes (though certainly not exactly as he describes it) will be coming in the foreseeable future.
(Unfortunately, I’d put AI down with flight in 1200 - intelligence exists, but we don’t understand it to any real extent, and current technologies are not approaching proper intelligence; they need new conceptual ideas)
Interesting. Do you still agree with Stuart_2007 on this?
Less. My personal opinion hasn’t changed much, but I know other people disagree, so my total opinion has moved quite a bit.
Do you still agree with Stuart_2012 on this?
Nope! Part of my own research has made me more optimistic about the possibilities of understanding and creating intelligence.
(Unfortunately, I’d put AI down with flight in 1200 - intelligence exists, but we don’t understand it to any real extent, and current technologies are not approaching proper intelligence; they need new conceptual ideas)
What’s your measuring stick here? “Artificial general intelligence” arguably doesn’t require the intelligent system to have emotions, or even organism-level goals. A software stack where you can define what a robotic work system must accomplish in some heuristic language, and which then autonomously generates the neural network architecture and models and chooses the robotic hardware platform, arguably meets the criterion of being “AGI”.
So a small number of AI engineers log in to some cloud-hosted system, define some new task (‘cooking: grilled cheese sandwiches’), do a person-month or so of labor, and now anyone worldwide can get their grilled cheese cooked by a robot that is a little better on average than a human (assuming they pay the license fees, which are priced to be cheaper than having a human do the task in that local labor market).
This seems near term to me. Do you disagree with its feasibility?
Would this be “not AGI” even though you can automate entire classes of real world tasks?
(The limit is that the task needs to be modelable, both for your success heuristic and for the inputs the robotic system will see. So you can automate almost any kind of physical manipulation task. But “teaching: second grade math” can’t be automated, because you can’t accurately model the facial expressions a small child will generate, or the state space of questions a child is likely to ask, or whether a given policy actually results in the child learning the math. At least, not with current simulators, though there is obviously significant progress being made.)
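The criterion in that parenthetical can be made concrete with a toy sketch. Everything here (the `TaskSpec` name, the fields, the example tasks) is hypothetical, invented purely to illustrate the two-part modelability test being proposed:

```python
from dataclasses import dataclass

@dataclass
class TaskSpec:
    """Hypothetical task definition for the cloud-hosted automation system."""
    name: str
    success_modelable: bool  # can we write a scoreable success heuristic?
    inputs_modelable: bool   # can a simulator cover the inputs the robot will see?

    def is_automatable(self) -> bool:
        # The claimed limit: a task qualifies only if BOTH the success
        # condition and the input distribution can be modeled.
        return self.success_modelable and self.inputs_modelable

grilled_cheese = TaskSpec("cooking: grilled cheese sandwiches", True, True)
second_grade_math = TaskSpec("teaching: second grade math", False, False)

print(grilled_cheese.is_automatable())     # True
print(second_grade_math.is_automatable())  # False
```

The point of the conjunction is that failing either half disqualifies the task: a physical manipulation task passes both checks, while the teaching example fails both under current simulators.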