I’m not even that optimistic about ‘success’. It actually feels a little weird how much my thinking on AI has changed. A few years ago a digital toddler might have seemed like a sensible approach.
What seems nonsensible about it now? (I presume you’re meaning in the sense of even succeeding at the AGI goal, not in the sense of succeeding in the Friendliness goal.)
If you think you understand what’s needed to make a human-level AI, then you shouldn’t need a five-step plan (at least not with these steps). If you expect to learn anything important from the toddler stage that will let you move towards the adult stage, then you already know you don’t understand the problem.
The insight here (from http://opencog.org/faq/) is that many parts need to work together.
Setting the “toddler” target makes it seem like you’re breaking the problem down into a more manageable chunk, but it’s actually at least as large as the original problem. The village idiot and Einstein are very close together on the spectrum that includes dogs, chimps, and superhuman AI, and I think a 4-year-old might be above the village idiot. If you can do that, just finish it.
4-year-old-level problem-solving ability at 4-year-old speeds is a severely anthropomorphic prediction of a design’s abilities. If you could do that, why not crank up the speed (at least) and get a thing that can do real work? It would perhaps still be conceptually simple by adult standards, but way ahead of current bots. You could almost certainly get through very complex problems if you could give instructions to an immortal toddler.
There is no such thing as a digital toddler that is not a recompile away from superhuman AI. I’m guessing that this plan stems from some kind of humility, or not wanting to fail. It feels easier to make what you think is a weak intellect. Given the existing virtual dog, it might feel like they’re making progress. It would certainly be possible to make a toddler that is increasingly convincing as a toddler.
This is wasting a bunch of effort on machine vision, NLP, and dancing robots, which I think do not feed into general intelligence. If you’re convinced friendliness isn’t important, and you need a hard sample problem for your AI, pick cancer, not cyberchat.
I figured the plan comes from how human intelligence seems to be built up in two stages: first the genetics-driven fetal brain formation, without any significant sensory input to drive things, then the long slog of picking up patterns from loads and loads of noisy sensory data from toddlerhood onward.
Working from this guess, anatomical differences between the brain of a toddler and the brain of an adult aren’t important here; the point is that after the “toddler” stage, a human-inspired AI design will learn the stuff it needs by processing sensory input, not necessarily through additional brain-structure engineering.
There may be a case to be made that such a human-inspired design is not the way to go, but you seem to be arguing against an AI that’s designed to stay at the level of a toddler, rather than one that proceeds to learn from what it observes and develops towards adult intelligence like real toddlers do.
Why should it take an AI design more than five minutes to process a significant enough amount of rich sensory data to push it to an adult level?
Upon review, I think you are correct: this plan does assume that all the major programming problems are solved by the toddler stage, and the rest is just education.
But the question is still: If you understand how to make intelligence, why make one that’s even nearly as low as us?
Why should it take an AI design more than five minutes to process a significant enough amount of rich sensory data to push it to an adult level?
I don’t know, I have never built an AGI. One possibility is that the design isn’t one that just needs to chew on static data to learn, and instead it needs to try to interact with existing intelligent entities with its first guesses at behavior and then use the feedback to refine its behavior, like human children do. This would slow down initial takeoff since the only intelligent entities initially available are slow live humans. Things could of course be sped up once there’s an ecosystem of adult-level intelligent AIs that could be run as fast as the computers are able.
As for the five minute figure, let’s say humans need about 0.2e9 seconds of interacting with their surroundings to grow up to a semi-adult level. Let’s go with Ray Kurzweil’s estimate of 1e16 computations per second needed for a human-brain-inspired AI. Five minutes would then require hardware on the order of 1e22 cps. Moore’s law isn’t quite there yet; I think the fastest current computers are somewhere in the 1e15 range. So you’d have to be able to shave quite a few orders of magnitude off the back-of-the-envelope estimate right off the cuff. I could also do back-of-the-envelope nastiness with human sensory bandwidth times 0.2 gigaseconds versus how many bits you can push to a CPU in a second, but you probably get the idea.
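Here’s a minimal sketch of that back-of-the-envelope arithmetic, using only the rough figures quoted above (the 0.2-gigasecond childhood, Kurzweil’s 1e16 cps, and the ~1e15 cps guess for current hardware are all order-of-magnitude assumptions, not measurements):

```python
import math

# All figures are the rough order-of-magnitude guesses from the comment above.
human_learning_seconds = 0.2e9      # ~0.2 gigaseconds of childhood interaction
brain_cps = 1e16                    # Kurzweil-style estimate for a brain-inspired AI
five_minutes = 5 * 60               # the hypothetical five-minute "childhood"
current_fastest_cps = 1e15          # rough guess for today's fastest machines

# Total computation a human-equivalent learner burns through while growing up,
# then the hardware speed needed to compress that into five minutes.
total_ops = human_learning_seconds * brain_cps   # ~2e24 operations
required_cps = total_ops / five_minutes          # ~6.7e21, i.e. on the order of 1e22

gap = math.log10(required_cps / current_fastest_cps)
print(f"required ~{required_cps:.1e} cps, shortfall ~{gap:.0f} orders of magnitude")
```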
So it seems to come down, at least in part, to how simple or messy the first discovered AI architecture ends up being. A hard takeoff from the first genuine AGI architecture to superhuman intelligence in days requires an architecture something like ten orders of magnitude more efficient than the human one. That needs to be not only possible, but discoverable at the stage of building the first prototypes of AGI.
The required minimum complexity for an AGI is a pretty big unknown right now, so I’m going with the human-based estimates for the optimistic predictions where the AGI won’t kill everyone. Hard takeoff is of course worth considering for the worst-case predictions.
First, I want to say I think that was a really good response.
One possibility is that the design isn’t one that just needs to chew on static data to learn, and instead it needs to try to interact with existing intelligent entities with its first guesses at behavior and then use the feedback to refine its behavior, like human children do.
I think that this is somehow muddling the notions of intelligence and learning-the-problem, but I don’t have it pinned down at the moment. Feeding training data to an AI should only be needed if the programmers are ignorant of the relevant patterns which will be produced in the mature AI. If the adult AI is actually smarter than the best AI the programmers could put out (the toddler), then something changed, and that change would correspond to a novel AI design principle. But all parties might still be ignorant of that principle if, for example, it occurred one day when a teacher stumbled onto the right way to explain a concept to the toddler, and it wasn’t obvious how that tied in to the new, more efficient data structure in the toddler’s mind.
Because if you could spell out the things that the AI child would learn that would turn it into an AI scientist, then you could just create those structures directly.
So this isn’t “let’s figure out AI as we go along,” as I originally thought, but “let’s automate the process of figuring out AI,” which is more dangerous and probably more likely to succeed. So I’m updating in the direction of Vladimir_Nesov’s position, but this strategy is still dependent on not knowing what you’re doing.
I could get four orders of magnitude back by retreating to five weeks instead of five minutes, and that’s still a hard takeoff. I also think it would be relatively easy to get at least an order of magnitude speedup over human learning just by tweaking what gets remembered and what gets forgotten.
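A quick check of that figure, in the same back-of-the-envelope style (nothing here beyond the ratio of the two time spans):

```python
five_weeks_s = 5 * 7 * 24 * 3600      # about 3.0e6 seconds
five_minutes_s = 5 * 60               # 300 seconds
print(five_weeks_s / five_minutes_s)  # ~1.0e4, i.e. roughly four orders of magnitude
```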
That’s probably the real danger here: you learn more about what’s really important to intelligence, and then you make your program a little better, or it just learns and gets a little better, and you celebrate. You don’t at that point suddenly realize that you built an AI without knowing how it works. So you gradually go from a design that needs a small cluster just to act like a dog to a design that’s twice as smart as you running on your desktop’s idle cycles, but you never panic and reevaluate friendliness.
The same kind of friendliness problem could happen with whole brain emulations.
Hmm, the appeal of getting eaten first...