Why should it take an AI design more than five minutes to process a significant enough amount of rich sensory data to push it to an adult level?
Upon review, I think you are correct: this plan does assume that all the major programming problems are solved by the toddler stage, and that the rest is just education.
But the question is still: If you understand how to make intelligence, why make one that’s even nearly as low as us?
Why should it take an AI design more than five minutes to process a significant enough amount of rich sensory data to push it to an adult level?
I don’t know, I have never built an AGI. One possibility is that the design isn’t one that just needs to chew on static data to learn, and instead it needs to try to interact with existing intelligent entities with its first guesses at behavior and then use the feedback to refine its behavior, like human children do. This would slow down initial takeoff since the only intelligent entities initially available are slow live humans. Things could of course be sped up once there’s an ecosystem of adult-level intelligent AIs that could be run as fast as the computers are able.
As for the five-minute figure, let’s say humans need about 0.2e9 seconds of interacting with their surroundings to grow up to a semi-adult level. Let’s go with Ray Kurzweil’s estimate of 1e16 computations per second needed for a human-brain-inspired AI. Five minutes would then require hardware on the order of 1e22 cps. Moore’s law isn’t quite there yet; I think the current fastest computers are somewhere in the 1e15 range, so you’d have to shave quite a few orders of magnitude off the back-of-the-envelope estimate right off the cuff. I could also do similar back-of-the-envelope nastiness with the human sensory bandwidth times 0.2 gigaseconds and how many bits you can push to a CPU in a second, but you probably get the idea.
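If it helps, here’s the same arithmetic as a quick script. The 0.2e9 seconds, 1e16 cps, and 1e15 cps figures are just the rough assumptions above, so the output is order-of-magnitude only:

```python
import math

# Back-of-envelope check of the figures above. All inputs are rough
# assumptions (Kurzweil-style estimates), not measured values.
human_learning_seconds = 0.2e9   # ~6 years of interacting with the world
brain_cps = 1e16                 # assumed computations/second for a brain-like AI
total_computation = human_learning_seconds * brain_cps   # ~2e24 computations

five_minutes = 5 * 60
required_cps = total_computation / five_minutes           # ~7e21, i.e. order 1e22

current_fastest_cps = 1e15       # rough figure for today's fastest machines
gap = math.log10(required_cps / current_fastest_cps)

print(f"total computation to semi-adult level: {total_computation:.1e}")
print(f"cps needed for a five-minute run-up:   {required_cps:.1e}")
print(f"orders of magnitude beyond current hardware: {gap:.1f}")
```

Running it gives roughly 7e21 cps needed, i.e. close to seven orders of magnitude beyond the 1e15 figure.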
So it seems to come down, at least in part, to how simple or messy the basic discovered AI architecture ends up being. A hard takeoff from the first genuine AGI architecture to superhuman intelligence in days requires an architecture something like ten orders of magnitude more efficient than humans. That architecture needs to be not only possible, but discoverable at the stage of building the first AGI prototypes.
The required minimum complexity for an AGI is a pretty big unknown right now, so I’m going with the human-based estimates for the optimistic predictions where the AGI won’t kill everyone. Hard takeoff is of course worth considering for the worst-case scenario predictions.
First, I want to say I think that was a really good response.
One possibility is that the design isn’t one that just needs to chew on static data to learn, and instead it needs to try to interact with existing intelligent entities with its first guesses at behavior and then use the feedback to refine its behavior, like human children do.
I think that this is somehow muddling the notions of intelligence and learning-the-problem, but I don’t have it pinned down at the moment. Feeding training data to an AI should only be needed if the programmers are ignorant of the relevant patterns which will be produced in the mature AI. If the adult AI is actually smarter than the best AI the programmers could put out (the toddler), then something changed, and that something would correspond to a novel AI design principle. But all parties might still be ignorant of that principle, if for example it emerged one day when a teacher stumbled onto the right way to explain a concept to the toddler, but it wasn’t obvious how that tied into the new, more efficient data structure in the toddler’s mind.
Because if you could spell out the things that the AI child would learn that would turn it into an AI scientist, then you could just create those structures.
So this isn’t “let’s figure out AI as we go along,” as I originally thought, but “let’s automate the process of figuring out AI,” which is more dangerous and probably more likely to succeed. So I’m updating in the direction of Vladimir_Nesov’s position, but this strategy is still dependent on not knowing what you’re doing.
I could get four orders of magnitude back by retreating to five weeks instead of five minutes, and that’s still a hard takeoff. I also think it would be relatively easy to get at least an order of magnitude speedup over human learning just by tweaking what gets remembered and what gets forgotten.
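For the record, that four-orders figure is just the ratio of the two time windows; here’s the one-liner version of the check, with the same caveats as before:

```python
import math

# Going from a five-minute run-up to a five-week one relaxes the
# hardware requirement by the ratio of the two durations.
five_minutes = 5 * 60              # 300 seconds
five_weeks = 5 * 7 * 24 * 3600     # ~3.0e6 seconds

print(f"orders of magnitude saved: {math.log10(five_weeks / five_minutes):.1f}")  # ~4.0
```

So the 1e22 cps requirement drops to something like 1e18, which is still well past current hardware but a much smaller leap.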
That’s probably the real danger here: you learn more about what’s really important to intelligence, and then you make your program a little better, or it just learns and gets a little better, and you celebrate. You don’t at that point suddenly realize that you’ve built an AI without knowing how it works. So you gradually go from a design that needs a small cluster just to act like a dog to a design that’s twice as smart as you are running on your desktop’s idle cycles, but you never panic and reevaluate friendliness.
The same kind of friendliness problem could happen with whole brain emulations.