First, I want to say I think that was a really good response.
One possibility is that the design isn’t one that just needs to chew on static data to learn; instead it needs to interact with existing intelligent entities using its first guesses at behavior and then use their feedback to refine that behavior, like human children do.
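A rough sketch of what I mean by that loop, just to make it concrete (all of the names here are hypothetical, not any particular system’s interface):

```python
# Rough sketch of feedback-driven refinement: the agent acts on its current
# best guess, an existing intelligent entity (e.g. a human teacher) reacts,
# and the agent refines its behavior from that reaction -- the way a child
# learns, rather than chewing on a static dataset. All names are made up.

def train_interactively(agent, teacher, rounds=1000):
    for _ in range(rounds):
        situation = teacher.present_situation()       # teacher poses a task
        behavior = agent.propose_behavior(situation)  # agent's current guess
        feedback = teacher.evaluate(behavior)         # approval, correction, etc.
        agent.update(situation, behavior, feedback)   # refine from the feedback
    return agent
```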
I think that this is somehow muddling the notions of intelligence and learning-the-problem, but I don’t have it pinned down at the moment. Feeding training data to an AI should only be needed if the programmers are ignorant of the relevant patterns that will be produced in the mature AI. If the adult AI is actually smarter than the best AI the programmers could put out (the toddler), then something changed, and that change would correspond to a novel AI design principle. But all parties might still be ignorant of that principle, if, for example, it emerged one day when a teacher stumbled onto the right way to explain a concept to the toddler, but it wasn’t obvious how that tied in to the new, more efficient data structure in the toddler’s mind.
Because if you knew the things that the AI child would learn that would turn it into an AI scientist, then you could just create those structures directly.
So this isn’t “let’s figure out AI as we go along” as I originally thought, but “let’s automate the process of figuring out AI,” which is more dangerous and probably more likely to succeed. So I’m updating in the direction of Vladimir_Nesov’s position, but this strategy is still dependent on not knowing what you’re doing.
I could get four orders of magnitude by retreating to five weeks instead of five minutes, and that’s still a hard takeoff. I also think it would be relatively easy to get at least an order of magnitude of speedup over human learning just by tweaking what gets remembered and what gets forgotten.
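Just to show the arithmetic behind that figure (nothing here beyond the calculation):

```python
# The ratio between learning in five weeks and learning in five minutes:
minutes_per_week = 7 * 24 * 60       # 10,080 minutes in a week
ratio = (5 * minutes_per_week) / 5   # 50,400 / 5 = 10,080
print(ratio)                         # ~10^4, i.e. about four orders of magnitude
```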
That’s probably the real danger here: you learn more about what’s really important to intelligence, and then you make your program a little better, or it just learns and gets a little better, and you celebrate. You don’t at that point suddenly realize that you’ve built an AI without knowing how it works. So you gradually go from a design that needs a small cluster just to act like a dog to a design that’s twice as smart as you running on your desktop’s idle cycles, but you never panic and reevaluate friendliness.
The same kind of friendliness problem could happen with whole brain emulations.