The version of human mimicry I’m excited about is having AI rapidly babble ideas as simulated models of current-day researchers, self-critique them via something like chain-of-thought prompting, and pass the promising ones on to humans. I agree that getting closer to alignment (i.e. having more relevant training data) helps, but we don’t necessarily need to be close in sequential time or ask for something far out of distribution: being able to run thousands++ of today’s top researchers in parallel would already be a massive boost.
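For concreteness, the loop I have in mind is roughly the sketch below. This is just an illustration, not a real system: `generate` is a hypothetical stand-in for whatever LLM completion call you have, and the prompts, verdict format, and idea count are placeholder assumptions.

```python
from typing import Callable, List, Tuple

def babble_and_critique(
    generate: Callable[[str], str],  # hypothetical LLM call: prompt -> completion
    topic: str,
    n_ideas: int = 1000,
) -> List[Tuple[str, str]]:
    """Simulate many 'researchers' proposing ideas, self-critique each one
    via chain-of-thought, and keep only those the critique marks promising."""
    promising = []
    for i in range(n_ideas):
        # Babble: sample an idea as if from a present-day researcher.
        idea = generate(
            f"You are researcher #{i} working on {topic}. "
            "Propose one concrete research idea in 2-3 sentences."
        )
        # Self-critique: chain-of-thought review of the idea's merits.
        critique = generate(
            "Think step by step about the idea below: its novelty, "
            "feasibility, and likely failure modes. End with VERDICT: "
            f"PROMISING or VERDICT: WEAK.\n\nIdea: {idea}"
        )
        if "VERDICT: PROMISING" in critique:
            promising.append((idea, critique))
    # Whatever survives the self-critique filter goes to humans for review.
    return promising
```

The point isn't the specific prompts, it's that the whole thing stays close to imitating current researchers rather than asking for anything far out of distribution.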