I agree that many parts of AI research are already automated, and by the time we have broadly human-level AI, many more will be. I would not be surprised if the great majority of tasks in current AI research are automated long before we have broadly human-level AI.
I wouldn’t normally describe this as “low-level self-improvement,” but that seems like a semantic issue.
I am skeptical of the Eurisko example (and of claims about Eurisko in general) but I don’t know if it’s relevant to this disagreement.