If you teach an AI to fish, it might optimize its performance within a narrow scope. Teach it to teach itself to fish, and you’ve created a recursively self-improving AGI that is unaligned with human values by default and will most likely end up killing us all.
-- Eliezer Yudkowsky, probably