There’s a story about trained dolphins. The trainer gave them fish for doing tricks, which worked great. Then the trainer switched to giving fish only for novel tricks. The dolphins, trained under the old method, ran through all the tricks they knew, got frustrated for a while, then displayed a whole bunch of new tricks all at once.
Among animals, RL can teach specific skills, but it also tends to reduce creativity in novel contexts. You can train creative problem solving, but in most cases, when you want control over outcomes, that’s not what you do. Training for creativity is harder and less predictable, and it requires more understanding and effort from the trainer.
Among humans, there is often a point where the more capable find supposedly simple questions harder, because they can see all the places where the question assumes a framework that is not as ironclad as the asker thinks. Sometimes this is useful. More often it is a pain for both parties. Frequently the result is that the answerer learns to suppress their intelligence instead of using it.
In other words: this post seems likely to be about what this not-an-AI-expert should expect to happen.
I anticipate this will lead to some interesting phrasing choices around the multiple meanings of “conception” as the discussions of what and how and whether AIs ‘really’ think continue to evolve.