but Y has solved the (interesting) problem of understanding how people write novels.
I think the whole point of AI research is to do something, not to find out how humans do it. You personally might find psychology (how humans work) far more interesting than AI research (how to do things traditionally classified as ‘intelligence’, regardless of the actual method), but please don’t generalize that notion and slap the label “uninteresting” onto problems.
What’s happened in AI research, by this account, is that Y (which is “actually AI”) is too difficult, so people successfully solve problems the way program X (which is “not AI”) does; but don’t let this confuse you into thinking that AI has been successful.
When mysterious things cease to be mysterious, they tend to resemble the way X works.
Consider the advent of powered flight. By that line of argument one could write, “We don’t actually understand how flight works; there are merely hacks that allow machines to fly without understanding how birds fly.” Or we could compare cars with legs and conclude that transportation in general is just an ugly, uninteresting hack.
Considering how often things like Conway’s Game of Life, which bears no resemblance to our universe, are run, I’d put the probability much lower.
Whenever you run anything that simulates a Turing-complete system (OK, a finite state machine is actually enough, given the finite amount of information storage even in our universe), there is a chance for practically anything to happen.
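For what it’s worth, the Game of Life rule set (which is famously Turing complete) can be simulated in a few lines; a minimal sketch, where the function name `step` and the sparse live-cell representation are my own illustration, not anything from the discussion above:

```python
from collections import Counter

def step(live):
    """Advance a set of (x, y) live cells by one Game of Life generation."""
    # Count how many live neighbours each candidate cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live neighbours,
    # or exactly 2 live neighbours and was already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "blinker" oscillates between a horizontal and a vertical bar of 3 cells.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))        # the vertical bar {(1, 0), (1, 1), (1, 2)}
print(step(step(blinker)))  # back to the horizontal bar
```

Anything at all can emerge inside such a simulation, gliders, logic gates, even universal computers, which is exactly the point about simulating Turing-complete systems.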