Great post, but I would suggest avoiding cliché titles such as ‘The Unreasonable Effectiveness of X’.
It provides a lot less information about what the post contains than a more carefully crafted title would, for not much gain (maybe people are more likely to read TUEoX posts?).
Basically, I always feel a discomfort when people use ‘TUEoX’ as a title but never provide a strong argument for why the effectiveness of X is actually unreasonable (i.e. beyond the limits of acceptability and fairness).
Sure, it’s effective, and it certainly was unexpected, but is it fair to say it’s unreasonable?
I confess that I have a weakness for slightly fanciful titles. In my defence, though, I do actually think that “unreasonable” is a reasonable way of describing the success of neural networks. The argument in the original paper was something like “it could have been the case that math just wasn’t that helpful in describing the universe, but actually it works really well on most things we try it on, and we don’t have any principled explanation for why that is so”. Similarly, it could have been the case that feedforward neural networks just weren’t very good at learning useful functions, but actually they work really well on most things we try them on, and we don’t have any principled explanation for why that is so.