I don’t think there’s anything special about the tails.
Take a sheet of paper, and cover up the left 9⁄10 of the high-correlation graph. That leaves the right tail of the X variable. The remaining datapoints have a much less linear shape.
But: take two sheets of paper, and cover up (say) the left 4⁄10 and the right 5⁄10. You get the same shape left over! It has nothing to do with the tail; it just has to do with compressing the range of X values.
The correlation, roughly speaking, tells you what fraction of the variation is not caused by random error (strictly, that fraction is the squared correlation). When you compress the X, you compress the “real” variation, but leave the “error” variation as is. So the correlation drops.
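You can check this with a quick simulation. The sketch below uses a toy model I'm assuming (y equals x plus Gaussian noise, not anything from the original graph), and shows that restricting X to a narrow slice knocks the correlation down by the same amount whether the slice is the right tail or the middle:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy high-correlation dataset: y is x plus independent Gaussian noise.
n = 10_000
x = rng.uniform(0, 10, n)
y = x + rng.normal(0, 1, n)  # "real" variation from x, "error" variation from noise

def corr(mask):
    """Correlation of x and y over the data points selected by mask."""
    return np.corrcoef(x[mask], y[mask])[0, 1]

print(f"full range:               r = {corr(np.ones(n, dtype=bool)):.3f}")  # ~0.94
print(f"right tail (x > 9):       r = {corr(x > 9):.3f}")                   # ~0.28
print(f"middle slice (4 < x < 5): r = {corr((x > 4) & (x < 5)):.3f}")       # ~0.28
```

Either restriction shrinks the “real” variance of x from 100⁄12 to 1⁄12 while the noise variance stays at 1, so the correlation falls by the same amount in both cases.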
Suppose 100 chickens are produced. And suppose 100% of the population becomes vegetarian. The number of chickens produced will drop to zero.
100 fewer chickens demanded; 100 fewer produced. So, on average over that whole range, a marginal drop in chicken demand drops production by 1.
Which raises the question: what is the pattern from 100 down to 0?
Suppose there’s suddenly only one non-vegetarian left. At today’s price, he would demand 1 chicken. Clearly, prices will have to rise if only 1 chicken is produced instead of 100 (the economies of scale of mass production are gone). He might, then, demand only half a chicken at the new, higher price.
That means an instant drop in demand from 100 chickens to 1 leads to an eventual drop in production of 99.5 chickens (from 100 down to the half chicken he still buys). That’s 99.5 fewer produced when 99 fewer are demanded.
Also, an instant drop in demand from 0.5 to 0 leads to a drop in production from 0.5 to zero.
If the function is monotonic, a drop in demand of X units must lead to an eventual drop in production of X + f(X) units, where f(X) > 0. That’s the only way the math works out.
Production drops by X chickens to match the drop in quantity demanded at the current price. The extra drop of f(X) reflects the fact that even fewer chickens are demanded at the new, higher price that must result.
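A toy equilibrium model makes the arithmetic concrete. The curves below are invented for illustration (a hypothetical linear per-consumer demand, and a supply price that falls with industry scale), calibrated so the equilibrium reproduces the numbers above:

```python
# Hypothetical curves, chosen only to reproduce the numbers in the story:
# each non-vegetarian demands d(p) = 2 - p chickens, and the supply price
# falls with scale (economies of mass production): s(q) = 1.5 - 0.005*q.
def equilibrium_production(n):
    """Solve n*(2 - p) = q together with p = 1.5 - 0.005*q for q."""
    # Substituting: q = n*(2 - 1.5 + 0.005*q)  =>  q*(1 - 0.005*n) = 0.5*n
    return 0.5 * n / (1 - 0.005 * n)

q_all = equilibrium_production(100)  # 100.0: price 1.0, each buyer takes 1 chicken
q_one = equilibrium_production(1)    # ~0.5025: the lone buyer takes ~half a chicken
print(f"production, 100 buyers: {q_all:.1f}")
print(f"production, 1 buyer:    {q_one:.4f}")
print(f"demand drop at old price: 99 | eventual production drop: {q_all - q_one:.2f}")
```

With these made-up curves, f(99) ≈ 0.5: the 99-chicken demand drop pushes the price up, which shaves roughly another half chicken off what the last buyer takes.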