I agree that a lower-level model doesn’t automatically mean a more relevant one. I also think reductionism is a tool that, like any other tool, is relevant in some contexts and not in others.
Wrong framework for the task
That heading may seem a bit rude, and I apologize. But it points at what I see as the actual problem here: you started a human conversation, restricted it from the outset, and then generalized your observations while failing to grasp anything about liking a beach. It looks like rationalization to soften your failure to understand another human.
I assume you are aware of what is called emotional intelligence. Every piece of experience we have can start a plethora of processes in our minds, and many of them can be understood not through reduction and plain axiomatized logic, but through emotionally attuning to your interlocutor, using intuition and “fuzzy thinking”. Then you would be able to understand what the beach really means for your mom through metaphors, stories, and reflection on them. And you could make some accurate predictions about your mom! From the start, this was a matter of communication and shared experience, right? Humanity has mastered that art, and it is obviously not restricted to a reductionist way of thinking.
To me, it doesn’t seem very rational to dismiss a wide spectrum of human knowledge from different areas of life just because “you want to believe”. There are thousands of other points of view that are more relevant to the phenomenon of liking a beach. How can you infer that reductionism can still be applied to replicate “the beach experience” for your mom without ever being exposed to a point of view outside the reductionist worldview? There is plenty of knowledge at different levels of system description, and you are not even trying to connect your ideas with it.
About absolute reductionism
What we do know is that there is no computationally feasible way to simulate large ensembles of elementary particles. If it one day becomes clear that we cannot beat the combinatorial explosion by growing computational power (and that is exactly how it looks), how practical will this fully “theoretically working reductionism” be?
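To make the combinatorial explosion concrete, here is a minimal back-of-the-envelope sketch (my own illustration, not anything from the original discussion; the ~10^80 figure for atoms in the observable universe is a rough, commonly cited estimate). The state space of n interacting two-state particles grows as 2^n, so an exact simulation runs out of any conceivable memory long before n reaches even a few hundred:

```python
# Back-of-the-envelope illustration of the combinatorial explosion:
# memory needed to store the full state of n two-state particles,
# at one complex amplitude (16 bytes) per basis state.

BYTES_PER_AMPLITUDE = 16       # complex128: two 8-byte floats
ATOMS_IN_UNIVERSE = 10**80     # rough, commonly cited estimate

for n in (10, 50, 100, 300):
    states = 2**n                            # size of the state space
    mem_bytes = states * BYTES_PER_AMPLITUDE
    print(f"n={n:3d}: {states:.2e} basis states, ~{mem_bytes:.2e} bytes")
    if states > ATOMS_IN_UNIVERSE:
        print("      -> more basis states than atoms in the observable universe")
```

Already at n = 300 the count of basis states exceeds the number of atoms in the observable universe, which is the sense in which a “theoretically working” reductionism stops being practical.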
Ultimately, your reasoning only looks solid because it is itself reduced. For example, you narrowed your options for understanding “why mom likes the beach” from many accessible approaches (emotional intelligence, deep conversation, reflection, studying psychology) down to just the reductionist one, and then concluded that this was a matter of wrongly applying reductionism, and NOT a matter of lacking knowledge about communication.
Okay, now I feel like I understand your main point better.
I just have a different point of view on the example. My point is that the example itself seems a bit artificial.
The human brain has not yet been conquered by reductionist modeling. So we don’t yet know whether consciousness (we are talking about the feeling of joy, which means the brain and consciousness) can be reduced to systems and parts, at any level, without losing its crucial properties.
Bearing that in mind, the example seems a bit meaningless in both cases:
- the case where you reduced consciousness by modeling at an adequate level
- the case where you reduced consciousness by modeling at too low a level
I think it’s too theoretical to say what you actually did wrong when you don’t know how to do the same thing right (that is, how you would apply reductionism in the very same conversation, but correctly).
With that said, it looks to me like you could simply drop the real-life example and all its context as poor evidence, leaving the statement about low-level modeling to stand on its own.