It seems that if external reality is meaningless, then it’s difficult to ground any form of morality that says actions are good or bad insofar as they have particular effects on external reality.
That is an interesting point. More or less, I agree with this sentence in your first post:
As far as I can tell, we can do science just as well without assuming that there’s a real territory out there somewhere.
in the sense that one can do science by speaking only about one’s own observations, without making a distinction between what is observed and what “really exists”.
On the other hand, when I observe that other nervous systems are similar to my own nervous system, I infer that other people have subjective experiences similar to mine. How does this fit in your framework? (Might be irrelevant, sorry if I misunderstood)
>On the other hand, when I observe that other nervous systems are similar to my own nervous system, I infer that other people have subjective experiences similar to mine.
That’s just part of my model. To the extent that empathy of this nature is useful for predicting what other people will do, that’s a useful thing to have in a model. But to then say “other people have subjective experiences somewhere ‘out there’ in external reality” seems meaningless—you’re just asserting your model is “real”, which is a category error in my view.
For my own argument, see https://www.lesswrong.com/posts/zm3Wgqfyf6E4tTkcG/the-short-case-for-verificationism and the post it links back to.
“The model is the territory” is a category error, but “the model accurately represents the territory” is not.
What does it mean for a model to “represent” a territory?
You’re assuming that the words you are using can represent ideas in your head.
Not at all, to the extent that the head is a territory.
Tell me what you are doing, then.
I’m communicating, which I don’t have a fully general account of, but which is something I can do and which has relatively predictable effects on my experiences.
Your objection to representation was that there is no account of it.
Yes, it appears meaningless; I and others have tried hard to figure out a possible account of it.
I haven’t tried to get a fully general account of communication, but I’m aware there’s been plenty of philosophical work, and I can see partial accounts that work well enough.
You’re implicitly assuming it works by using it. So why can’t I assume that representation works, somehow?
I know what successful communication looks like.
What does successful representation look like?
Communication uses symbols, which are representations.