So, I think the crux of why I don’t really agree with your general gist, and why I’m guessing a lot of people don’t, is that we see meaningfulness as something bigger than just whether or not something is a fact (a statement that has a coherent truth value). To most I think, something is meaningful if it somehow is grounded in external reality, not whether or not it can be assessed to be true, and many things are meaningful to people that we can’t assess the truthiness of. You seem to already agree that there are many things of which we cannot speak in terms of facts, yet these non-facts are not meaningless to people. Just for example, you perhaps can’t speak to whether or not it is true that a person loves their mother, but that love is likely quite meaningful to them. There’s a kind of deflation of what you mean by “meaning” here that, to me, makes this position kind of boring and useless, since most of the interesting stuff we need to deal with is now outside the realm of facts and “meaning” by your model.
> To most I think, something is meaningful if it somehow is grounded in external reality
This is of course circular when trying to justify the meaningfulness of this “external reality” concept.
>we can’t assess the truthiness of
This is one way to state verificationism, but I’m not *assuming* this, I’m arguing for it.
>Just for example, you perhaps can’t speak to whether or not it is true that a person loves their mother, but that love is likely quite meaningful to them.
It might be meaningless to me, and meaningful to them.
>most of the interesting stuff we need to deal with is now outside the realm of facts and “meaning” by your model.
What interesting stuff do we need to deal with that doesn’t affect our decisions or our probability distributions over our experiences? My model, by construction, affects neither of those. (It does hint at some utility functions being somewhat incoherent, but I’m not necessarily standing by that; I prefer to let utility functions range broadly.)