I think a claim is meaningful if it’s possible to be true and possible to be false. Of course this puts a lot of work on “possible”.
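To make that explicit (just a sketch; read $\Diamond$ as whatever notion of possibility ends up doing the work here):

$$\text{Meaningful}(P) \;\iff\; \Diamond P \,\wedge\, \Diamond \lnot P$$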
That’s not the standard verificationist claim, which is more that things are meaningful if they can be verified as true or false.
No, it would be circular if I defined meaning to already include the verificationist claim.
Rather, I define meaning in other terms and then argue that this implies the verificationist claim.
There are gaps in the argument, then.
My premises imply the conclusion. You might not like some of the premises, perhaps.
I don’t think they do. But that should not be in dispute. The point of a logical argument is to achieve complete clarity about the premises and the way they imply the conclusion.
I added two lemmas to clarify. I guess you could quibble with lemma 2; I think it does follow if we assume that we know, or at least can know, premise 3, but that seems plausible if you’re willing to accept it as a premise at all.
I could break it up into more steps if it’s not entirely clear that the premises imply the conclusion.
So, I think the crux of why I don’t really agree with your general gist, and why I’m guessing a lot of people don’t, is that we see meaningfulness as something bigger than just whether or not something is a fact (a statement that has a coherent truth value). To most I think, something is meaningful if it somehow is grounded in external reality, not whether or not it can be assessed to be true, and many things are meaningful to people that we can’t assess the truthiness of. You seem to already agree that there are many things about which we cannot speak of facts, yet these non-facts are not meaningless to people. Just for example, you perhaps can’t speak to whether or not it is true that a person loves their mother, but that love is likely quite meaningful to them. There’s a kind of deflation of what you mean by “meaning” here that, to me, makes this position kind of boring and useless, since most of the interesting stuff we need to deal with is now outside the realm of facts and “meaning” by your model.
> To most I think, something is meaningful if it somehow is grounded in external reality
This is of course circular when trying to justify the meaningfulness of this “external reality” concept.
>we can’t assess the truthiness of
This is one way to state verificationism, but I’m not *assuming* this; I’m arguing for it.
>Just for example, you perhaps can’t speak to whether or not it is true that a person loves their mother, but that love is likely quite meaningful to them.
It might be meaningless to me, and meaningful to them.
>most of the interesting stuff we need to deal with is now outside the realm of facts and “meaning” by your model.
What interesting stuff do we need to deal with that doesn’t affect our decisions or our probability distributions over our experiences? My model doesn’t affect either of those, by construction. (It does hint at some utility functions being somewhat incoherent, but I’m not necessarily standing by that; I prefer to let utility functions range broadly.)
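To be concrete about “affect our probability distributions over our experiences” (a rough gloss, not a formal part of the argument): a claim $P$ makes that kind of difference only if there is some possible experience $e$ with

$$\Pr(e \mid P) \;\neq\; \Pr(e \mid \lnot P)$$

and similarly for decisions, it would have to change what some decision problem recommends.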
I think for your purposes you can just define “meaningless” as “neither true nor false” without detouring into possibility.
For this argument, yes, but it might not be sufficient when extending it beyond just uncertainty in verificationism.