> To most I think, something is meaningful if it somehow is grounded in external reality
This is of course circular when used to justify the meaningfulness of the “external reality” concept itself.
>we can’t assess the truthiness of
This is one way to state verificationism, but I’m not *assuming* it; I’m arguing for it.
>Just for example, you perhaps can’t speak to whether or not it is true that a person loves their mother, but that love is likely quite meaningful to them.
It might be meaningless to me and meaningful to them.
>most of the interesting stuff we need to deal with is now outside the realm of facts and “meaning” by your model.
What interesting stuff do we need to deal with that doesn’t affect our decisions or our probability distributions over our experiences? By construction, my model affects neither of those. (It does hint at some utility functions being somewhat incoherent, but I’m not necessarily standing by that; I prefer to let utility functions range broadly.)