Taking you more strictly at your word than you mean it, the program could just return true for the majority belief on empirically non-falsifiable questions. Or it could just return false for all beliefs, including your belief that that is illogical. So with the right programs, pretty much arbitrary beliefs pass as meaningful.
You actually want it to depend on the state of the universe in the right way, but that’s just another way to say it should depend on whether the belief is true.
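For instance, something like this toy Python sketch (the evaluator names and their signatures are made up for illustration): neither program consults the world at all, yet each one happily classifies every belief.

    # Two degenerate "truth evaluators": both ignore the state of the
    # universe entirely, so matching a belief to either one tells you
    # nothing about what the belief means.

    def majority_verdict(belief, world):
        # Returns true for whatever the majority believes, even on
        # empirically non-falsifiable questions.
        return True

    def deny_everything(belief, world):
        # Returns false for every belief, including the belief that
        # this rule is illogical.
        return False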
That’s a problem with all theories of truth, though. “Elaine is a post-utopian author” is trivially true if you interpret “post-utopian” to mean “whatever professors say is post-utopian”, “a thing that is always true of all authors”, or “is made out of mass”.
Doing this with programs rather than philosophy doesn’t make it any worse.
What I’m suggesting is that there is a correspondence between meaningful statements and universal computer programs. Obviously this theory doesn’t tell you how to match the right statement to the right computer program. If you match the statement “snow is white” to a computer program that is a bunch of random characters, the program will return no result and you’ll conclude that “snow is white” is meaningless. But that’s just the same problem as the philosopher who refuses to accept any definition of “snow”, or who claims that snow is obviously black because “snow” means that liquid fossil fuel you drill for and then turn into gasoline.
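To make the intended matching concrete, here is a minimal Python sketch; the world object and its reflectance_of method are hypothetical stand-ins for whatever access to the universe a program would have.

    # "Snow is white" matched to a program that actually inspects the
    # world and answers accordingly. world and reflectance_of are
    # stand-ins, not a real API.
    def snow_is_white(world):
        return world.reflectance_of("snow") > 0.8

    # A "program" that is a bunch of random characters, by contrast,
    # returns no result at all; the statement matched to it would then
    # count as meaningless.
    try:
        compile("xq]]2#w@@ lk", "<statement>", "eval")
    except SyntaxError:
        pass  # no result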
If your closest match to “post-utopian” is a program that determines whether professors think someone is post-utopian, then you can either conclude that post-utopian literally means “something people call post-utopian”, which would be a weird and nonstandard word use in the same way that using “snow” to mean “oil” would be, or that post-utopianism isn’t meaningful.
Yeah, probably all theories of truth are circular and the concept is simply non-tabooable. I agree your explanation doesn’t make it worse, but it doesn’t make it better either.
But that’s only useful if you make it circular.