I think the original post is not specific enough to be useful.
I see two essential unresolved points:
1) Why should there be a system of continuous correspondences between the truth values of sentences that has anything to do with some intuitive notion of truth values?
2) Are the truth values (of the sentences) after taking the fixed point actually useful? E.g. can’t it be that we end up with truth values of 1/2 for almost every sentence we can come up with?
Until these points are cleared up, the original post is merely extremely vague speculation.
A closely related analogue to the second issue: in NP-hard optimization problems with a lot of {0,1} variables, it is a very common outcome that after continuous relaxation the system is easily (polynomially) solvable, but the solution is worthless, as a large fraction of the variables end up at 1/2, which basically says: “no information”.
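A minimal toy illustration of that phenomenon (my own example, not from the post): minimum vertex cover on a triangle. The best {0,1} solution needs two vertices, but the continuous relaxation admits the strictly cheaper all-1/2 point, which tells you nothing about which vertices to pick.

```python
from itertools import product

# Toy example: minimum vertex cover on a triangle graph.
# Constraints: x_u + x_v >= 1 for every edge; objective: minimize sum(x).
edges = [(0, 1), (1, 2), (0, 2)]

def feasible(x):
    return all(x[u] + x[v] >= 1 for u, v in edges)

# Best integral ({0,1}) solution: brute force over all 8 assignments.
best_integral = min(sum(x) for x in product([0, 1], repeat=3) if feasible(x))

# The continuous relaxation admits x = (1/2, 1/2, 1/2) with cost 1.5 --
# strictly cheaper than any integral cover, but every variable is 1/2:
# "no information" about which vertices to actually take.
fractional = (0.5, 0.5, 0.5)

print(best_integral)          # 2
print(feasible(fractional))   # True
print(sum(fractional))        # 1.5
```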
1) Why should there be a system of continuous correspondences between the truth values of sentences that has anything to do with some intuitive notion of truth values?
I can’t figure out what you’re trying to ask here.
2) Are the truth values (of the sentences) after taking the fixed point actually useful? E.g. can’t it be that we end up with truth values of 1/2 for almost every sentence we can come up with?
I suppose the best answer I can give to this is “maybe”. If the logic operations you use are 1-x, min(x,y), and max(x,y), and the sentences are entirely baseless (i.e. no sentence can be calculated independently of all the others), then a truth value of 1/2 for everything will always be consistent. If your sentences happen to form a hierarchy where sentences can only talk about sentences lower down, fuzzy logic will give a good answer.
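A sketch of both cases (my own construction, using the negation 1-x mentioned above; min/max would be handled the same way). A baseless self-referential Liar sentence settles at 1/2 under damped fixed-point iteration, while sentences grounded in a base fact converge to crisp, informative values.

```python
# Damped fixed-point iteration for fuzzy truth values.
# s0: a base fact with value 0.7            (grounded)
# s1: "s0 is false" -> 1 - v[0]             (grounded via the hierarchy)
# s2: "s2 is false" -> 1 - v[2]             (baseless Liar sentence)
def step(v):
    return [0.7, 1 - v[0], 1 - v[2]]

v = [0.0, 0.0, 0.0]
for _ in range(100):
    f = step(v)
    v = [(a + b) / 2 for a, b in zip(v, f)]  # damping avoids oscillation

# Grounded sentences get crisp values; the Liar ends up at 1/2.
print([round(x, 6) for x in v])  # [0.7, 0.3, 0.5]
```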
The NP-hard optimization thing you cite is interesting; do you have a link?
Finally, in my defense, the purpose of this post was mainly to advocate for the use of fuzzy logic through the insight that it resolves paradoxes much more elegantly than ordinal type hierarchy thingies, and to mention that fuzzy logic seems to be the only good way to deal with subjective things such as tallness and beauty anyway.
To 1):
I suspected, but was not sure, that you meant the standard min/max relaxation of the logical operators. You could have had more elaborate plans (which I could not rule out) that could have led to unexpectedly interesting consequences, but this is highly speculative. An analogue again from combinatorial optimization: moving away from linear (essentially min/max based) relaxations to semidefinite ones can non-trivially improve the performance of coloring and SAT-solving algorithms, at least asymptotically.
“The NP-hard optimization thing you cite is interesting; do you have a link?”
This is well-known practical folklore in that area, not explicitly the topic of publications but rather part of the introductory training. If you want to take a closer look, search for randomized rounding, which is a well-established technique and can yield good results for certain problem classes, but may flop for others, exactly due to the above-mentioned dominance of fractional solutions (integer/decision variables taking half-integer values being the typical case). E.g. undergraduate course materials on the traveling salesman problem have concrete examples of that issue occurring in practice.
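For concreteness, a hedged sketch of randomized rounding (my own toy code, on a triangle vertex-cover instance; not from any particular course): each variable is set to 1 with probability equal to its fractional value. When the fractional solution is all 1/2, the rounding degenerates into pure coin flipping, which is exactly the "no information" failure mode described above.

```python
import random

edges = [(0, 1), (1, 2), (0, 2)]  # triangle graph

def is_cover(x):
    return all(x[u] == 1 or x[v] == 1 for u, v in edges)

def randomized_round(frac, rng):
    # Set x_i = 1 with probability frac[i].
    return [1 if rng.random() < p else 0 for p in frac]

rng = random.Random(0)
frac = [0.5, 0.5, 0.5]  # the uninformative half-integral fractional solution

# Round repeatedly until feasible; with all-1/2 values the fractional
# solution contributes no guidance -- every round is a fair coin flip.
x = randomized_round(frac, rng)
while not is_cover(x):
    x = randomized_round(frac, rng)

print(is_cover(x))  # True
print(sum(x))       # cover size found by chance (2 or 3)
```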