Yes, propositions are abstractions which don’t exactly correspond to anything in our mind. But they do seem to have advantages: when communicating, we use sentences, which can be taken to express propositions. And we do seem to intuitively have propositional attitudes in our mind, like “beliefs” (believing a proposition to be true) and “desires” (wanting a proposition to be true), which are in turn expressible in sentences. So propositions seem to be a quite natural abstraction. Treating them as being either true or false is a further simplification which works out well often enough.
More complex “models” are often less convenient entities than propositions. We can’t grasp them as simple things we can believe or disbelieve, and we can’t communicate them via simple sentences. Propositions allow us to model things in terms of logic, which combines propositions into more complex ones via logical connectives. We can have a Boolean algebra of propositions, but hardly of “models”. Propositions are simpler entities than models, less internally sophisticated, but more flexible.
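To make the “Boolean algebra of propositions” point concrete, here is a minimal sketch in Python (the worlds and propositions are made up purely for illustration): each proposition is identified with the set of possible worlds in which it is true, so the logical connectives become set operations.

```python
from dataclasses import dataclass

# Hypothetical possible worlds, just for illustration.
worlds = frozenset({"w1", "w2", "w3", "w4"})

@dataclass(frozen=True)
class Prop:
    true_in: frozenset  # worlds where the proposition holds

    def __and__(self, other):   # conjunction
        return Prop(self.true_in & other.true_in)

    def __or__(self, other):    # disjunction
        return Prop(self.true_in | other.true_in)

    def __invert__(self):       # negation (complement within the world set)
        return Prop(worlds - self.true_in)

# Two illustrative propositions
rain = Prop(frozenset({"w1", "w2"}))
cold = Prop(frozenset({"w2", "w3"}))

# Connectives compose them into more complex propositions
assert (rain & cold).true_in == frozenset({"w2"})
assert (~(rain | cold)).true_in == frozenset({"w4"})
```

Nothing comparably simple composes two arbitrary “models” into a third one, which is the asymmetry the paragraph above points at.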
There are also cases where models/theories have historically been made more general by introducing propositions. For example, when Kolmogorov formalized probability theory as a measure on a Boolean algebra of propositions (or “events”, which amounts to the same thing). Or when Richard Jeffrey generalized Savage’s decision theory (which was defined on separate domains of “acts”, “states”, and “outcomes”) to instead use a single Boolean algebra of propositions.
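For reference, here are the two formalisms mentioned above in their usual textbook statements (Kolmogorov in the finite-additivity version for brevity, Jeffrey’s desirability axiom as in The Logic of Decision), with ⊤ and ⊥ for the sure and the impossible proposition:

```latex
% Kolmogorov: a probability P on a Boolean algebra of events/propositions.
% Jeffrey: a desirability function des on the same algebra.
\begin{align*}
  &\text{Kolmogorov:} &
    P(A) \ge 0, \qquad P(\top) = 1, \qquad
    P(A \lor B) = P(A) + P(B) \quad \text{if } A \land B = \bot \\
  &\text{Jeffrey:} &
    \mathrm{des}(A \lor B) =
    \frac{P(A)\,\mathrm{des}(A) + P(B)\,\mathrm{des}(B)}{P(A) + P(B)}
    \quad \text{if } A \land B = \bot \text{ and } P(A \lor B) > 0
\end{align*}
```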
Propositions are rarely helpful for more complex models, so I do agree that their application is limited. I also think that the usefulness of propositions has been overestimated in the past. For the development of AI, it turned out that logic and Bayesian probability theory (and other forms of “symbolic AI”) were of surprisingly little use.