Many important problems in graph theory, Ramsey theory, etc. were solved by considering random combinatorial objects (this was one of the great triumphs of Paul Erdős), and it seems very unlikely that thinking in purely deterministic terms would have solved these problems.
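For concreteness, here is the standard Erdős (1947) lower bound for the diagonal Ramsey numbers, sketched in LaTeX purely as an illustration of the method (it is the textbook argument, not anything specific to the discussion above):

```latex
% Erdős (1947): a lower bound for the diagonal Ramsey number R(k,k).
% If the expected number of monochromatic K_k's is below 1, some coloring has none.
\begin{align*}
  &\text{Color each edge of } K_n \text{ red or blue independently with probability } \tfrac12. \\
  &\text{For a fixed } k\text{-set } S:\quad
    \Pr[\,S \text{ spans a monochromatic } K_k\,] = 2^{\,1-\binom{k}{2}}. \\
  &\text{By the union bound:}\quad
    \Pr[\,\exists \text{ a monochromatic } K_k\,]
    \;\le\; \binom{n}{k}\, 2^{\,1-\binom{k}{2}}. \\
  &\text{If } \binom{n}{k}\, 2^{\,1-\binom{k}{2}} < 1, \text{ some 2-coloring of } K_n
    \text{ has no monochromatic } K_k, \text{ so } R(k,k) > n. \\
  &\text{Taking } n = \lfloor 2^{k/2} \rfloor \text{ yields } R(k,k) > 2^{k/2} \text{ for all } k \ge 3.
\end{align*}
```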
From a Bayesian perspective, a probability is a cognitive object representing the known evidence about a proposition, flattened into a number. It wouldn’t make sense to draw conclusions about e.g. the existence of certain graphs, just because we in particular are uncertain about the structure of some graph.
The “probabilistic method”, IMHO, is properly viewed as the “measure-theoretic method”, which is what mathematicians usually mean by “probabilistic” anyway. That is, constructions involving random objects can usually (always?) be thought of as putting a measure on the space of all objects, and then arguing about sets of measure 0 and 1, etc. (I would be interested in seeing examples where this transformation is (a) not relatively straightforward or (b) impossible; the Ramsey argument above is redone in these terms after the quote below.) Although the math is the same up to a point, these are different conceptual tools. From Jaynes, Probability Theory:
For example our system of probability could hardly, in style, philosophy, and purpose, be more different from that of Kolmogorov. What we consider to be fully half of probability theory as it is needed in current applications (the principles for assigning probabilities by logical analysis of incomplete information) is not present at all in the Kolmogorov system. Yet when all is said and done we find ourselves, to our own surprise, in agreement with Kolmogorov and in disagreement with his critics, on nearly all technical issues.
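To make that translation concrete, here is the same Ramsey bound with the randomness language stripped out: a normalized counting measure on the finite space of all colorings, with subadditivity doing the work of the union bound (again, just an illustrative sketch of the routine rewording):

```latex
% The same argument stated purely in measure terms: no "random coloring",
% only a normalized counting measure on a finite set and subadditivity.
\begin{align*}
  &\Omega = \{\text{2-colorings of } E(K_n)\}, \qquad
    \mu(A) = |A| / |\Omega| \;\text{ for } A \subseteq \Omega. \\
  &\text{For each } k\text{-set } S, \text{ let }
    B_S = \{\omega \in \Omega : S \text{ is monochromatic under } \omega\},
    \qquad \mu(B_S) = 2^{\,1-\binom{k}{2}}. \\
  &\text{By subadditivity:}\quad
    \mu\Big(\bigcup_S B_S\Big) \;\le\; \binom{n}{k}\, 2^{\,1-\binom{k}{2}} \;<\; 1, \\
  &\text{so } \Omega \setminus \bigcup_S B_S \neq \emptyset:
    \text{ a coloring of } K_n \text{ with no monochromatic } K_k \text{ exists.}
\end{align*}
```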
Whether thinking in terms of randomness is a useful conceptual tool is a different question; personally, I try to separate the intuitions into Bayesian (for cognition) and measure theory (for everything else, e.g. randomized algorithms, quantum mechanics, etc.). It would be nice if these were one and the same, i.e. if Bayesian probability were just measure-theoretic probability over sets of hypotheses, but I don’t know of a good choice for the hypothesis space. The Cox theorems work from basic desiderata of a probabilistic calculus, independent of any measure theory; that is the basis of Bayesian probability theory (see Jaynes, Chapter 2).