You’ll find the whole thing pretty interesting. It concerns decision theory more than the rationality of belief, though the two are deeply connected (the connection is an interesting topic for speculation in itself). Here’s a brief summary of the book. I’m pretty partial to it.
Thinking about Acting: Logical Foundations for Rational Decision Making (Oxford University Press, 2006).
The objective of this book is to produce a theory of rational decision making for realistically resource-bounded agents. My interest is not in “What should I do if I were an ideal agent?”, but rather, “What should I do given that I am who I am, with all my actual cognitive limitations?”
The book has three parts. Part One addresses the question of where the values come from that agents use in rational decision making. The most common view among philosophers is that they are based on preferences, but I argue that this is computationally impossible. I propose an alternative theory somewhat reminiscent of Bentham, and explore how human beings actually arrive at values and how they use them in decision making.
Part Two investigates the knowledge of probability that is required for decision-theoretic reasoning. I argue that subjective probability makes no sense as applied to realistic agents. I sketch a theory of objective probability to put in its place. Then I use that to define a variety of causal probability and argue that this is the kind of probability presupposed by rational decision making. So what is to be defended is a variety of causal decision theory.
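To make the contrast concrete, here is the standard textbook way of putting the difference between evidential and causal expected utility (a gloss of my own, not Pollock’s formulation; his causal probabilities are constructed from his objective probabilities rather than taken as primitive, and the notation for causal probability varies across causal decision theorists):

```latex
% Evidential expected utility: weight outcomes by ordinary conditional probability
EU_{\mathrm{evid}}(A) = \sum_i P(O_i \mid A)\, U(O_i)

% Causal expected utility: weight outcomes by the probability that doing A
% would causally bring about O_i (notation for this varies by author)
EU_{\mathrm{causal}}(A) = \sum_i P(O_i \,\|\, A)\, U(O_i)
```

In Newcomb-style cases the two come apart: an action can be evidence for a good outcome without doing anything to bring it about, and causal decision theory says only the latter should guide choice.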
Part Three explores how these values and probabilities are to be used in decision making. In chapter eight, it is argued first that actions cannot be evaluated in terms of their expected values as ordinarily defined, because that does not take account of the fact that a cognizer may be unable to perform an action, and may even be unable to try to perform it. An alternative notion of “expected utility” is defined to be used in place of expected values. In chapter nine it is argued that individual actions cannot be the proper objects of decision-theoretic evaluation. We must instead choose plans, and select actions indirectly on the grounds that they are prescribed by the plans we adopt. However, our objective cannot be to find plans with maximal expected utilities. Plans cannot be meaningfully compared in that way. An alternative, called “locally global planning”, is proposed. According to locally global planning, individual plans are to be assessed in terms of their contribution to the cognizer’s “master plan”. Again, the objective cannot be to find master plans with maximal expected utilities, because there may be none, and even if there are, finding them is not a computationally feasible task for real agents. Instead, the objective must be to find good master plans, and to improve on them as better ones come along. It is argued that there are computationally feasible ways of doing this, based on defeasible reasoning about values and probabilities.
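As a rough computational gloss, locally global planning looks like an anytime improvement loop over a master plan. The sketch below is mine, not Pollock’s algorithm: the encoding of plans as sets of labeled actions, the fixed utility values, and the scoring function are all illustrative assumptions. What it does capture is the loop structure — assess each local change by its contribution to the whole master plan, and settle for good plans that keep getting improved rather than searching for a maximal one:

```python
import random

# Toy encoding (my assumption, not the book's): a plan is a frozenset of
# labeled actions, each with a fixed utility contribution.
ACTIONS = {"write": 5.0, "exercise": 3.0, "nap": 1.0, "procrastinate": -2.0}

def expected_utility(plan):
    """Illustrative stand-in for the expected utility of a whole plan."""
    return sum(ACTIONS[a] for a in plan)

def locally_global_planning(master_plan, steps=200):
    """Anytime improvement loop: propose a local change, assess it by its
    effect on the whole master plan, and adopt it only if the master plan's
    expected utility goes up. No attempt to find a maximal plan."""
    best = expected_utility(master_plan)
    for _ in range(steps):
        action = random.choice(list(ACTIONS))
        candidate = master_plan ^ {action}    # toggle one local sub-plan
        value = expected_utility(candidate)   # global, not local, assessment
        if value > best:
            master_plan, best = candidate, value
    return master_plan

print(locally_global_planning(frozenset()))
# e.g. frozenset({'write', 'exercise', 'nap'})
```

The toy omits the part Pollock leans on most: in the real theory the utilities and probabilities being compared are themselves the products of defeasible reasoning, so adopted plans remain open to revision as that reasoning is corrected.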
I once spoke with David Schmidtz, a philosopher at the University of Arizona, about Schwartz’s work. All Schwartz shows is that having more choices makes people anxious and confused. But Dave told me that he got Schwartz to admit that being anxious and confused isn’t the same thing as suffering a net decrease in utility. It’s not even close.