Can we estimate the probability of this 3rd hypothesis or even compare it with the probability of the other two?
It seems to me that it is actually easy to define a function $u'(\cdot) \ge 0$ such that the preferences are represented by $E(u'^2)$ and not by $E(u')$: just take $u' = \sqrt{u}$, and you can do the same for any value of the exponent, so the expectation does not play a special role in the theorem; you can replace it with any $L^p$ norm.
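To make the trick explicit (my own spelling-out, assuming $u \ge 0$): for any exponent $p > 0$ set

$$u' = u^{1/p} \quad\Longrightarrow\quad E\big[(u')^p\big] = E\big[(u^{1/p})^p\big] = E[u],$$

so the functional $E[(u')^p]$ ranks lotteries exactly as $E[u]$ does.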
There are infinitely many ways to find utility functions that represent preferences on outcomes: for example, if outcomes are monetary, then any increasing function is equivalent on outcomes, but not when you try to extend it to distributions and lotteries with the expected value.
I wonder if, given a specific function $u(\cdot)$ on every outcome, you can also choose "rational" preferences (as in the theorem) according to some other operator on the distributions that is not the average; for example, what about the $L^p$ norm or the sup of the distribution (if they are continuous)?
Or is the expected value the unique special operator that has the property stated by the vNM theorem?
You don't necessarily need to start from the preferences and use the theorem to define the function; you can also start from the utility function and try to produce an intuitive explanation of why you should prefer to have the best expected value.
Thank you for your insight. The problem with this view of utility "just as a language" is that sometimes I feel that the conclusions of utility maximization are not "rational", and I cannot figure out why they should indeed be rational if the language is not saying anything that is meaningful to my intuition.
Very interesting observations. I wouldn't say the theorem is used to support his assumption, because the assumptions don't speak about utils but only about preferences over possible outcomes and lotteries, but I see your point.
Actually the assumptions are implicitly saying that you are not rational if you don't want to risk getting a $1,000,000,000,000 debt with a small enough probability rather than losing 1 cent (this is straightforward from the Archimedean property).
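For concreteness, the version of the continuity/Archimedean axiom I am referring to says: if $A \succ B \succ C$, then there exists some $p \in (0,1)$ such that

$$p\,A + (1-p)\,C \;\succ\; B.$$

Taking $A$ = the status quo, $B$ = losing 1 cent and $C$ = the huge debt, the axiom forces you to prefer some mixture of the status quo with the debt over the sure loss of 1 cent, no matter how bad $C$ is.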
Ok, we have a theorem that says that if we are not maximizing the expected value of some function "u" then our preferences are apparently "irrational" (violating some of the axioms). But assume we already know our utility function before applying the theorem: is there an argument that shows how and why the preference of B over A (or maybe indifference) is irrational if $E(U(A)) > E(U(B))$?
Apparently the axioms can be considered to talk about preferences, not necessarily about probabilistic expectations. Am I wrong in seeing them in this way?
It seems indeed quite reasonable to maximize utility if you can choose an option that makes it possible; my point is why you should maximize expected utility when the choice is under uncertainty.
Thank you for the reference
The ideal gas does have a mathematical definition of entropy; Boltzmann used it in the statistical derivation of the second law:
https://en.wikipedia.org/wiki/Entropy_(statistical_thermodynamics)
Here is an account of Boltzmann's work and the first objections to his conclusions:
https://plato.stanford.edu/entries/statphys-Boltzmann/
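For reference, the definition I have in mind is Boltzmann's statistical one,

$$S = k_B \ln W,$$

where $W$ counts the microstates (or, in the coarse-grained version, the phase-space volume) compatible with the observed macrostate.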
I think you are not considering some relevant points:
1) the artificial system we are considering (an ideal gas in a box) (a) is often used as an example to illustrate and even to derive the second law of thermodynamics by means of mathematical reasoning (Boltzmann's H-theorem), and (b) this is because it actually appears to be a prototype for the idea of the second law of thermodynamics, so it is not just a random example, it is the root of our intuition about the second law
2) the post is talking about the logic behind the arguments which are used to justify the second law of thermodynamics
3) The core point of the post is this:
in the simple case of the ideal gas in the box we end up thinking that it must evolve as the second law prescribes, and we also have arguments to prove this that we find convincing
yet the ideal gas model, as a toy universe, doesn't really behave like that: even if it is counterintuitive, decreases of entropy are as frequent as increases of entropy
therefore our intuition about the second law and the arguments supporting it seem to have some problem
so maybe the second law is true but our way of thinking about it is not, or maybe the second law is not true and our way of thinking about the universe is flawed: in any case we have a problem
An ideal gas in a box is an ergodic system. The Poincaré recurrence theorem states that a volume-preserving dynamical system (i.e. any conservative system in classical physics) returns infinitely often to any neighbourhood (as small as you want) of any point of the phase space.
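Stated a bit more precisely (the measure-theoretic form I am relying on): if a map $T$ preserves a finite measure $\mu$ on the phase space $X$, then for every measurable set $A$ with $\mu(A) > 0$,

$$\mu\big(\{x \in A : T^n x \in A \text{ for infinitely many } n\}\big) = \mu(A),$$

i.e. $\mu$-almost every point of $A$ returns to $A$ infinitely often. For the gas in the box, $X$ is the bounded accessible region of phase space and $\mu$ is the Liouville measure preserved by the Hamiltonian flow.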
“What mechanism exists to cause the particles to vary in speed (given the magical non-deforming non-reactive box we are containing things in)?”
The system is a compact deterministic dynamical system and Poincaré recurrence applies: it will return infinitely many times close to any low-entropy state it was in before. Since there are only 3 particles, the time needed for the return is small.
“conditional on any given (nonmaximal) level of entropy, the vast majority of states have increasing entropy”
I don't think this statement can be true in any sense that would produce a non-symmetric behavior over a long time, and indeed it has some problems if you try to express it in a more accurate way:
1) what does "non-maximal" mean? You don't really have a single maximum, you have an average maximum and random oscillations around it
2) the “vast majority” of states are actually little oscillations around an average maximum value, and the downward oscillations are as frequent as the upward oscillations
3) any state of low entropy must have been reached in some way, and the time needed to go from the maximum to the low-entropy state should be almost equal to the time needed to go from the low entropy back to the maximum: why should it be different if the system has time-symmetric laws?
In your graph you take very little time to reach low-entropy states from high entropy, compared to the time needed to reach high entropy again, but would this make the high-to-low transition look more natural or more "probable"? Maybe it would look even more unnatural and improbable!
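The time-symmetry argument, spelled out: if $t \mapsto (q(t), p(t))$ solves Hamilton's equations for a time-reversal-invariant Hamiltonian ($H(q,p) = H(q,-p)$, as for the ideal gas), then so does

$$t \;\mapsto\; \big(q(-t),\, -p(-t)\big),$$

and the coarse-grained entropy does not depend on the sign of the momenta. So every trajectory segment along which entropy climbs from a low value has a mirror segment of exactly the same duration along which it descends back to that value.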
Good point, but gravity could be enough to keep the available positions in a bounded set.
You do have spontaneous entropy decreases in very "small" environments. For a gas in a box with 3 particles, entropy fluctuates on human-scale times.
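A tiny numerical illustration of this (my own toy sketch, not anything from the post): three non-interacting particles bouncing in a 1D box, with the macrostate "number of particles in the left half" and its Boltzmann entropy $\ln W$. The entropy visibly goes up and down within a few simulated time units.

```python
# Toy sketch: three ideal-gas particles bouncing elastically in a 1D box.
# The macrostate is "how many particles are in the left half"; its Boltzmann
# entropy S = ln(C(3, k)) fluctuates up AND down on short timescales.
import numpy as np
from math import comb, log

rng = np.random.default_rng(0)

N = 3                      # number of particles
L = 1.0                    # box length
x = rng.uniform(0, L, N)   # initial positions
v = rng.normal(0, 1, N)    # initial velocities
dt = 0.01

def entropy(positions):
    """Boltzmann entropy ln(W) of the macrostate 'k particles in the left half'."""
    k = int(np.sum(positions < L / 2))
    return log(comb(N, k))

for step in range(2000):
    x += v * dt
    # elastic reflection at the walls
    for i in range(N):
        if x[i] < 0:
            x[i], v[i] = -x[i], -v[i]
        elif x[i] > L:
            x[i], v[i] = 2 * L - x[i], -v[i]
    if step % 100 == 0:
        print(f"t = {step * dt:5.2f}   S = {entropy(x):.3f}")
```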
In order to apply Poincaré recurrence it is the set of available points of the phase space that must be "compact", and this is likely the case if we assume that the total energy of the universe is finite.
Entropy "reversals", i.e. decreases, must be as frequent as entropy increases: you cannot have an increase if you didn't have a decrease before. My graph is not quantitatively accurate for sure, but with a rescaling of times it should be ok.
Your point is that, in the case of the low-entropy universe, you have many more possibilities for the time to consider for its random formation, compared to the single brain?