Logos01, we seem to be using different definitions of “rational behavior”. So far I can’t tell if this stems from a political dispute, or a factual disagreement, or just an argument that took on a life of its own.
We are, and I initially noted this when I parsed “rational behavior” from “instrumentally rational behavior”.
Please try to state your first claim (from the great-grandparent comment) without using the word “rational” or “reason” or any synonym thereof.
This is a request that is definitionally impossible, since the topic at hand was “what is rational behavior”.
For my own position: if you choose not to do what works or “wins”, then complaining about how you conformed to the right principles will accomplish nothing (except in cases where complaining does help). It will not change the outcome, nor will it increase your knowledge.
No contest.
In my case, what I was getting at was the notion that it is possible to present a counterfactual scenario where doing what “loses” is rationally necessary. For this to occur it would be necessary for the processes of rationality available to Louis-the-Loser to have a set of goals, and come to the conclusion that it is necessary to violate all of them.
Let’s assume that Louis-the-Loser has a supergoal of giving all humans more happiness over time. Over time, Louis comes to the factual conclusion (with total certainty) that humans are dead set on reducing their happiness to negative levels—permanently; and that further they have the capacity to do so. Perhaps you now think that Louis’s sole remaining rational decision is to fail his supergoal: to kill all humans. And this would be maximal happiness for humans since he’d prevent negative happiness, yes? (And therefore, “win”)
But there’s a second answer to this question. And it is equally rationally viable. Give the humans what they want. Allow them to constantly and continuously maximize their unhappiness; perhaps even facilitate them in that endeavor. Now, why is this a reasonable thing to do? Because even total certainty can be wrong, and even P=1 statements can be revised.
However, it does require Louis to actually lose.
To focus on one point that seems straightforward: that “even total certainty can be wrong, and even P=1 statements can be revised.”
My first response was, no they can’t. I’ll change that to, “how, exactly?”
I suppose the verbal utterance of “P=1” can be revised. Just not any belief system with a P=1 in it without external hacking (or bad updating).
By being wrong when you made said statement. Or by a fundamental shift in reality.
I don’t think you understand what P(X)=1 means. It doesn’t just mean X is going to happen if the laws of the universe remain the same; it doesn’t just mean X is going to happen if 3^^^3 coins are flipped and at least one lands on heads.
It means that X happens in every possible version of our universe from this point onward. Including ones where the universe is a simulation that explicitly disallows X.
(The only time P(X) = 1 makes sense is in mathematics, e.g. P(two random lines in 2D space are not parallel) = 1)
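As an aside, a one-line application of Bayes’ theorem makes this concrete: if an agent genuinely assigns P(X) = 1, then no evidence E with P(E) > 0 can move that assignment (E here is just any observation you might condition on).

```latex
% Updating a prior of exactly 1: since P(not-X) = 0, the second term of the
% denominator vanishes and the posterior is 1 again, whatever E turns out to be.
\[
P(X \mid E)
  = \frac{P(E \mid X)\,P(X)}{P(E \mid X)\,P(X) + P(E \mid \neg X)\,P(\neg X)}
  = \frac{P(E \mid X)\cdot 1}{P(E \mid X)\cdot 1 + P(E \mid \neg X)\cdot 0}
  = 1.
\]
```

So within the formalism the assignment never moves; the only way a stated “P=1” ever gets revised is the “person making the assertion can be wrong” route discussed just below.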
Ergo, for P(X)=1 to be revised requires the person making that assertion be wrong, or for there to be a fundamental shift in reality.
Yeah, the person making the assertion can be wrong.
As for the “fundamental shift in reality” part: huh? Did you read what I wrote? “It doesn’t just mean X is going to happen if the laws of the universe remain the same [...] It means that X happens in every possible version of our universe from this point onward.”
Every. Possible. Universe. This accounts for “fundamental shift[s] in reality”.
Yup, I most assuredly did.
Saving for those in which the principle you related is altered. Don’t try to wrap your head around it. It’s a paradox.
Which principle?
“[P(X)=1] doesn’t just mean X is going to happen if the laws of the universe remain the same; it doesn’t just mean X is going to happen if 3^^^3 coins are flipped and at least one lands on heads.
It means that X happens in every possible version of our universe from this point onward. Including ones where the universe is a simulation that explicitly disallows X.”
There is no paradox. Mathematics is independent of the physics of the universe in which it is being discussed, e.g. “The integers” satisfy the same properties as they do for us, even if there are 20 spatial dimensions and 20 temporal ones.
Sure, you can change the axioms you start with, but then you are talking about different objects.
The principles by which mathematics operates, certainly. Two things here:
1) I did not say a fundamental shift in the physics of reality.
2) The mathematics of probability describe real-world scenarios. Descriptions are subject to change.
As for “you can change the axioms you start with, but then you are talking about different objects”: in this, you have my total agreement.
How could one get a “fundamental” shift by any other method?
Yes, but the axioms of probability theory aren’t (yes, it has axioms). So something like “to determine the probability of X you have to average the occurrences of X in every possible situation” won’t change.
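For reference, the axioms presumably being invoked here are Kolmogorov’s, and they are short enough to state in full (Ω is the sample space, 𝓕 the collection of events, P the probability measure):

```latex
% Kolmogorov's axioms for a probability measure P on a sigma-algebra F of
% events over a sample space Omega.
\[
\begin{aligned}
&\text{(1)}\quad P(A) \ge 0 \quad \text{for every event } A \in \mathcal{F},\\
&\text{(2)}\quad P(\Omega) = 1,\\
&\text{(3)}\quad P\Bigl(\bigcup_{i=1}^{\infty} A_i\Bigr) = \sum_{i=1}^{\infty} P(A_i)
  \quad \text{for pairwise disjoint } A_1, A_2, \ldots \in \mathcal{F}.
\end{aligned}
\]
```

(The “average the occurrences of X in every possible situation” gloss is one common interpretation layered on top of these.)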
Reality is not necessarily constrained by that which is physical. The Laws of Physics themselves, the Laws of Logic, and several other such wholly immaterial and non-contingent elements are all considered real despite not existing (numbers, for example).
It is possible to postulate a counterfactual where any of these could be altered.
Enter the paradox I spoke of. The fact that certain things aren’t subject to change itself can be subject to change.
Any shift to the “Laws of Logic” is equivalent to a change in axioms, so we wouldn’t be talking about our probability. And also, all this is accounted for by the whole “average across all outcomes” bit (this is a bit hand-wavy, since one has to weight the average by the Kolmogorov complexity (or some such) of the various outcomes).
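The hand-wavy part has a standard form: in a Solomonoff-style prior, each hypothesis h about how reality works is weighted roughly in proportion to 2^(−K(h)), where K(h) is its Kolmogorov complexity, so the “average across all outcomes” looks something like:

```latex
% Complexity-weighted averaging over hypotheses h consistent with what is known;
% K(h) denotes the Kolmogorov complexity (description length) of h.
\[
P(X) \;\approx\;
\frac{\sum_{h \,:\, h \text{ predicts } X} 2^{-K(h)}}
     {\sum_{h} 2^{-K(h)}}.
\]
```

On this picture, exotic possibilities (including “altered Laws of Logic”) enter as heavily penalized hypotheses rather than breaking the averaging itself.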
Nope. Here are the natural numbers {0,1,2,...}:
There is a natural number 0.
Every natural number a has a natural number successor, denoted by S(a).
There is no natural number whose successor is 0.
S is injective, i.e. if a ≠ b, then S(a) ≠ S(b).
If a property is possessed by 0 and also by the successor of every natural number which possesses it, then it is possessed by all natural numbers.
This has uniquely identified the natural numbers. There isn’t a counterfactual where those 5 axioms don’t identify the naturals.
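As a purely illustrative sketch (the class and function names Nat, Zero, Succ, and add are invented for the example), it may help to see how little slack those five axioms leave: once zero and successor are fixed, operations such as addition are forced by induction.

```python
# Toy Peano-style naturals: Zero() plays the role of 0, Succ(n) the role of S(n).
# Structural equality mirrors the injectivity axiom: Succ(a) == Succ(b) exactly
# when a == b, and Zero() is never equal to any Succ(...).

class Nat:
    """Base class for Peano-style naturals."""

class Zero(Nat):
    def __eq__(self, other):
        return isinstance(other, Zero)

class Succ(Nat):
    def __init__(self, pred: Nat):
        self.pred = pred

    def __eq__(self, other):
        return isinstance(other, Succ) and self.pred == other.pred

def add(a: Nat, b: Nat) -> Nat:
    """Addition defined by induction on the first argument:
    0 + b = b,  S(a) + b = S(a + b)."""
    if isinstance(a, Zero):
        return b
    return Succ(add(a.pred, b))

one = Succ(Zero())
two = Succ(Succ(Zero()))
three = Succ(Succ(Succ(Zero())))
assert add(one, two) == three   # 1 + 2 = 3, forced by the definitions above
```

Any other structure satisfying the five axioms (with the induction axiom read in its full second-order form) is isomorphic to this one, which is the sense in which there isn’t a counterfactual where they fail to pick out the naturals.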
But you said: “Sure, you can change the axioms you start with, but then you are talking about different objects.”
It’s a paradox. You’re expecting it to make sense and not be contradictory. That’s… the opposite of correct.
No, there is no paradox. If one changes the axioms of probability, then you have a new, different version of probability, which cannot be directly compared to our current version, because they are different constructions.