So, the reasoning method used in the weird world is a generalization of the normal reasoning method. That's not a statement about math. However, it should have some implications for math, because we're dealing with objectively different world models that make objectively different predictions. Of course some of the math is going to be different.
Something that, without context, looks like “just a different model/set of assumptions” may, with context, count as a conceptual generalization. (Judging the difference between ideas only by the difference in their mathematical descriptions may be misleading.)
I think “regular” Bayesian probability theory handles your examples quite well. If you view updating as just applying Bayes' rule, then using different starting assumptions is just business as usual. It would just note that something like “without context” is in fact not the thing you are describing.
Using one of your examples:
Imagine two worlds. In the weird world, the property of intelligence is “shared” among beings. In the normal world, it is not “shared”.
You have a question “Are there many beings smarter than humans?”.
Then you encounter some beings much smarter than humans.
In the normal world, you update towards answering “yes” using Bayes' rule.
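(For concreteness, here's a minimal numeric sketch of that normal-world update. All the probabilities below are made-up placeholders I'm assuming for illustration, not numbers from the example.)

```python
# Hedged sketch of the normal-world update. H = "there are many beings
# smarter than humans", E = "I encountered some beings much smarter
# than humans". All numbers below are assumed for illustration.
p_h = 0.5               # prior P(H) (assumed)
p_e_given_h = 0.9       # P(E | H): easy to meet such beings if many exist (assumed)
p_e_given_not_h = 0.1   # P(E | not H) (assumed)

# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e

print(p_h_given_e)  # 0.9 -- the evidence pushes you towards "yes"
```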
Example:
So you have this box, you put in n intelligence marbles, and then you draw collections of them out again. I am not sure this model is close to what you had in mind, but in principle you can then compute your prior with the maximum entropy principle. In practice, there are lots of assumptions that you only notice once you see how badly your model fits the situation. Your first collection (“the human”) has a size of 1⁄100 n. Then you draw two collections of 1⁄4 n each. This updates you toward there being fewer marble collections greater than 1⁄100 n, because your estimate for the mean collection size has risen. Had you drawn two collections of size ~1⁄90 n instead, this would have updated you towards a higher estimate for the number of collections larger than the first.
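(To make the marble picture concrete, here's a rough simulation of one way to read it. The exponential size distribution and the flat grid prior over the mean are stand-ins I'm assuming in place of a proper maximum-entropy prior; they aren't fixed by the example itself.)

```python
import numpy as np

n = 1.0          # total intelligence in the box (normalized)
first = n / 100  # size of the first collection ("the human")

# Assumption: collection sizes are exponential with unknown mean mu --
# the maximum-entropy distribution for a positive quantity with a given
# mean. A flat grid prior over mu stands in for a proper prior.
mu_grid = np.linspace(0.001, 0.5, 2000)
prior = np.ones_like(mu_grid)

def posterior(sizes):
    """Posterior over mu after observing the given collection sizes."""
    likelihood = np.ones_like(mu_grid)
    for x in sizes:
        likelihood *= np.exp(-x / mu_grid) / mu_grid
    post = prior * likelihood
    return post / post.sum()

def expected_count_above(sizes, threshold):
    """Posterior expectation of the number of collections above threshold.

    The box holds n marbles total, so there are roughly n / mu
    collections, of which a fraction exp(-threshold / mu) exceed the
    threshold under the exponential model.
    """
    post = posterior(sizes)
    return np.sum(post * (n / mu_grid) * np.exp(-threshold / mu_grid))

very_smart = [n / 100, n / 4, n / 4]      # two much larger collections
slightly_smart = [n / 100, n / 90, n / 90]  # two slightly larger ones

print(expected_count_above(very_smart, first))      # lower estimate
print(expected_count_above(slightly_smart, first))  # higher estimate
```

Under these assumptions the two very large draws pull the posterior mean collection size up and leave a lower expected number of collections above the human's size, while the two ~1⁄90 n draws leave a higher one, matching the verbal description above.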
There’s probably a misunderstanding. I’m sorry I haven’t explained it more clearly.
I meant that even if it’s just “a different set of assumptions” (from the mathematical point of view), it still may count as a different type of thinking if:
Your assumptions are different from the usual ones often enough.
You describe the world differently.
Your reasoning method is based on seeking different kinds of patterns in the world. You believe that such patterns are more informative.
This should count as a different epistemology. This is what I meant by “context”: it’s not about any particular example, it’s about the epistemology.
But maybe this difference in epistemology does lead to a difference in math eventually.
This updates you toward there being fewer marble collections greater than 1⁄100 n, because your estimate for the mean collection size has risen. Had you drawn two collections of size ~1⁄90 n instead, this would have updated you towards a higher estimate for the number of collections larger than the first.
This seems accurate: a little bit of extra smartness increases the probability; a lot of extra smartness decreases it.