It’s not supposed to be useful: it’s supposed to be true. Rationalists should not fool themselves about how accurate they are being.
or like the kind of thing I can indeed quantify.
Again, “can quantify” can only mean “can make a blind guess about”. They’re unknown unknowns.
To the extent this consideration changes my actions, for instance by making me advocate against AI regulation on the grounds that I could never be sufficiently sure about AI risk, it is implicitly arguing in favor of the set of worlds where my interests are better served by not having AI regulation.
I wasn’t implying anything specifically about AI regulation. I did state that the ordering of your preferences remains unaffected; you are repeating that back to me as though it were an objection.
It’s not supposed to be useful: it’s supposed to be true.
I asked for something useful in the first place. I don’t care about self-proclaimed useless versions of the statement “you may be wrong”: it has no use! And that is not what most people mean when they talk about Knightian uncertainty. I know that because they usually say Knightian uncertainty means you should do something different from what you would otherwise do.
You did, but you should not ignore inconvenient truths, whether or not you want to. Rationality is normative.
That is not what most people mean when they talk about Knightian uncertainty. I know that because they usually say Knightian uncertainty means you should do something different from what you would otherwise do.
To refine that a bit: KU affects the absolute probability of your hypotheses, but not their relative probability. So it shouldn’t affect what you believe, but it should affect how strongly you believe it. And how strongly you believe something can easily have an impact on what you do: for instance, you should not start WWIII unless you are very certain it is the best course of action.
You can incorporate KU into Bayes by reserving probability mass for stuff you don’t know or haven’t thought of, but you are not forced to. (As a corollary, Bayesians who offer very high probabilities aren’t doing so.) If the arguments for KU are correct, you should do that as well. Probability estimates are less useful than claims of certainty, but Bayesians use them because their epistemology tells them that certainty isn’t available; similarly, absolute probability should be abandoned if it is not available.
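The “reserve probability mass” move is easy to make concrete. A minimal sketch in Python, with invented hypothesis names and numbers purely for illustration: scale every known hypothesis by (1 - r), where r is the mass set aside for unknown unknowns. Absolute probabilities shrink, but the ratios between known hypotheses, and hence their ordering, are untouched.

```python
# Sketch: reserving probability mass for unknown unknowns.
# The hypotheses and numbers here are invented for illustration.

def reserve_mass(probs, reserved):
    """Scale a distribution over known hypotheses by (1 - reserved),
    setting the remaining `reserved` mass aside for hypotheses
    nobody has thought of yet."""
    return {h: p * (1 - reserved) for h, p in probs.items()}

known = {"H1": 0.6, "H2": 0.3, "H3": 0.1}
adjusted = reserve_mass(known, reserved=0.2)

# Absolute probabilities shrink: H1 drops from 0.6 to about 0.48,
# and the known hypotheses now sum to 0.8 rather than 1.0.
# Relative probabilities are untouched: H1 is still exactly twice
# as likely as H2, so the preference ordering survives.
assert adjusted["H1"] / adjusted["H2"] == known["H1"] / known["H2"]
```

Modeled this way, KU changes how strongly you believe things (the absolute numbers) without changing which of the known hypotheses you rank first.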
Who’s “they” … Cowan, or every Knightian ever?
I would rather not continue this conversation, have a good day.