The proposition that we should be able to reason about the priors of other agents is surely not contentious. The proposition that if I learn something new about my creation, I should update on that information, is also surely not contentious, although there might be some disagreement about what that update looks like.
In the case of genetics, if I learned that I’m genetically predisposed to being optimistic, then I would update my beliefs the same way I would update them if I had performed a calibration and found my estimates consistently too high. That is unless I’ve performed calibrations in the past and know myself to be well calibrated. In that case the genetic predisposition isn’t giving me any new information—I’ve already corrected for it. This, again, surely isn’t contentious.
Although I have no idea what this has to do with “species average”. Yes, I have no reason to believe that my priors are better than everybody else’s, but I also have no reason to believe that the “species average” is better than my current prior (there is also the problem that “species” is an arbitrarily chosen category).
But aside from that, I struggle to understand what, in simple terms, is being disagreed about here.
The proposition that we should be able to reason about the priors of other agents is surely not contentious. The proposition that if I learn something new about my creation, I should update on that information, is also surely not contentious, although there might be some disagreement about what that update looks like.
The form is the interesting thing here. By arguing for the common prior assumption, RH is giving an argument in favor of a form of modest epistemology, which Eliezer has recently written so much against.
In the case of genetics, if I learned that I’m genetically predisposed to being optimistic, then I would update my beliefs the same way I would update them if I had performed a calibration and found my estimates consistently too high. That is unless I’ve performed calibrations in the past and know myself to be well calibrated. In that case the genetic predisposition isn’t giving me any new information—I’ve already corrected for it. This, again, surely isn’t contentious.
In Eliezer’s view, because there are no universally compelling arguments and recursive justification has to hit bottom, you don’t give up your prior just because you see that there was bias in the process which created it—nothing can be totally justified by an infinite chain of unbiased steps anyway. This means, concretely, that you shouldn’t automatically take the “outside view” on the beliefs you have which others are most likely to consider crazy; their disbelief is little evidence, if you have a strong inside view reason why you can know better than them.
In RH’s view, honest truth-seeking agents with common priors should not knowingly disagree (citation: “Are Disagreements Honest?”). Since a failure of the common prior assumption entails disagreement about the origin of priors (thanks to the pre-rationality condition), and RH thinks disagreement about the origin of priors should rarely be relevant for disagreements about humans, RH thinks honest truth-seeking humans should not knowingly disagree.
I take it RH thinks some averaging should happen somewhere as a result of that, though I am not entirely sure. This would contradict Eliezer’s view.
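For context, here is my rough paraphrase of the condition under discussion (this is my gloss, not a quotation; see “Uncommon Priors Require Origin Disputes” for the exact statement): each agent $i$ has an ordinary prior $p_i$ and a pre-prior $\tilde{p}_i$ defined over an extended space that also describes which priors got assigned to whom, and pre-rationality requires, roughly,

$$\tilde{p}_i(A \mid p_1, \ldots, p_n) = p_i(A),$$

i.e. conditional on the actual assignment of priors, your pre-prior must agree with your own assigned prior.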
Although I have no idea what this has to do with “species average”. Yes, I have no reason to believe that my priors are better than everybody else’s, but I also have no reason to believe that the “species average” is better than my current prior (there is also the problem that “species” is an arbitrarily chosen category).
The wording in the paper makes me think RH was intending “species” as a line beyond which the argument might fail, not one he’s necessarily going to draw (IE, he might concede that his argument can’t support a common prior assumption with aliens, but he might not concede that).
I think he does take this to be reason to believe the species average is better than your current prior, to the extent they differ.
But aside from that, I struggle to understand what, in simple terms, is being disagreed about here.
I see several large disagreements.
Is the pre-rationality condition a true constraint on rationality? RH finds it plausible; WD does not. I am conflicted.
If the pre-rationality argument makes sense for common probabilities, does it then make sense for utilities? WD thinks so; RH thinks not.
Does pre-rationality imply a rational creator? WD thinks so; RH thinks not.
Does the pre-prior formalism make sense at all? Can rationality conditions stated with use of pre-priors have any force for ordinary agents who do not reason using pre-priors? I think not, though I think there is perhaps something to be salvaged out of it.
Does the common prior assumption make sense in practice?
Should honest truth-seeking humans knowingly disagree?
Does the common prior assumption make sense in practice?
I don’t know what “make sense” means. When I said “in simple terms”, I meant that I want to avoid that sort of vagueness. The disagreement should be empirical. It seems that we need to simulate an environment with a group of Bayesians with different priors, then somehow construct another group of Bayesians that satisfy the pre-rationality condition, and then the claim should be that the second group outperforms the first group in accuracy. But I don’t think I saw such claims in the paper explicitly. So I continue to be confused about what exactly the disagreement is about.
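To make the kind of comparison I have in mind concrete, here is a minimal sketch (entirely hypothetical, nothing from the paper): Bayesians with different priors about a coin’s bias observe flips and get scored on squared error. The step I do not know how to write down, which is exactly my confusion, is the construction of the second group that “satisfies the pre-rationality condition”.

```python
import random

random.seed(0)

def bayes_update(p_high, flip):
    """Posterior P(theta = 0.7) after one flip, for a coin whose bias theta
    is either 0.3 or 0.7, starting from the prior p_high = P(theta = 0.7)."""
    like_high = 0.7 if flip == "H" else 0.3   # P(flip | theta = 0.7)
    like_low = 0.3 if flip == "H" else 0.7    # P(flip | theta = 0.3)
    num = p_high * like_high
    return num / (num + (1.0 - p_high) * like_low)

def squared_error(prior, n_flips=20):
    """One trial: nature picks theta, the agent updates on n_flips flips,
    and the final belief that theta = 0.7 is scored against the truth."""
    theta = random.choice([0.3, 0.7])
    p = prior
    for _ in range(n_flips):
        flip = "H" if random.random() < theta else "T"
        p = bayes_update(p, flip)
    truth = 1.0 if theta == 0.7 else 0.0
    return (p - truth) ** 2

priors = [0.1, 0.3, 0.5, 0.7, 0.9]  # a population of Bayesians with different priors
for q in priors:
    mse = sum(squared_error(q) for _ in range(2000)) / 2000
    print(f"prior {q:.1f}: mean squared error {mse:.4f}")

# Missing step: construct a second population that satisfies the
# pre-rationality condition and show that it scores better here.
# I don't see how to write that step, which is my whole point.
```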
Should honest truth-seeking humans knowingly disagree?
Big question, I’m not going to make big claims here, though my intuition tends to say “yes”. Also, “should” is a bad word, I’m assuming that you’re referring to accuracy (as in my previous paragraph), but I’d like to see these things stated explicitly.
you don’t give up your prior just because you see that there was bias in the process which created it
Of course not. But you do modify it. What is RH suggesting?
Is the pre-rationality condition a true constraint on rationality? RH finds it plausible; WD does not. I am conflicted.
“True” is a bad word, I have no idea what it means.
If the pre-rationality argument makes sense for common probabilities, does it then make sense for utilities? WD thinks so; RH thinks not.
RH gives a reasonable argument here, and I don’t see much of a reason why we would do this to utilities in the first place.
Does pre-rationality imply a rational creator? WD thinks so; RH thinks not.
I see literally nothing in the paper to suggest anything about this. I don’t know what WD is talking about.
I don’t know what “make sense” means. When I said “in simple terms”, I meant that I want to avoid that sort of vagueness. The disagreement should be empirical. It seems that we need to simulate an environment with a group of Bayesians with different priors, then somehow construct another group of Bayesians that satisfy the pre-rationality condition, and then the claim should be that the second group outperforms the first group in accuracy. But I don’t think I saw such claims in the paper explicitly. So I continue to be confused about what exactly the disagreement is about.
Ah, well, I think the “should honest truth-seeking humans knowingly disagree” is the practical form of this for RH.
(Or, even more practically, practices around such disagreements. Should a persistent disagreement be a sign of dishonesty? (RH says yes.) Should we drop beliefs which others persistently disagree with? Et cetera.)
Big question, I’m not going to make big claims here, though my intuition tends to say “yes”.
Then (according to RH), you have to deal with RH’s arguments to the contrary. Specifically, his paper is claiming that you have to have some origin disputes about other people’s priors.
Although that’s not why I’m grappling with his argument. I’m not sure whether rational truth-seekers should persistently disagree, but I’m very curious about some things going on in RH’s argument.
Also, “should” is a bad word, I’m assuming that you’re referring to accuracy (as in my previous paragraph), but I’d like to see these things stated explicitly.
I think “should” is a good word to sometimes taboo (IE taboo quickly if there seems to be any problem), but I don’t see that it needs to be an always-taboo word.
“True” is a bad word, I have no idea what it means.
I’m also pretty unclear on what it could possibly mean here, but nonetheless think it is worth debating. Not only is there the usual problem of spelling out what it means for something to be a constraint on rationality, but now there’s an extra weird thing going on with setting up pre-priors which aren’t probabilities which you use to make any decisions.
I see literally nothing in the paper to suggest anything about this. I don’t know what WD is talking about.
I think I know what WD is talking about, but I agree it isn’t what RH is really trying to say.
the usual problem of spelling out what it means for something to be a constraint on rationality
Is that a problem? What’s wrong with “believing true things”, or, more precisely, “winning bets”? (obviously, these need to be prefixed with “usually” and “across many possible universes”). If I’m being naive and these don’t work, then I’d love to hear about it.
But if they do work, then I really want to see how the idea of pre-rationality is supposed to help me believe more true things and win more bets. I legitimately don’t understand how it would.
Should honest truth-seeking humans knowingly disagree?
My intuition says “yes” in large part due to the word “humans”. I’m not certain whether two perfect Bayesians should disagree, for some unrealistic sense of “perfect”, but even if they shouldn’t, it is not clear that this would also apply to more limited agents.
Is that a problem? What’s wrong with “believing true things”, or, more precisely, “winning bets”?
Winning bets is not literally the same thing as believing true things, nor is it the same thing as having accurate beliefs, or being rational. I think Dutch-book arguments are … not exactly mistaken, but misleading, for this reason. It is not true that the only reason to have probabilistically coherent beliefs is to avoid reliably losing bets. If that were the case, we could throw rationality out the window whenever bets aren’t involved. I think betting is both a helpful illustrative thought experiment (Dutch books illustrate irrationality) and a helpful tool for practicing rationality, but not synonymous with rationality.
“Believing true things” is problematic for several reasons. First, that is apparently entirely focused on epistemic rationality, excluding instrumental rationality. Second, although there are many practical cases where it isn’t a problem, there is a question of what “true” means, especially for high-level beliefs about things like tables and chairs, which are more like conceptual clusters than objective realities. Third, even setting those aside, it is hard to see how we can get from “believing true things” to Bayes’ Law and other rules of probabilistic reasoning. I would argue that a solid connection between “believe true things” and rationality constraints of classical logic can be made, but probabilistic reasoning requires an additional insight about what kind of thing can be a rationality constraint: you don’t just have beliefs, you have degrees of belief. We can say things about why degrees of belief might be better or worse, but to do so requires a notion of quality of belief which goes beyond truth alone; you are not maximizing the expected amount of truth or anything like that.
Another possible answer, which you didn’t name but which might have been named, would be “rationality is about winning”. Something important is meant by this, but the idea is still vague—it helps point toward things that do look like potential rationality constraints and away from things which can’t serve as rationality constraints, but it is not the end of the story of what we might possibly mean by calling something a constraint of rationality.
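One standard way to make “quality of a degree of belief beyond truth alone” precise is a proper scoring rule; a minimal illustration (mine, not something stated above): under the Brier score, if $A$ is true with probability $q$ and you report $r$, your expected penalty is

$$q\,(1-r)^2 + (1-q)\,r^2,$$

which is minimized exactly at $r = q$ (set the derivative $-2q(1-r) + 2(1-q)r = 2(r-q)$ to zero). So there is a well-defined sense in which some degrees of belief are better than others, and it is not “maximize the expected amount of truth”.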
My intuition says “yes” in large part due to the word “humans”. I’m not certain whether two perfect Bayesians should disagree, for some unrealistic sense of “perfect”, but even if they shouldn’t, it is not clear that this would also apply to more limited agents.
Most of my probability mass is on you being right here, but I find RH’s arguments to the contrary intriguing. It’s not so much that I’m engaging with them in the expectation that I’ll change my mind about whether honest truth seeking humans can knowingly disagree. (Actually I think I should have said “can” all along rather than “should” now that I think about it more!) I do, however, expect something about the structure of those disagreements can be understood more thoroughly. If ideal Bayesians always agree, that could mean understanding the ways that Bayesian assumptions break down for humans. If ideal Bayesians need not agree, it might mean understanding that better.
I really want to see how the idea of pre-rationality is supposed to help me believe more true things and win more bets. I legitimately don’t understand how it would.
I think I can understand this one, to some extent. Supposing that some version of pre-rationality does work out, and if I, hypothetically, understood pre-rationality extremely well (better than RH’s paper explains it)… I would expect more insights into at least one of the following:
agents reasoning about their own prior (what is the structure of the reasoning? to what extent can an agent approve of, or not approve of, its own prior? are there things which can make an agent decide its own prior is bad? what must an agent believe about the process which created its prior? What should an agent do if it discovers that the process which created its prior was biased, or systematically not truth-seeking, or otherwise ‘problematic’?)
common knowledge of beliefs (is it realistic for beliefs to be common knowledge? when? are there more things to say about the structure of common knowledge, which help reconcile the usual assumption that an agent knows its own prior with the paradoxes of self-reference which prevent agents from knowing themselves so well?)
what it means for an agent to have a prior (how do we designate a special belief-state to call the prior, for realistic agents? can we do so at all in the face of logical uncertainty? is it better to just think in terms of a sequence of belief states, with some being relatively prior to others? can we make good models of agents who are becoming rational as they are learning, such that they lack an initial perfectly rational prior?)
reaching agreement with other agents (by an Aumann-like process or otherwise; by bringing in origin disputes or otherwise)
reasoning about one’s own origins (especially in the sense of justification structures; endorsing or not endorsing the way one’s beliefs were constructed or the way those beliefs became what they are more generally).
Winning bets is not literally the same thing as believing true things, nor is it the same thing as having accurate beliefs, or being rational.
They are not the same, but that’s ok. You asked about constraints on, not definitions of rationality. This may not be an exhaustive list, but if someone has an idea about rationality that translates neither into winning some hypothetical bets, nor into having even slightly more accurate beliefs about anything, then I can confidently say that I’m not interested.
(Of course this is not to say that an idea that has no such applications has literally zero value)
Supposing that some version of pre-rationality does work out, and if I, hypothetically, understood pre-rationality extremely well (better than RH’s paper explains it)… I would expect more insights into at least one of the following: <...>
I completely agree that if RH was right, and if you understood him well, then you would receive multiple benefits, most of which could translate into winning hypothetical bets, and into having more accurate beliefs about many things. But that’s just the usual effect of learning, and not because you would satisfy the pre-rationality condition.
I continue to not understand in what precise way the agent that satisfies the pre-rationality condition is (claimed to be) superior to the agent that doesn’t. To be fair, this could be a hard question, and even if we don’t immediately see the benefit, that doesn’t mean that there is no benefit. But still, I’m quite suspicious. In my view this is the single most important question, and it’s weird to me that I don’t see it explicitly addressed.
What is being disagreed about is whether you should update to the species average. If the optimism is about a topic that can’t be easily tested, then all the calibration shows is that your estimates are higher than the species average, not that they are too high in an absolute sense.
Then the question is entirely about whether we expect the species average to be a good predictor. If there is an evolutionary pressure for the species to have correct beliefs about a topic, then we probably should update to the species average (this may depend on some assumptions about how evolution works). But if a topic isn’t easily tested, then there probably isn’t a strong pressure for it.
Another example: let’s replace “species average” with “prediction market price”. Then we should agree that updating our prior makes sense, because we expect prediction markets to be efficient, in many cases. But if we’re talking about “species average”, it seems very dubious that it’s a reliable predictor. At least, the claim that we should update to the species average depends on many assumptions.
Of course, in the usual Bayesian framework, we don’t update to the species average. We only observe the species average as evidence and then update towards it, by some amount. It sounds like Hanson wants to leave no trace of the original prior, though, which is a bit weird.
Of course, in the usual Bayesian framework, we don’t update to the species average. We only observe the species average as evidence and then update towards it, by some amount. It sounds like Hanson wants to leave no trace of the original prior, though, which is a bit weird.
Actually, as Wei Dai explained here, the usual Bayesian picture is even weaker than that. You observe the species average and then you update somehow, whether that be toward it, away from it, or even to the same number you already had. Even if the whole species is composed of perfect Bayesians, Aumann Agreement does not mean you just update toward each other until you agree; what the proof actually implies is that you dance around in a potentially quite convoluted way until you agree. So, there’s no special reason to suppose that Bayesians should update toward each other in a single round of updating on each other’s views.
The idea (as far as I understand it) is supposed to be something like: if you don’t think there is an evolutionary pressure for the species you are in to have correct beliefs about a topic, then why do you trust your own beliefs? To the degree that your own beliefs are trustworthy, it is because there is such an evolutionary pressure. Thus, switching to the species average just reduces the noise while not compromising the source of trustworthiness.
If there is no pressure on the species, then I don’t particularly trust either the species average or my own prior. They are both very much questionable. So, why should I switch from one questionable prior to another? It is a wasted motion.
Consider an example. Let there be N interesting propositions we want to have accurate beliefs about. Suppose that every person, at birth, rolls a six-sided die N times and then for every proposition prop_i they set the prior P(prop_i) = dice_roll_i/10. And now you seem to be saying that for me to set P(prop_i) = 0.35 (which is the species average) is in some way better? More accurate, presumably? Because that’s the only case where switching would make sense.
And now you seem to be saying that for me to set P(prop_i) = 0.35 (which is the species average) is in some way better? More accurate, presumably?
If you have no other information, it does reduce variance, while keeping bias the same. This reduces expected squared error, due to the bias-variance tradeoff.
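Spelling that out (the standard decomposition, nothing specific to this discussion): if the truth value is $t$ and your prior $X$ is drawn at random from the population,

$$\mathbb{E}\big[(X - t)^2\big] = \big(\mathbb{E}[X] - t\big)^2 + \mathrm{Var}(X),$$

so replacing your draw $X$ by the population mean $\mathbb{E}[X]$ keeps the first term (the bias) and drops the second (the variance).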
Eliezer explicitly argues that this is not a good argument for averaging your opinions with a crowd. However, I don’t like his argument there very much. He argues that squared error is not necessarily the right notion of error, and provides an alternative error function as an example where you escape the conclusion.
However, he relies on giving a nonconvex error function. It seems to me that most of the time, the error function will be convex in practice, as shown in A Pragmatist’s Guide to Epistemic Utility by Ben Levinstein.
I think what this means is that given only the two options, averaging your beliefs with those of other people is better than doing nothing at all. However, both are worse than a Bayesian update.
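As a quick numeric check against the dice example above (a throwaway computation of my own; the conclusion holds whichever way the truth comes out):

```python
die_priors = [d / 10 for d in range(1, 7)]   # 0.1 ... 0.6, as in the dice example
avg = sum(die_priors) / len(die_priors)      # 0.35, the "species average"

for truth in (0.0, 1.0):                     # check both possible truth values
    mse_individual = sum((p - truth) ** 2 for p in die_priors) / len(die_priors)
    mse_average = (avg - truth) ** 2
    print(f"truth={truth:.0f}: individual MSE {mse_individual:.4f}, "
          f"average-prior MSE {mse_average:.4f}")
# truth=0: individual 0.1517 vs average 0.1225
# truth=1: individual 0.4517 vs average 0.4225
```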
I am having trouble cashing out your example in concrete terms; what kind of propositions could behave like that? More importantly, why would they behave like that?
I realize I said something wrong in my previous comment: evolutionary pressure is not the only kind of reason that someone might think their / their species’ beliefs may be trustworthy. For example, you might think that evolutionary pressure causes beliefs to become more accurate when they are about topics relevant to survival/reproduction, and that the uniformity of logic means that the kind of mind that is good at having accurate beliefs on such topics is also somewhat good at having accurate beliefs on other topics. But if you really think that there is NO reason at all that you might have accurate beliefs on a given topic, it seems to me that you do not have beliefs about that topic at all.
But if you really think that there is NO reason at all that you might have accurate beliefs on a given topic, it seems to me that you do not have beliefs about that topic at all.
This doesn’t seem true to me.
First, you need to assign probabilities in order to coherently make decisions under uncertainty, even if the probabilities are totally made up. It’s not because the probabilities are informative; it’s because if your decisions can’t be justified by any probability distribution, then you’re leaving money on the table somewhere with respect to your own preferences (see the toy example at the end of this comment).
Second, recursive justification must hit bottom somewhere. At some point you have to assume something if you’re going to prove anything. So, there has to be a base of beliefs which you can’t provide justification for without relying on those beliefs themselves.
Perhaps you didn’t mean to exclude circular justification, so the recursive-justification-hits-bottom thing doesn’t contradict what you were saying. However, I think the first point stands; you sometimes want beliefs (any beliefs at all!) as opposed to no beliefs, even when there is no reason to expect their accuracy.
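Here is the toy example referred to above (my own illustration, not something from the thread): an agent whose betting prices cannot come from any single probability distribution can be sold a bundle of bets that loses money no matter what happens.

```python
def net_payoff(bundle, a_happens):
    """Total payoff of a set of bets; each bet pays its stake if it wins
    and costs its price either way."""
    total = 0.0
    for (bet_on_a, price, stake) in bundle:
        wins = a_happens if bet_on_a else not a_happens
        total += (stake if wins else 0.0) - price
    return total

# An agent who prices "pays 1 if A" at 0.6 AND "pays 1 if not-A" at 0.6 has
# implied probabilities summing to 1.2 -- no single distribution does that.
# It considers both bets fair, so it will buy both.
bundle = [(True, 0.6, 1.0), (False, 0.6, 1.0)]

for a_happens in (True, False):
    print(f"A is {a_happens}: agent's net payoff = {net_payoff(bundle, a_happens):+.2f}")
# Prints -0.20 in both cases: a guaranteed loss.
```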
I certainly didn’t mean to exclude circular justification: we know that evolution is true because of the empirical and theoretical evidence, which relies on us being able to trust our senses and reasoning, and the reason we can mostly trust our senses and reasoning is because evolution puts some pressure on organisms to have good senses and reasoning.
Maybe what you are saying is useful for an AI but for humans I think the concept of “I don’t have a belief about that” is more useful than making up a number with absolutely no justification just so that you won’t get Dutch booked. I think evolution deals with Dutch books in other ways (like making us reluctant to gamble) and so it’s not necessary to deal with that issue explicitly most of the time.
I agree. The concept of “belief” comes apart into different notions in such cases; like, we might explicitly say “I don’t have a belief about that” and we might internally be unable to summon any arguments one way or another, but we might find ourselves making decisions nonetheless.
I do think this is somewhat relevant for humans rather than only AI, though. If we find ourselves paralyzed and unable to act because we are unable to form a belief, we will end up doing nothing, which in many cases will be worse than things we would have done had we assigned any probability at all. Needing to make decisions is a more powerful justification for needing probabilities than Dutch books are.
I am having trouble cashing out your example in concrete terms; what kind of propositions could behave like that? More importantly, why would they behave like that?
The propositions aren’t doing anything. The dice rolls represent genetic variation (the algorithm could be less convoluted, but it felt appropriate). The propositions can be anything from “earth is flat”, to “I will win a lottery”. Your beliefs about these propositions depend on your initial priors, and the premise is that these can depend on your genes.
For example, you might think that evolutionary pressure causes beliefs to become more accurate when they are about topics relevant to survival/reproduction, and that the uniformity of logic means that the kind of mind that is good at having accurate beliefs on such topics is also somewhat good at having accurate beliefs on other topics.
Sure, there are reasons why we might expect the “species average” predictions not to be too bad. But there are better groups. E.g. we would surely improve the quality of our predictions if, while taking the average, we ignored the toddlers, the senile and the insane. We would improve even more if we only averaged the well educated. And if I myself am an educated and sane adult, then I can reasonably expect to outperform the “species average”, even taking your consideration into account.
But if you really think that there is NO reason at all that you might have accurate beliefs on a given topic, it seems to me that you do not have beliefs about that topic at all.
If I know nothing about a topic, then I have my priors. That’s what priors are. To “not have beliefs” is not a valid option in this context. If I ask you for a prediction, you should be able to say something (e.g. “0.5”).
I think the species average belief for both “earth is flat” and “I will win a lottery” is much less than 0.35. That is why I am confused about your example.
I think Hanson would agree that you have to take a weighted average, and that toddlers should be weighted less highly. But toddlers should agree that they should be weighted less highly, since they know that they do not know much about the world.
If the topic is “Is xzxq kskw?” then it seems reasonable to say that you have no beliefs at all. I would rather say that than say that the probability is 0.5. If the topic is something that is meaningful to you, then the way that the proposition gets its meaning should presumably also let you estimate its likelihood, in a way that bears some relation to accuracy.
I think the species average belief for both “earth is flat” and “I will win a lottery” is much less than 0.35. That is why I am confused about your example.
Feel free to take more contentious propositions, like “there is no god” or “I should switch in Monty Hall”. But, also, you seem to be talking about current beliefs, and Hanson is talking about genetic predispositions, which can be modeled as beliefs at birth. If my initial prior, before I saw any evidence, was P(earth is flat)=0.6, that doesn’t mean I still believe that the earth is flat. It only means that my posterior is slightly higher than that of someone who saw the same evidence but started with a lower prior.
Anyway, my entire point is that if you take many garbage predictions and average them out, you’re not getting anything better than what you started with. Averaging only makes sense with additional assumptions. Those assumptions may sometimes be true in practice, but I don’t see them stated in Hanson’s paper.
I think Hanson would agree that you have to take a weighted average
No, I don’t think weighing makes sense in Hanson’s framework of pre-agents.
But toddlers should agree that they should be weighted less highly, since they know that they do not know much about the world.
No, idiots don’t always know that they’re idiots. An idiot who doesn’t know it is called a “crackpot”. There are plenty of those. Toddlers are also surely often overconfident, though I don’t think there is a word for that.
If the topic is “Is xzxq kskw?” then it seems reasonable to say that you have no beliefs at all.
When modeling humans as Bayesians, “having no beliefs” doesn’t type check. A prior is a function from propositions to probabilities and “I don’t know” is not a probability. You could perhaps say that “Is xzxq kskw?” is not a valid proposition. But I’m not sure why bother. I don’t see how this is relevant to Hanson’s paper.
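To illustrate the type-check point, a toy sketch of mine (not anyone’s actual model):

```python
from typing import Callable

Proposition = str
Prior = Callable[[Proposition], float]  # every proposition must get a probability

def my_prior(prop: Proposition) -> float:
    if prop == "Is xzxq kskw?":
        return None  # "I don't have a belief" -- not a float, so this fails the type check
    return 0.5
```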
P(earth is flat)=0.6 isn’t a garbage prediction, since it lets people update to something reasonable after seeing the appropriate evidence. It doesn’t incorporate all the evidence, but that’s a prior for you.
I think God and Monty Hall are both interesting examples. In particular, Monty Hall is interesting because so many professional mathematicians got the wrong answer for it, and God is interesting because people disagree as to who the relevant experts are, as well as what epistemological framework is appropriate for evaluating such a proposition. I don’t think I can give you a good answer to either of them (and just to be clear, I never said that I agreed with Hanson’s point of view).
Maybe you’re right that xzxq is not relevant to Hanson’s paper.
Regarding weighting, Hanson’s paper doesn’t talk about averaging at all, so the question of whether that averaging would be weighted doesn’t really arise. But the idea that all agents would update to a (weighted) species-average belief is an obvious candidate for an explanation for why their posteriors would agree. I realize my previous comments may have obscured this distinction; sorry about that.
P(earth is flat)=0.6 isn’t a garbage prediction, since it lets people update to something reasonable after seeing the appropriate evidence.
What is a garbage prediction then? P=0 and P=1? When I said “garbage”, I meant that it has no relation to the real world; it’s about as good as rolling a die to choose a probability.
P(I will win the lottery) = 0.6 is a garbage prediction.
Why? Are there no conceivable lotteries with that probability of winning? (There are, e.g. if I bought multiple tickets). Is there no evidence that we could see in order to update this prediction? (There is, e.g. the number of tickets sold, the outcomes of past lotteries, etc). I continue to not understand what standard of “garbage” you’re using.
So, I guess it depends on exactly how far back you want to go when erasing your background knowledge to try to form the concept of a prior. I was assuming you still knew something about the structure of the problem, i.e. that there would be a bunch of tickets sold, that you have only bought one, etc. But you’re right that you could recategorize those as evidence in which case the proper prior wouldn’t depend on them.
If you take this to the extreme you could say that the prior for every sentence should be the same, because the minimum amount of knowledge you could have about a sentence is just “There is a sentence”. You could then treat all facts about the number of words in the sentence, the instances in which you have observed people using those words, etc. as observations to be updated on.
It is tempting to say that the prior for every sentence should be 0.5 in this case (in which case a “garbage prediction” would just be one that is sufficiently far away from 0.5 on a log-odds scale), but it is not so clear that a “randomly chosen” sentence (whatever that means) has a 0.5 probability of being true. If by a “randomly chosen” sentence we mean the kinds of sentences that people are likely to say, then estimating the probability of such a sentence requires all of the background knowledge that we have, and we are left with the same problem.
Maybe all of this is an irrelevant digression. After rereading your previous comments, it occurs to me that maybe I should put it this way: After updating, you have a bunch of people who all have a small probability for “the earth is flat”, but they may have slightly different probabilities due to different genetic predispositions. Are you saying that you don’t think averaging makes sense here? There is no issue with the predictions being garbage, we both agree that they are not garbage. The question is just whether to average them.
I was assuming you still knew something about the structure of the problem, i.e. that there would be a bunch of tickets sold, that you have only bought one, etc.
If you’ve already observed all the possible evidence, then your prediction is not a “prior” any more, in any sense of the word. Also, both total tickets sold and the number of tickets someone bought are variables. If I know that there is a lottery in the real world, I don’t usually know how many tickets they really sold (or will sell), and I’m usually allowed to buy more than one (although it’s hard for me to not know how many I have).
After updating, you have a bunch of people who all have a small probability for “the earth is flat”, but they may have slightly different probabilities due to different genetic predispositions. Are you saying that you don’t think averaging makes sense here?
I think that Hanson wants to average before updating. Although if everyone is a perfect bayesian and saw the same evidence, then maybe there isn’t a huge difference between averaging before or after the update.
Either way, my position is that averaging is not justified without additional assumptions. Though I’m not saying that averaging is necessarily harmful either.
If you are doing a log-odds average then it doesn’t matter whether you do it before or after updating.
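To spell out why (assuming, as in the scenario above, that everyone updates on the same evidence $E$ with the same likelihoods): writing the log-odds as $\ell_i = \log\frac{p_i}{1-p_i}$, Bayes’ rule becomes

$$\ell_i' = \ell_i + \log\frac{P(E \mid H)}{P(E \mid \lnot H)},$$

and the added term is the same for every agent, so averaging the $\ell_i$ and then adding it gives the same result as adding it and then averaging the $\ell_i'$.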
Like I pointed out in my previous comment, the question “how much evidence have I observed / taken into account?” is a continuous one with no obvious “minimum” answer. The answer “I know that a bunch of tickets will be sold, and that I will only buy a few” seems to me not to be a “maximum” answer either, so beliefs based on it seem reasonable to call a “prior”, even if under some framings they are a posterior. Though really it is pointless to talk about what is a prior if we don’t have some specific set of observations in mind that we want our prior to be prior to.