I thought it was that they should reach the same conclusions. Both might increase or decrease their confidence in a proposition, in which case at most one of them would be moving toward the other’s starting position.
Right, this is another reason that Aumann is not a good argument for what people use it for here. One can actually construct situations where they might radically alter their estimates in either direction depending on their priors and estimates. Most of the time it is used here, people seem to imply that the parties should update in each other’s directions. There are good reasons for that, but they aren’t Aumann.
Related to this, I don’t know of anything in the literature that talks about almost-Bayesians with any useful precision, but I strongly suspect that if the notion can be made precise, one will be able to show that under reasonable assumptions the right thing to do is, more often than not, to update towards each other. This follows from something like the averaging trick for estimating statistics, but I don’t know how to make it more precise.
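To illustrate the averaging intuition, here is a minimal numerical sketch (my own toy setup, not anything from the literature): two independent, unbiased, equally noisy estimates of the same quantity, where averaging them roughly halves the mean squared error:

```python
import random

random.seed(0)  # fixed seed for reproducibility
true_value = 10.0
n_trials = 100_000

sq_err_single = 0.0
sq_err_avg = 0.0
for _ in range(n_trials):
    # Two independent, unbiased, equally noisy estimates of the same quantity.
    a = true_value + random.gauss(0, 1)
    b = true_value + random.gauss(0, 1)
    sq_err_single += (a - true_value) ** 2
    sq_err_avg += ((a + b) / 2 - true_value) ** 2

mse_single = sq_err_single / n_trials
mse_avg = sq_err_avg / n_trials
print(mse_single, mse_avg)  # averaging roughly halves the mean squared error
```

This is of course far weaker than anything about how almost-Bayesians should behave, but it is the kind of variance-reduction effect the "averaging trick" points at.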
I wonder if one could take Bayesian learning systems with different sets of training data, or something similar, and then see how they should update on the full set. It might be interesting to do this in a Monte Carlo setting, to see empirically how Bayesians should generally move for reasonable distributions.
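A minimal version of that experiment might look like the following sketch (hypothetical coin-flip data and a discrete grid of biases standing in for "training data"; the numbers are mine, chosen only for illustration): two Bayesian learners share a prior, each updates on its own data, and one then checks where the posterior on the pooled data lands relative to the two individual posteriors:

```python
def posterior(prior, data):
    """Update a discrete prior over coin biases on a sequence of flips (1 = heads)."""
    post = dict(prior)
    for flip in data:
        post = {p: w * (p if flip else 1 - p) for p, w in post.items()}
    total = sum(post.values())
    return {p: w / total for p, w in post.items()}

def mean(dist):
    """Posterior mean of the coin's bias."""
    return sum(p * w for p, w in dist.items())

# Shared prior: uniform over a grid of possible biases.
biases = [i / 10 for i in range(1, 10)]
prior = {p: 1 / len(biases) for p in biases}

# Hypothetical "training data" for the two learners (1 = heads, 0 = tails).
data_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # 8 heads: pulls the estimate up
data_b = [0, 0, 1, 0, 0, 1, 0, 0, 0, 1]  # 3 heads: pulls the estimate down

est_a = mean(posterior(prior, data_a))
est_b = mean(posterior(prior, data_b))
est_pooled = mean(posterior(prior, data_a + data_b))
print(est_a, est_b, est_pooled)  # pooled estimate lands between the other two
```

In this particular setup the pooled posterior mean falls between the two individual estimates, which is the "update towards each other" behavior; a real Monte Carlo study would vary the prior and the data distributions to see how often that holds.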
One can actually construct situations where they might radically alter their estimates in either direction depending on their priors and estimates.
In which case it wouldn’t be proper to invoke Aumann’s Agreement; the two parties would be working with non-identical priors.
I invoked it in our conversation because you keep insisting that you’ve seen the same materials and generally agree with me on all points of the possibility of longevity augmentation and the nature of physics, chemistry, and biology.
That you and I are not “perfect Bayesians” is irrelevant to the condition of Aumann’s Agreement being applicable to our situation. The distance we have from such ‘perfection’ should be mappable to the distance we have from agreeing with the ‘ideally rational analysis’ of our strongly overlapping set of priors.
The trouble is, we seem to be diametrically opposed in this conversation. Which means, necessarily, that one of us has strayed rather far from that ideal. Which is a needlessly-complicated way of saying exactly what I’d already said.
In which case it wouldn’t be proper to invoke Aumann’s Agreement; the two parties would be working with non-identical priors.
Huh? No. I mean you can construct prior sets that will result in them moving radically in one direction or another. Exercise: For any epsilon > 0, there is a set of priors and a hypothesis space containing a hypothesis H such that one can construct two Bayesians who start off with P(H) < epsilon and, after updating, have P(H) > 1 − epsilon.
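To make the exercise concrete, here is one instance with epsilon = 0.05 (toy numbers of my own; this models only the evidence-combination step, not the full common-knowledge exchange, which in a properly constructed example lets each party infer the other’s evidence): both parties share a tiny prior on H, each holds private evidence that alone leaves P(H) below epsilon, yet the combined evidence pushes P(H) above 1 − epsilon:

```python
def odds_to_prob(odds):
    """Convert odds in favor of H to a probability."""
    return odds / (1 + odds)

eps = 0.05
prior_odds = 1e-6   # shared prior: P(H) is about one in a million
lr_alice = 1e4      # likelihood ratio of Alice's private evidence for H
lr_bob = 1e4        # likelihood ratio of Bob's private evidence for H

# Each party alone: still well below eps.
p_alice = odds_to_prob(prior_odds * lr_alice)
p_bob = odds_to_prob(prior_odds * lr_bob)

# Both pieces of (conditionally independent) evidence combined: above 1 - eps.
p_both = odds_to_prob(prior_odds * lr_alice * lr_bob)
print(p_alice, p_bob, p_both)
```

The point is that likelihood ratios multiply, so two pieces of evidence that are individually unimpressive relative to a very small prior can jointly be decisive.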
I invoked it in our conversation because you keep insisting that you’ve seen the same materials and generally agree with me on all points of the possibility of longevity augmentation and the nature of physics, chemistry, and biology.
Well, we agree on the major things that pretty much everyone on LW would agree on. We seem to agree on slightly more than that. But on other issues, especially meta issues like “how good in general are humans at estimating technological progress” we do seem to disagree. We also apparently disagree on how likely it is for a large number of scientists and engineers to focus on an area and not succeed in that area. The fusion power example I gave earlier seems relevant in that context.
That you and I are not “perfect Bayesians” is irrelevant to the condition of Aumann’s Agreement being applicable to our situation. The distance we have from such ‘perfection’ should be mappable to the distance we have from agreeing with the ‘ideally rational analysis’ of our strongly overlapping set of priors.
This is not obvious to me. I think it is probably true for near-perfect Bayesians by some reasonable metric, but I’m not aware of a metric to measure how Bayesian non-Bayesians are. Moreover, there are many theorems in many areas where, if one weakens the premises a small amount, the results don’t just fail but fail catastrophically. Without a better understanding of the underlying material I don’t think I can judge whether this is correct. My impression (very much not my area of math) is that very little has been done on how imperfect Bayesians should or will behave.
The trouble is, we seem to be diametrically opposed in this conversation.
That seems like an extreme phrasing. The disagreement in question is whether substantial life extension in the next fifty years is merely likely, or so likely that all conceivable world-lines from here which don’t result in civilization collapsing lead to substantial life extension in the next fifty years. Diametrically opposed I think would be something like your view v. the view that life-extension is definitely not going to happen in the next fifty years barring something like aliens coming down and giving us advanced technology or some sort of Friendly AGI going foom.
See also lessdazed’s remark about the difficulty humans have of exchanging all relevant information.
In which case it wouldn’t be proper to invoke Aumann’s Agreement; the two parties would be working with non-identical priors.
Huh? No. I mean you can construct prior sets that will result in them moving radically in one direction or another. Exercise: For any epsilon > 0, there is a set of priors and a hypothesis space containing a hypothesis H such that one can construct two Bayesians who start off with P(H) < epsilon and, after updating, have P(H) > 1 − epsilon.
If and only if their priors do not match one another. That’s the whole point of Aumann’s Agreement Theorem. For which there are proofs.
But on other issues, especially meta issues like “how good in general are humans at estimating technological progress” we do seem to disagree.
Not significantly. It is my belief that humans are especially poor at this.
We also apparently disagree on how likely it is for a large number of scientists and engineers to focus on an area and not succeed in that area. The fusion power example I gave earlier seems relevant in that context.
I honestly do not recall having seen it, but these threads have gotten rather larger than my available attention span and/or recall capacity in general anyhow. That being said, the fusion power problem is actually a very good example of this. The overwhelming majority of the endeavor in that field has gone into Tokamak-style fusion power generation. There is broad acceptance of the idea that once the materials science has reached a specific state, Tokamak reactors will achieve a “power-out” state. There is far less agreement in fusion energy research on just about every other approach, and those approaches have received marginal attention.
Contrastingly, there are a wide array of available solutions to practical antiagapics, and many of them have had basic demonstrations of viability of the underlying ‘dependent’ technologies. (Caloric restriction is documented, we have genetic samples of populations of various longevities to analyze for potential pharmaceuticals, we can already clone tissues and have a tool that allows the assembly of tissues into organs, there is a steady and progressive history of ever-more-successful non-biological implants, etc.) This renders antiagapics a significantly different sort of endeavor than fusion. In fusion research, the “throw everything at the wall and see what sticks” approach simply hasn’t been used. And that is what I have espoused as the source of my confidence in my assertion.
Diametrically opposed I think would be something like your view v. the view that life-extension is definitely not going to happen in the next fifty years barring something like aliens coming down and giving us advanced technology or some sort of Friendly AGI going foom.
We’re in a binary state here. I assert P(X) ≈ 1 is true; you assert this is false. These positions are diametrical / binary opposites.
See also lessdazed’s remark about the difficulty humans have of exchanging all relevant information.
Exchanging relevant information is exceedingly difficult when both parties can relate the same statements of fact and cite the same materials as necessarily resulting in opposing conclusions.
I don’t think this conversation’s going to go anywhere further at this point, by the way, so this is going to be the last comment I make in this particular thread.
No. I mean you can construct prior sets that will result in them moving radically in one direction or another. Exercise: For any epsilon > 0, there is a set of priors and a hypothesis space containing a hypothesis H such that one can construct two Bayesians who start off with P(H) < epsilon and, after updating, have P(H) > 1 − epsilon.
If and only if their priors do not match one another. That’s the whole point of Aumann’s Agreement Theorem. For which there are proofs.
I’m not sure what you mean here. For any epsilon > 0 I can start with two Bayesians who share the same priors and have different estimates for different statements (before their probabilities become common knowledge), and a hypothesis H such that before the updating P(H) < epsilon for both, and after the updating P(H) > 1 − epsilon for both. I’m not at all sure why you think I need two sets of priors to pull this off. Nothing in this contradicts Aumann’s theorem.
Also, you are wrong about Aumann’s theorem. It isn’t an iff; the implication goes only one way. You can start off with Bayesians who have different priors and who after updating get the same posteriors. Aumann is simply talking about the case where they have the same priors; it says nothing about what happens if they have different priors. In fact, there are weaker theorems about limiting behavior in certain contexts where the priors disagree: as long as the priors aren’t too pathological, the parties come to agree as the number of observations increases.
A toy example that may help here:
Assume that there is a coin. Alice and Bob have different priors about this coin. Alice assigns a 25% chance that the coin is fair, a 20% chance that it always turns up heads, a 25% chance that it always turns up tails, and a 30% chance that it turns up heads 2/3rds of the time. Bob assigns a 20% chance that the coin is fair, a 25% chance that it always turns up heads, a 25% chance that it always turns up tails, and a 30% chance that it turns up heads 2/3rds of the time. Now, first consider what happens if the first flip turns up heads. Bob and Alice will now both assign zero probability to the possibility that the coin always turns up tails; they now agree on that possibility. Furthermore, assume they keep flipping the coin and observing the results. Then it isn’t hard to see that, as long as the coin actually is one of the four options, Alice and Bob will agree in the limit. And you can explicitly state with what probability you should expect any given degree of disagreement between them.
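The example can be checked directly. The sketch below takes Bob’s weights as 20%, 25%, 25%, and 30% so that they sum to 1, and replaces random flips of a fair coin with a deterministic alternating sequence so the numbers are reproducible:

```python
def posterior(prior, flips):
    """prior: dict mapping P(heads) -> weight; flips: 1 = heads, 0 = tails."""
    total = sum(prior.values())
    post = {p: w / total for p, w in prior.items()}  # normalize the stated weights
    for f in flips:
        post = {p: w * (p if f else 1 - p) for p, w in post.items()}
    z = sum(post.values())
    return {p: w / z for p, w in post.items()}

# The four hypotheses: fair, always heads, always tails, heads 2/3 of the time.
FAIR, HEADS, TAILS, BIASED = 0.5, 1.0, 0.0, 2 / 3

alice_prior = {FAIR: 0.25, HEADS: 0.20, TAILS: 0.25, BIASED: 0.30}
bob_prior = {FAIR: 0.20, HEADS: 0.25, TAILS: 0.25, BIASED: 0.30}

# A single head rules out "always tails" for both of them.
a1 = posterior(alice_prior, [1])
b1 = posterior(bob_prior, [1])
print(a1[TAILS], b1[TAILS])  # 0.0 0.0

# 100 flips of an actually fair coin (alternating here, for determinism):
# both posteriors concentrate on "fair", so Alice and Bob converge.
flips = [1, 0] * 50
a = posterior(alice_prior, flips)
b = posterior(bob_prior, flips)
print(a[FAIR], b[FAIR])
```

After the 100 flips both parties put well over 99% on "fair" despite having started from different priors, which is the limiting agreement described above.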
I honestly do not recall having seen it
This suggests to me that you may not be paying that much attention to what others (especially I) have written in reply to you. It may therefore make sense to go back and reread the thread when you have time.
That being said, the fusion power problem is actually a very good example of this. The overwhelming majority of the endeavor in that field has gone into Tokamak-style fusion power generation
This seems to be a valid point. To some extent the fusion approach doesn’t look like the approach in anti-aging research, in so far as most fusion research has focused on a single method. But I’m not convinced that this argument is that strong. There’s also been a fair bit of research into laser-confinement fusion, for example. And before it became apparent that they could not be made efficient enough to provide power, Farnsworth-style fusors were also researched heavily. Industry still researches scaling and making Farnsworth fusors more efficient, because they make very nice portable neutron generators that one can turn on and off. So while the majority of the research funding has gone to toroidal magnetic confinement, a very large amount of money has been put into other types. It is only as a percentage of the total that the amount looks small.
We’re in a binary state here. I assert P(X) ≈ 1 is true; you assert this is false. These positions are diametrical / binary opposites.
By this definition any two people who disagree about a probability estimate are diametrically opposed. This does not seem like a good definition if one wants to capture the common intuition behind the term. Certainly, in contrast, I don’t think that if you told someone in the general public that “this person thinks that life extension is likely in the next fifty years and this other person considers it a near certainty” they would describe this as diametric opposition.
See also lessdazed’s remark about the difficulty humans have of exchanging all relevant information.
Exchanging relevant information is exceedingly difficult when both parties can relate the same statements of fact and cite the same materials as necessarily resulting in opposing conclusions.
Well, yes. That’s part of the problem. Humans have massive amounts of information that they’ve moved into their background processing. I have pretty decent intuition for certain classes of mathematical problems. But that’s from accumulated experience. I can pretty reliably make conjectures about those classes of problems. But I can’t point explicitly to what is causing me to do so. It is possible that we have differing background sets of data that are impacting our processing at a base level.
In my opinion, that humans aren’t Bayesians is much less of a problem than that humans can’t share all their information.
That seems like a very valid point. Humans keep all sorts of reference classes and prior experience in the back of their minds. Much of it probably is so far back that making it conscious or even realizing which prior data is informing the conclusions could be quite difficult. At some level when people speak of intuition about things like math problems they are talking about precisely this sort of thing.