This should have been two posts. First, “My house is spooking me, what’s up with that? Spooky things, amirite?”. Second, “There’s some distinct information content between experiencing an event and being told about someone else experiencing that event. Therefore AAT doesn’t work, amirite?”
To the second point, I haven’t read any write-ups of AAT. Does it say you have to speak messages that have the same evidential impact on others as your accumulated experience has on you? That sounds like a pretty terrible theorem. I thought there was some wiggle room where each agent could regard the other as an imperfect updater whose words are nevertheless somewhat informative about reality, and from that they eventually reach close posteriors, given priors that were close once upon a time. So the end effect is like an evidence swap, but it works even if you can’t quite convey the messages you want, so long as your statements are informative about your present belief state at each round of communication and you can each update a bit.
Here’s how AAT works:
1. You have two perfect Bayesian agents.
2. They have “common knowledge” of each other’s rationality. (This means that they’re rational, they each know that the other is rational, they each know that the other knows this, and so on.)
3. They each have the same prior for some proposition A.
4. They then each get some evidence. What evidence each one gets depends on the agent, so they end up with different beliefs.
5. They then communicate their current beliefs to each other. (They don’t communicate anything about the evidence they saw or their reasoning. They only trade one number: the probability they currently assign to A.)
6. Because the other agent’s stated belief tells them something about what that agent saw, they must each update their own beliefs.

Theorem: after repeating this exchange enough times, they end up with the same beliefs. (A toy simulation of the loop follows below.)
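To make the announce-and-update loop concrete, here is a minimal sketch in Python of a toy setup of my own devising (the coin, the sample sizes, and the one-round shortcut are all assumptions for illustration, not part of the theorem): a coin’s bias is either 0.25 or 0.75 with a common 50/50 prior, each agent privately sees some flips, and they trade only their posteriors.

```python
import math

# Common prior over the two bias hypotheses (toy numbers, my choice).
PRIOR = {0.25: 0.5, 0.75: 0.5}

def posterior_high(heads, flips):
    """P(bias = 0.75) after privately seeing `heads` heads in `flips` flips."""
    like = {t: t**heads * (1 - t) ** (flips - heads) for t in PRIOR}
    evidence = sum(PRIOR[t] * like[t] for t in PRIOR)
    return PRIOR[0.75] * like[0.75] / evidence

def invert(announced, flips):
    """Recover the head count behind an announced posterior.

    This works because the posterior is strictly increasing in the head
    count, so in this toy the announcement leaks the private data exactly.
    """
    for h in range(flips + 1):
        if math.isclose(posterior_high(h, flips), announced):
            return h
    raise ValueError("announcement inconsistent with the model")

# Each agent knows how many flips the other saw (part of the shared setup
# here), but not the outcomes.
n1, heads1 = 6, 5  # agent 1 privately sees 5 heads in 6 flips
n2, heads2 = 6, 1  # agent 2 privately sees 1 head in 6 flips

p1 = posterior_high(heads1, n1)  # ~0.988: agent 1 leans "biased high"
p2 = posterior_high(heads2, n2)  # ~0.012: agent 2 leans "biased low"

# One exchange suffices in this toy: each agent inverts the other's
# announcement, pools the head counts, and both land on the same posterior.
pooled = posterior_high(invert(p1, n1) + invert(p2, n2), n1 + n2)
print(p1, p2, pooled)  # ~0.988, ~0.012, 0.5 for both agents
```

The agreement-in-one-round is an artifact of this toy, since the announced number pins down the whole private signal; in general the theorem only says that iterating the loop eventually makes the posteriors common knowledge, and hence equal.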
I think the first two of these hypotheses pretty clearly don’t hold in this context; all of the uncertainty that I subjectively feel comes from not trusting that you are rational. If I heard someone close to me say something like this, my first instinct would be to think of them as less rational, since that seems a more likely explanation than the one they’ve given.
However, there are a small number of people who, I feel, could come to me with this evidence and present it in a way that would convince me.
So at least my introspection says that bullet point 2 is the failing hypothesis, and that correcting for it (by having the evidence come from people I trust more) will actually result in more updating. This seems consistent with your post, since people generally trust themselves the most.
Is there some point at which AAT suggests that people are disagreeing because they have different experiences, and something needs to be checked on?
My example is a time when I was with people who were arguing about how hot the hot and sour soup was, and eventually some sampling established that one side of the table had been given hotter soup.
This is an easy case, of course: everyone’s nervous systems were similarly calibrated for capsaicin.
The other comment is maybe less in the spirit of yours, so here’s a more direct reply:
If different agents continually communicate their evidence to one another, and keep getting different evidence that draws their beliefs apart, the simplest belief for them to end up with is that they are in different reference classes. I think this comes down to the specific evidence and updates, and isn’t really relevant to AAT.
As an example, it is easy for me to believe that my friend is allergic to peanuts and still eat peanuts myself. We both eat a mystery food, independently, then talk about our experiences. He went to the hospital, and I thought it was tasty. We both conclude the food had peanuts; we can completely Aumann despite our different experiences.
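If it helps, here is the peanut story as a two-step Bayes computation in Python. All the numbers (the 0.3 prior and the likelihoods) are made up by me for illustration; the point is just that one agent’s evidence can be strong and the other’s nearly worthless, and pooling still works.

```python
# H = "the mystery food contained peanuts"; made-up prior.
prior = 0.3

# Made-up likelihoods under each person's own circumstances: the allergic
# friend landing in the hospital is strong evidence for H; my finding the
# food tasty says essentially nothing (peanuts taste fine to me).
p_hospital_given_h, p_hospital_given_not_h = 0.9, 0.02  # friend (allergic)
p_tasty_given_h, p_tasty_given_not_h = 0.8, 0.8         # me (not allergic)

def update(p, like_h, like_not_h):
    """One Bayes update: P(H | e) from P(H) and the two likelihoods."""
    return p * like_h / (p * like_h + (1 - p) * like_not_h)

# Pool both observations, treating them as independent given H.
p = update(prior, p_hospital_given_h, p_hospital_given_not_h)
p = update(p, p_tasty_given_h, p_tasty_given_not_h)
print(p)  # ~0.95: we agree on "peanuts" despite opposite experiences
```

My friend’s trip to the hospital does all the work; my data point has a likelihood ratio of one and moves nothing, but nothing about our different experiences blocks the shared conclusion.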
I read your question as: “When do circumstances become different enough that evidence from one situation doesn’t apply to the other?”, and that sounds like the fundamental question of reference class tennis, which I believe does not have a good answer.