Future Filters [draft]
See Katja Grace’s article: http://hplusmagazine.com/2011/05/13/anthropic-principles-and-existential-risks/
There are two comments I want to make about the above article.
First: the resolution to God’s Coin Toss seems fairly straightforward. I argue that the following scenario is formally equivalent to ‘God’s Coin Toss’:
“Dr. Evil’s Machine”
Dr. Evil has a factory for making clones. The factory has 1000 separate identical rooms. Every day, a clone is produced in each room at 9:00 AM. However, there is a 50% chance of malfunction, in which case 900 of the clones suddenly die by 9:30 AM; the remaining 100 are healthy and notice nothing. At the end of the day, Dr. Evil ships off all the clones that were produced and restores the rooms to their original state.
You wake up at 10:00 AM and learn that you are one of the clones produced in Dr. Evil’s factory, and you learn all of the information above. What is the probability that the machine malfunctioned today?
In this reformulation, the answer follows directly from Bayes’ rule. Let P(M) be the probability of malfunction, and P(S) the probability that you are alive at 10:00 AM. From the information given, we have
P(M) = 1/2
P(~M) = 1/2
P(S|M) = 1/10
P(S|~M) = 1
Therefore,
P(S) = P(S|M)P(M) + P(S|~M)P(~M) = (1/2)(1/10) + (1/2)(1) = 11/20
P(M|S) = P(S|M)P(M)/P(S) = (1/20)/(11/20) = 1/11
That is, given the information you have, you should conclude that the probability that the machine malfunctioned is 1/11.
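As a sanity check, here is the same calculation in Python (a minimal sketch; the function and parameter names are my own, and exact fractions are used only to keep the arithmetic readable):

```python
from fractions import Fraction

def posterior_malfunction(p_m=Fraction(1, 2),
                          p_s_given_m=Fraction(1, 10),
                          p_s_given_not_m=Fraction(1, 1)):
    """Return P(M|S): the probability the machine malfunctioned,
    given that you find yourself alive at 10:00 AM."""
    p_not_m = 1 - p_m
    # Total probability of being alive at 10:00 AM.
    p_s = p_s_given_m * p_m + p_s_given_not_m * p_not_m
    # Bayes' rule.
    return p_s_given_m * p_m / p_s

print(posterior_malfunction())  # 1/11
```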
The second comment concerns Grace’s reasoning about future filters.
I will assume that the following model is a fair representation of Grace’s argument about relative probabilities for the first and second filters.
Future Filter Model I
Given: universe with N planets, T time steps. Intelligent life can arise on a planet at most once.
At each time step:
each surviving intelligent species becomes permanently visible to all other species with probability c (the third filter probability)
each surviving intelligent species self-destructs with probability b (the second filter probability)
each virgin planet produces an intelligent species with probability a (the first filter probability)
Suppose N=one billion, T=one million. Put uniform priors on a, b, c, and the current time t (an integer between 1 and T).
Your species appeared on your planet at an unknown time step t_0. The current time t is also unknown. At the current time, no species has become permanently visible in the universe. Conditioned on this information, what is the posterior density for the first filter parameter a?
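As one illustration of “doing the math”, here is a rough Monte Carlo sketch in Python. Everything specific in it is my own assumption rather than part of the model above: the within-step ordering of events, the toy-sized N and T (the stated values are far too large to simulate naively), conditioning at the final step rather than at an unknown current time, and the fact that it updates only on “no species is visible” while ignoring the observation that our own species exists, which is exactly the kind of choice the comments below argue about.

```python
import random

def simulate_universe(a, b, c, n_planets, n_steps):
    """Run one universe; return True if no species ever became permanently visible."""
    virgin = n_planets   # planets that have never produced intelligent life
    alive = 0            # currently surviving, not-yet-visible intelligent species
    for _ in range(n_steps):
        # Each surviving species becomes permanently visible with probability c.
        if any(random.random() < c for _ in range(alive)):
            return False
        # Each surviving species self-destructs with probability b.
        alive -= sum(random.random() < b for _ in range(alive))
        # Each virgin planet produces an intelligent species with probability a.
        new = sum(random.random() < a for _ in range(virgin))
        virgin -= new
        alive += new
    return True

def posterior_a_samples(n_kept=200, n_planets=20, n_steps=10):
    """Rejection sampling: draw (a, b, c) from the uniform prior and keep a
    only when the simulated universe matches the 'nobody visible' observation."""
    kept = []
    while len(kept) < n_kept:
        a, b, c = random.random(), random.random(), random.random()
        if simulate_universe(a, b, c, n_planets, n_steps):
            kept.append(a)
    return kept

samples = posterior_a_samples()
print("posterior mean of a given 'nobody visible':", sum(samples) / len(samples))
```

Rejection sampling is used purely for clarity: draw (a, b, c) from the prior, keep a run only if it matches the observation, and the kept values of a are samples from the corresponding posterior.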
But the relevant event structure is not clear. It’s easy to do the math, but it’s not clear which math should be done. The discussions of Sleeping Beauty a few months back (I think it was) should’ve made it clear that there is little point in postulating probabilities (cousin_it might have a citation ready, I remember he made this point a few times), because it’s mostly a dispute about definitions (of random variables, etc.).
Instead, one should consider specific decision problems and ask about what decisions should be made. Figuring out the decisions might even involve calculating probabilities, but these would be introduced for a clear purpose, so that it’s not merely a matter of definitions and there’s actually a right answer, in the context of a particular method for solving a particular decision problem. While solving different decision problems, we might even encounter different “contradictory” probabilities associated with the same verbal specifications of events.
Considering it as a decision problem is a particular side in the definition/axiom dispute, a side that also corresponds to requiring that the probabilities be frequencies; i.e., if you use the other definitions, the probabilities will not be frequencies. So I think the resolution to Sleeping Beauty is even stronger: there is a right side, and a right way to go about the problem.
Considering what as a decision problem? As formulated, we are not given one.
Exactly! :P
Assigning constant rewards for correct answers can be compared with assigning constant rewards to each person at the end of the experiment, and these options are (I think) isomorphic to the two ways to look at the problem through probability—the fact that the choice seems more intuitive through the lens of decision theory is a fact about our brains, not the problem.
You’ve just shifted the definitional debate to deciding which decision problem to use, which was not my suggestion.
But I claim it is an inevitable consequence of your suggestion, since the same sort of arguments that might be made about which way to calculate the probability can be made about which utility problem to solve, if you’re doing the same math. Or, put another way, you can take the decision-theory result and use it to calculate the rational probabilities, so any stance on using decision theory is a stance on probabilities (if the rewards are fixed).
I think the problem just looks so obvious to us when we use decision theory that we don’t connect it to the non-obvious-seeming dispute over probabilities.
Again, I didn’t suggest trying to reformulate a problem as a decision problem as a way of figuring out which probability to assign. Probability-assignment is not an interesting game. My point was that if you want to understand a problem, understand what’s going on in a given situation, consider some decision problems and try to solve them, instead of pointlessly debating which probabilities to assign (or which decision problems to solve).
Oh, so you don’t think that viewing it as a decision problem clarifies it? Then choosing a decision problem to help answer the question doesn’t seem any more helpful than “make your own decision on the probability problem,” since they’re the same math. This then veers toward the even-more-unhelpful “don’t ask the question.”
It’s not intended to help with answering the question, any more than dissolving any other definitional debate helps with determining which definition is the better one. It’s intended to help with understanding the thought experiment instead.
Changing the labels on the same math isn’t “dissolving” anything, as it would be if probabilities were like the word “sound.” “Sound” goes away when dissolved because it’s subjective, and dissolving switches to objective language. Probabilities are uniquely derivable from objective language. Additionally, there is no “unaskable question,” at least in typical probability theory; you’d have to propose a fairly extreme revision to get a relevant decision theory answer to not bear on the question of probabilities.
Future Filter Model I strikingly resembles a hidden Markov model, in which each stage of the hidden chain is a filter and the “observables” are the detectable traces of a civilization...
I trailed off at the end of the post because I came up with a different model: http://lesswrong.com/lw/5q1/colonization_models_a_programming_tutorial_part_12/
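To make the hidden-Markov-model observation above concrete, here is one possible per-planet formulation (a sketch; the within-step event ordering is my own assumption and is not spelled out in the post):

```python
# Per-planet hidden Markov chain for Future Filter Model I.
# Hidden states: 0 = virgin planet, 1 = intelligent species alive (not yet visible),
#                2 = self-destructed, 3 = permanently visible.
# The only observation is whether the planet hosts a visible civilization.

def transition_matrix(a, b, c):
    """Row i, column j gives P(next state = j | current state = i).
    Assumes a living species' visibility check happens before its
    self-destruction check within a time step (my assumption)."""
    return [
        [1 - a, a,                 0.0,         0.0],  # virgin: produces life with prob a
        [0.0,   (1 - c) * (1 - b), (1 - c) * b, c  ],  # alive: visible (c), else may self-destruct (b)
        [0.0,   0.0,               1.0,         0.0],  # self-destructed: absorbing
        [0.0,   0.0,               0.0,         1.0],  # permanently visible: absorbing
    ]

def emission(state):
    """Observation model: we only see whether a planet's civilization is visible."""
    return "visible" if state == 3 else "not visible"
```

Each row sums to one, the last two states are absorbing, and the full model is just N such chains run in parallel, with the observation in the post being that every chain currently emits “not visible”.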