This was my take after going through a similar analysis (with apples, not paperclips) at the SIAI summer intern program.
It seems promising that several people are converging on the same “updateless” idea. But sometimes I wonder why it took so long, if it’s really the right idea, given the amount of brainpower spent on this issue. (Take a look at http://www.anthropic-principle.com/profiles.html and consider that Nick Bostrom wrote “Investigations into the Doomsday Argument” in 1996 and then did his whole Ph.D. on anthropic reasoning, culminating in a book published in 2002.)
BTW, weren’t the SIAI summer interns supposed to try to write one LessWrong post a week (or was it a month)? What happened to that plan?
People are crazy, the world is mad. Also inventing basic math is a hell of a lot harder than reading it in a textbook afterward.
I suppose you’re referring to the fact that we are “designed” by evolution. But why did evolution create a species that invented the number field sieve (to give a random piece of non-basic math) before UDT? It doesn’t make any sense.
In what sense is it “hard”? I don’t think it’s hard in a computational sense, like NP-hard. Or is it? I guess it goes back to the question of “what algorithm are we using to solve these types of problems?”
No, I’m referring to the fact that people are crazy and the world is mad. You don’t need to reach so hard for an explanation of why no one’s invented UDT yet when many-worlds wasn’t invented for thirty years.
I also don’t think general madness is enough of an explanation. Both are counterintuitive ideas in areas that lack well-established methods of verifying progress, such as building a working machine or applying standard mathematical proof techniques.
The OB/LW/SL4/TOElist/polymathlist group is one intellectual community drawing on similar prior work that hasn’t been broadly disseminated.
The same arguments apply with much greater force to the causal decision theory vs evidential decision theory debate.
The interns wound up more focused on their group projects. As it happens, I had told Katja Grace that I was going to write up a post showing the difference between UDT and SIA (using my apples example which is isomorphic with the example above), but in light of this post it seems needless.
UDT is basically the bare definition of reflective consistency: it is a non-solution, just a statement of the problem in constructive form. UDT says that you should think exactly the same way as the “original” you thinks, which guarantees that the original you won’t be disappointed in your decisions (reflective consistency). It only looks good in comparison to other theories that fail this particular requirement but are otherwise much more meaningful in their domains of application.
TDT fails reflective consistency in general, but offers a correct solution in a domain that is larger than those of other practically useful decision theories, while retaining their expressivity/efficiency (i.e. updating on graphical models).
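The reflective-consistency point can be made concrete with a standard toy case, counterfactual mugging (my own illustration, not anything from this exchange; the function name and numbers are made up):

```python
# Counterfactual mugging, a standard toy case: Omega flips a fair
# coin. On heads it asks you for $100; on tails it pays you $10,000
# iff you would have paid on heads.

def expected_value(pay_on_heads: bool) -> float:
    """Value of a fixed policy, evaluated from the prior -- the
    'original you' perspective that an updateless agent optimizes."""
    heads_payoff = -100 if pay_on_heads else 0
    tails_payoff = 10_000 if pay_on_heads else 0
    return 0.5 * heads_payoff + 0.5 * tails_payoff

assert expected_value(True) == 4950.0   # the policy the original you endorses
assert expected_value(False) == 0.0     # what an agent that updates on "heads" does
```

An agent that updates on seeing heads refuses to pay (locally, -100 looks worse than 0), and so disappoints its prior self; an updateless agent simply executes the policy its original self would have chosen.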
What prior work are you referring to, that hasn’t been broadly disseminated?
I think much less brainpower has been spent on CDT vs EDT, since that’s thought of as more of a technical issue that only professional decision theorists are interested in. Likewise, Newcomb’s problem is usually seen as an intellectual curiosity of little practical use. (At least that’s what I thought until I saw Eliezer’s posts about the potential link between it and AI cooperation.)
Anthropic reasoning, on the other hand, is widely known and discussed (I remember the Doomsday Argument being brought up during a casual lunch-time conversation at Microsoft), and is thought to be both interesting in itself and to have important applications in physics.
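For readers who haven’t seen the CDT-vs-EDT split on Newcomb’s problem spelled out, it is easy to compute (a sketch with my own toy numbers; the 99% predictor accuracy is an assumption for illustration):

```python
# Newcomb's problem: box A holds $1,000; opaque box B holds
# $1,000,000 iff the predictor foresaw one-boxing.
ACCURACY = 0.99  # assumed predictor accuracy, for illustration

def edt_value(one_box: bool) -> float:
    # EDT conditions on the action: one-boxing is strong evidence
    # that box B was filled.
    if one_box:
        return ACCURACY * 1_000_000
    return (1 - ACCURACY) * 1_001_000 + ACCURACY * 1_000

def cdt_value(one_box: bool, box_b_filled: bool) -> int:
    # CDT holds the already-fixed box contents constant and asks
    # only what the act itself causes.
    payoff = 1_000_000 if box_b_filled else 0
    if not one_box:
        payoff += 1_000  # taking box A always adds $1,000
    return payoff

# EDT prefers one-boxing...
assert edt_value(True) > edt_value(False)
# ...while for CDT two-boxing dominates in every fixed state:
assert cdt_value(False, True) > cdt_value(True, True)
assert cdt_value(False, False) > cdt_value(True, False)
```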
I miss the articles they would have written. :) Maybe post the topic ideas here and let others have a shot at them?
“What prior work are you referring to, that hasn’t been broadly disseminated?”
I’m thinking of the corpus of past posts on those lists, which bring certain tools and concepts (Solomonoff Induction, anthropic reasoning, Pearl, etc) jointly to readers’ attention. When those tools are combined and focused on the same problem, different forum participants will tend to use them in similar ways.
You might think that more top-notch economists and game theorists would have addressed Newcomb/TDT/Hofstadter superrationality given their interest in the Prisoner’s Dilemma.
Looking at the actual literature on the Doomsday argument, some physicists are involved (just as some economists and others have tried their hands at Newcomb), but it seems to be mostly philosophers. And anthropics doesn’t seem core to professional success; e.g. Tegmark can indulge in it a bit thanks to his track record in ‘hard’ areas of cosmology.
I just realized/remembered that one reason that others haven’t found the TDT/UDT solutions to Newcomb/anthropic reasoning may be that they were assuming a fixed human nature, whereas we’re assuming an AI capable of self-modification. For example, economists are certainly more interested in answering “What would human beings do in PD?” than “What should AIs do in PD assuming they know each other’s source code?” And perhaps some of the anthropic thinkers (in the list I linked to earlier) did invent something like UDT, but then thought “Human beings can never practice this, I need to keep looking.”
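The “AIs that know each other’s source code” setting can be sketched in a few lines (a hypothetical toy of my own; the names CLIQUE_BOT and DEFECT_BOT are made up):

```python
# Programs are just strings; a program returns "C" (cooperate) or
# "D" (defect) after inspecting its opponent's source code.
CLIQUE_BOT = 'lambda opponent: "C" if opponent == CLIQUE_BOT else "D"'
DEFECT_BOT = 'lambda opponent: "D"'

def run(program_source: str, opponent_source: str) -> str:
    # eval with the named sources in scope, so a program can compare
    # its opponent's code against known programs.
    env = {"CLIQUE_BOT": CLIQUE_BOT, "DEFECT_BOT": DEFECT_BOT}
    program = eval(program_source, env)
    return program(opponent_source)

# Two copies of CLIQUE_BOT recognize each other and cooperate,
# while a defector is met with defection (no exploitation):
assert run(CLIQUE_BOT, CLIQUE_BOT) == "C"
assert run(CLIQUE_BOT, DEFECT_BOT) == "D"
assert run(DEFECT_BOT, CLIQUE_BOT) == "D"
```

A human can’t condition on an exact copy of itself this way, which is one reason the question looks different for self-modifying AIs than for fixed human natures.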
This post is an argument against voting on your updated probability when there is a selection effect such as this. It applies to any evidence (marbles, existence, etc.), but only in a specific situation, so it has little to do with SIA, which is about whether you update on your own existence to begin with in any situation. Do you have arguments against that?
It’s for situations in which different hypotheses all predict that there will be beings subjectively indistinguishable from you, which covers the most interesting anthropic problems in my view. I’ll make some posts distinguishing SIA, SSA, and UDT and exploring their relationships when I’m a bit less busy.
Are you saying this problem arises in all situations where multiple beings across multiple hypotheses make the same observations? That would suggest we can’t update on evidence most of the time, so I think I must be misunderstanding you. Subjectively indistinguishable beings arise in virtually all probabilistic reasoning: if there were only one hypothesis containing one creature like you, everything would already be certain.
The only interesting problem in anthropics I know of is whether to update on your own existence or not. I haven’t heard a good argument for not (though I still have a few promising papers to read), so I am very interested if you have one. Will ‘exploring their relationships’ include this?
You can judge for yourself at the time.
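The disagreement above, whether or not to update on your own existence (SIA), can be made concrete with a toy coin-toss model (my own illustration, not the marble example from the post):

```python
from fractions import Fraction

# Toy setup: a fair coin is tossed; heads creates one observer,
# tails creates two, all subjectively indistinguishable.
PRIOR = {"heads": Fraction(1, 2), "tails": Fraction(1, 2)}
OBSERVERS = {"heads": 1, "tails": 2}

def sia(world: str) -> Fraction:
    """SIA: update on your own existence by weighting each world
    by how many observers like you it contains."""
    total = sum(PRIOR[w] * OBSERVERS[w] for w in PRIOR)
    return PRIOR[world] * OBSERVERS[world] / total

def no_existence_update(world: str) -> Fraction:
    """The rival view: every world here predicts at least one
    observer like you, so your existence is no evidence; keep
    the prior."""
    return PRIOR[world]

assert sia("tails") == Fraction(2, 3)          # existence favors observer-rich worlds
assert no_existence_update("tails") == Fraction(1, 2)
```

The two views only come apart because the hypotheses differ in how many observers like you they contain; that is the whole question at issue.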