I don’t think this should be downvoted (was at −2 when I wrote this). I generally downvote only those things that are either trolling or that will have the same effect because the discussion will spiral in horribly non-truthy directions. This comment seems to me like it is a legitimate question, with a legitimate answer, that may lead to productive plans for the future.
The question is legitimate because UDT/TDT/etc. are concepts that are basically novel to LW, but that we would hope could be developed to the point of practical utility if LW is going to shine in some way other than being a popularizing mechanism for academic research.
The answer follows from the same thinking that explains why the question is legitimate.
With classical decision theory you’re basically just trying to figure out the costs and benefits of multiple options with some uncertainty mixed in. Then you pick the one that adds up to “the best thing”. The formal math basically exhorts you to perform numerical multi-pronged due diligence, which most people don’t do for most things but which “decision theory” can encourage them to do in a structured way. It gives them a research keyword for finding lots of advice about methodology, and it lets them know that being meticulous and planful about important choices is honestly likely to lead to better outcomes in some circumstances (where the value of information can even be estimated, no less).
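The "numerical multi-pronged due diligence" that classical decision theory exhorts can be sketched in a few lines. This is a minimal toy illustration, not anyone's actual donation analysis; every option name and number below is made up for the example.

```python
# Classical expected-utility due diligence, in miniature:
# enumerate options, attach probability-weighted outcomes,
# and pick the option with the highest expected value.
# All options and payoffs here are hypothetical placeholders.

options = {
    # option name -> list of (probability, payoff) outcome pairs
    "donate_to_charity_A": [(0.7, 100.0), (0.3, 10.0)],
    "donate_to_charity_B": [(0.5, 150.0), (0.5, 20.0)],
    "do_nothing": [(1.0, 0.0)],
}

def expected_value(outcomes):
    """Probability-weighted sum of payoffs."""
    return sum(p * payoff for p, payoff in outcomes)

best = max(options, key=lambda name: expected_value(options[name]))
for name, outcomes in options.items():
    print(f"{name}: EV = {expected_value(outcomes):.1f}")
print("best option:", best)
```

The point is that the "algorithm" here is performable on paper by anyone: list options, estimate probabilities and payoffs, multiply, add, compare.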
Just getting that slight improvement in process might end up making a huge difference because the sanity waterline really is that low.
But at the present time, I don’t have any clue what practical steps someone would perform if they were going to implement either TDT or UDT for practical personal or philanthropic benefit. Look up dollar values? Try to find probabilistic estimates of impact? But that stuff is simply the traditional way to go about such things and doesn’t require one to invoke a new decision theory, and so the concept doesn’t pay rent in utility if deploying it just means “doing the same thing other theories with better known names already endorsed”.
So to add in the new aspects of our novel decision theory we’re supposed to… um… look up the source code of voters? And then somehow emulate it? If the state of the world is uncertain do we need to run different emulations and average them to factor out the uncertainty? And do we completely ignore the other data that it seemed like basic prudence to gather, or do we integrate it with a formula somehow? On a spreadsheet? Using some python script we hack up? Are we going to have to hire a programmer with monte-carlo simulation experience to figure this out? Do we need to rent some super-computer time?
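To make the "run different emulations and average them" step concrete, here is a toy Monte Carlo sketch. To be clear, this is not TDT or UDT proper; it only illustrates the averaging-over-uncertain-world-states idea raised above, and the payoff model, probabilities, and function names are all hypothetical placeholders.

```python
import random

# Toy sketch: model each uncertain world state as an input to a payoff
# function, sample states according to my beliefs, and average the
# payoffs of a decision across samples. Every number here is made up.

random.seed(0)

def payoff(world_is_favorable, i_vote):
    # Hypothetical model: voting costs 1 either way, and pays off 10
    # only in a favorable world (e.g., correlated voters also turn out).
    cost = 1 if i_vote else 0
    benefit = 10 if (world_is_favorable and i_vote) else 0
    return benefit - cost

def monte_carlo_value(i_vote, p_favorable=0.4, n=10_000):
    """Average payoff of a decision over n sampled world states."""
    total = 0.0
    for _ in range(n):
        world = random.random() < p_favorable
        total += payoff(world, i_vote)
    return total / n

print("EV(vote):      ", monte_carlo_value(True))
print("EV(don't vote):", monte_carlo_value(False))
```

Even this crude sketch shows the open question: the sampling and averaging is ordinary expected-utility machinery, and it is not obvious which part the *novel* decision theory would change.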
It’s just not clear what “mental algorithm” should be performed if we’re going to gainfully apply a novel decision theory of the sort we’ve been kicking around on LW, completely leaving aside having an algorithm we can actually perform on paper the way we can calculate stock valuations using simple accounting rules to find the starting point and playing with estimates of the growth of total cash versus total stock.
And since “perfect is the enemy of good enough”, “good enough is the enemy of at all”, and optimal philanthropy efforts don’t seem to have been paying attention to the arena of politics basically at all (probably due to mindkiller processes that people in this community can appreciate), Carl’s approach seems to me like the best way to get at low-hanging fruit.
In the meantime, I think it would be really cool to find practical applications of UDT/TDT to real world situations (like, say, answering the question “Does this dress make me look fat?” or figuring out when and how to fire someone you like but who is doing a terrible job) that can be worked out as “examples for the textbook”, showing how to take a given “word problem” through a data gathering stage and into an algorithmic “plug and chug” stage.
Until TDT/UDT has been spelled out in such brutal simplicity that I can read about it, practice until I acquire the theory-grounded ability to notice when I’m doing well or poorly and judge myself to be doing it well, and then teach a patient 12-year-old with an IQ of 120 to implement it on paper for a test case, it seems unrealistic to expect the theory to be applied by Carl to philanthropic political donations.
That was a beautiful, useful, and tactful exposition of the basic reasons I refrained from reversing the downvotes. Along with not voting, I didn’t say “it sounds a bit unfair and self-aggrandizing to expect Carl to use a decision theory you’ve invented, but have not actually explicitly described;” because that’s minimally useful, except in contrast to your essay.
Ah, I think I see. You subscribe to an author oriented theory of moderation with strong emphasis on moral reward/punishment… and you don’t have a lot of charity here...
I generally just try to promote interesting content. I downvote into negative territory only when I expect content to be a net negative in terms of epistemic hygiene, as when people go into conflict mode and stop sympathizing with their conversational partners. Trolling, spamming, and sometimes even just content “below our sanity waterline” deserve negative votes in my book. This seems like none of those things.
Honest mistakes growing out of inferential distance (i.e., assuming that because Eliezer understands his own theory and wrote about it in public, everyone can now deploy it), that are related to deeply worthwhile content given the big picture (i.e., that TDT is an early draft of an optimization algorithm that might be able to recursively and safely self-improve around morally fraught issues in a self-correcting way), seem like the kind of thing that needs more attention rather than less. So I vote up in order to increase the attention it receives instead of down to express moral displeasure.
Every so often I see something that seems to have been downvoted where I can plausibly imagine that the content was interesting, but there seem to be inferential distance problems between the voters and the commenter and I see if I can bridge the distance. Seeing as the comment is down to −3 now, I seem to be failing in this case. But maybe this will bring it up? :-)
Seeing as the comment is down to −3 now, I seem to be failing in this case.
You might have been more successful if you had avoided telling people how they should vote, particularly with wording implying prohibition. Stating your own decision, reasoning, and preference without presumption generally makes me more inclined to accommodate.
I might have voted up the comment in question if Eliezer had included even a single-sentence description of what difference using TDT instead of Lame DT would have made to the question of ‘politics as charity’. That would have been a real contribution to the thread. As it is, it is more of a ‘complete the pattern, then rant’ response that does not indicate that Eliezer even considered what TDT would mean in this context.
(It would change the example in the introduction and change the framing in some of the remainder. Politics would still be charity, even though what that means is changed somewhat.)
I downvoted because in any piece of public writing “I am right” should never be offered as a postulate, and my opinion of those who do so is best left unsaid.
The big problem is that many ordinary people already outperform the standard wrong classical causal decision theory. They get the vote out, they get their man into power, they get the preferences, restraints, and rents they are seeking and they laugh all the way to the bank.
There is a saying that a paradox is to a logician as the smell of burning insulation is to the electrical engineer. Many paradoxes are not really that serious, but the voting paradox strikes me as analogous to seeing flames.
The big problem is that many ordinary people already outperform the standard wrong classical causal decision theory. They get the vote out, they get their man into power, they get the preferences, restraints, and rents they are seeking and they laugh all the way to the bank.
Elections have losers as well as winners. Do you think people who vote for losers have a different decision theory to the people who vote for winners?
I remember a scene from a novel by Gene Wolfe in which a bunch of tribesmen find themselves on a battlefield against a foe armed with energy weapons. The tribesmen all engage in superstitious ritual meant to provide personal protection. Some of them get blown to bits and some don’t. The ones who survive are going to end up thinking that their ritual works.
By focusing only on the victors in the ritual of democracy, when you judge the rationality of the slackers who don’t vote at all, you are creating a similar illusion. Supporters of the loser do everything that supporters of the winner do. They go house to house, they hold rallies, they donate money, they send letters to the editor. They make that big investment of time, hope, and energy, because they believe in democracy, and they still don’t get any of what they want.
Your rhetorical question contains a noun phrase, “people who vote for losers”. This seems to refer to the faction that misses out on the spoils of the electoral system because too many members of the faction subscribe to the theory that voting doesn’t matter, resulting in the faction losing because they couldn’t get their vote out. So the words “people who vote” are being used to refer to people who don’t vote.
This reminds me that I ought to stop reading LessWrong and get back to work on OuterCircle.