Open Thread, January 15-31, 2012
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
(I plan to make these threads from now on. Downvote if you disapprove. If I miss one, feel free to do it yourself.)
An outside view of LessWrong:
As far as I can tell, the reason for this is that in practical matters, “politics is the mind killer” is the mind killer.
Is there an argument behind “quotidian” besides “I have a short mental time horizon and don’t like to think weird thoughts”?
Why would LessWrong be able to come to a consensus on political subjects? Who would care about such a consensus if it came about?
There’s already enough of a geek-libertarian atmosphere that those of us who aren’t really do notice it. But yeah—as I said, I’m not actually sure it would be a good idea. But the shying away from practical application to that particular part of things people are actually interested in fixing in their daily lives is a noteworthy absence.
Your implied claim that quotidian thoughts are unworthy of attention is … look, if you want to convince people all of this is actually a good idea, then when someone asks “so, OK. What are the practical applications of reading a million words of philosophy and learning probability maths?”, answering “How dare you be so short-termist” strikes me as unlikely to work. I mean, I could be wrong …
If it is not too much trouble, could you explain further what you mean by that?
It seems to be treated as a thought stopper. “Do not go beyond this point.” There are good reasons for it, but the behaviour looks just like shying away from a bad thought.
The thoughts are there, they’re just not expressed on this particular site.
I always assumed it was more a discussion-stopper, meant to keep people polite and quiet. However, your interpretation is probably better.
I assume that was the intention. I’m not actually convinced that it would improve the site for us to dive headfirst into politics … but it’s odd for the stuff discussed here not to be applied even somewhere else, or even in the discussion section, without a flurry of downvotes. There’s a strong social norm that even the slightest hint of political discussion is inherently bad and must be avoided.
It should be noted that RationalWiki is not a website known to be, let us say, lacking in killed minds.
It is a very silly place.
I agree
I sometimes run into a situation where I see a comment I’m ambivalent about, that I would normally not vote on. However, this comment also has an extreme vote total, either very high or very low. I would prefer this comment’s total to be closer to 0, but I’m not sure it’s acceptable to vote according to what I want the total to be, as opposed to what I think about the post, because it gives me more voting power than I would otherwise have. What do you do in this situation?
You get to modify the karma rating by one in either direction. Do so in whatever manner seems most desirable to you.
You have too much voting power if you create a sock puppet and vote twice.
This is my attempt to figure out what is most desirable to me. At the moment, I want to do whatever would be the best overall policy if everyone followed it, with “best” here being defined as “resulting in the best lesswrong possible” (with a very complicated definition of best that I don’t think I can specify well).
Given that that’s what I want, how best to achieve it? The karma system is valuable because it makes highly upvoted posts more visible, so it’s valuable to the extent that the most upvoted comments are the best.
It should be noted that only relative karma matters (for sorting within an article), and the karma of other posts will tend to be rising (most posts wind up with positive karma). There is some number between 0 and 1 (call it x) that represents the expected vote of someone who votes.
Because karma is relative, if you’ve decided you care enough to vote, you should subtract x from your vote to determine whether it counts as evidence that the post is good or bad. Do you want to vote 1-x, -x, or -1-x? Note that 1-x > 0, while the other two (not voting and downvoting) are less than 0, downvoting by quite a bit. Which of these best corresponds to the sentiment “I liked this but think it’s overrated”?
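A minimal numerical sketch of that argument; x = 0.8 is a purely made-up illustrative figure, not an actual LessWrong statistic:

```python
# Illustration of the "subtract x" argument above. x = 0.8 is an assumed,
# made-up value for the expected vote of a random voter, not real site data.
x = 0.8

# Effect of each choice, measured relative to the average vote a comment receives:
effect_of_upvote = 1 - x     # +0.2: weak evidence the comment is above average
effect_of_abstain = 0 - x    # -0.8: abstaining implicitly rates it below average
effect_of_downvote = -1 - x  # -1.8: strong evidence it is below average

print(effect_of_upvote, effect_of_abstain, effect_of_downvote)
```

On this view, even abstaining counts against a comment relative to the rising average, which is the sense in which “I liked this but think it’s overrated” has no neutral option.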
I roughly follow the following (prioritized) rules:
Up-vote if I want to see more posts like this/down-vote if I don’t want to see more posts like this, regardless of the current total.
A comment that I do not feel very strongly about, I may up- or down-vote based on what total karma I think a comment of this kind deserves.
Very occasionally, I might like or dislike the author for unrelated reasons, and decide to up-/down-vote based on that.
You should vote without knowledge of total karma, otherwise it biases comments’ karma scores towards 0 (except at extremes, where it creates bandwagon effects). Power doesn’t enter into it, though.
You’re assuming that biasing karma scores towards zero (relative to what they would be before) is bad. Sure, it could be, but I don’t see any particular reason why.
[citation needed]
I have previously thought that maybe karma should be hidden until after you vote.
But then there’s the problem where part of the point of karma is to tell you whether something is worth reading. If karma was hidden until after voting, users would still have their total karma to motivate them, and we could still hide sufficiently negative comments.
Maybe we should hide comment karma before voting, but not article karma?
Does your preference mean that you honestly think the intrinsic value of the comment does not justify its vote count, or that you just generally prefer moderation and extremes irritate you?
In the former case, I would definitely vote toward what I thought would be a more justified vote count. Though in the latter case, I would probably be completely blind to my bias.
I meant that the intrinsic value of the comment does not justify its vote count.
Some thinking is easier in privacy.
These are interesting results, but the research was from 1985--”Programmer Performance and the Effects of the Workplace,” in Proceedings of the 8th International Conference on Software Engineering, August 1985. It seems unlikely that things have changed, but I don’t know whether the results have been replicated.
I don’t know of any studies, but there are many anecdotal reports about this.
Worth noting: this is correlational, not causal.
Straw fascist … has a point?
Yes he does, and it’s a Superhappy kind of point… if all the words in this video are taken at face value, “you’ll never have to think again” near the end spells “wireheading”.
It all comes down to the grand debate between inconvenient, uncertain “freedom” and better-founded, more stable “happiness”; during our recent conversations, I’ve been leaning towards the former in some things and you’ve been cautioning people about how they might prefer to trade that for the latter—but in the end it’s all just skirting our terminal values, so there’s certainly no “correct” or “incorrect” conclusion to arrive at.
The biggest risk of “existential risk mitigation” is that it will be used by the “precautionary principle” zealots to shut down scientific research. There is some evidence that it has been attempted already, see the fear-mongering associated with the startup of the new collider at CERN.
A slowdown, much less an actual halt, in new science is the one thing I am certain will increase future risks, since it will undercut our ability to deal with any disasters that actually do occur.
Was there really deceptive fear-mongering? That’s news to me. Fear was overblown, but I don’t think anyone was using it for anything other than what they thought was safety.
I highly doubt this. All plausible major x-risks appear to be man-made. Slowing down would give us more time to see them coming. Why would it undercut our ability to deal with a disaster?
I’m not highly read on the criticisms, but it wouldn’t surprise me if someone vaguely influential invoked the CERN hysteria to argue for reducing the funding of basic research. But I don’t have a cite for you.
It’s not clear to me that asteroid impacts, major plagues, or becoming caught in a Malthusian trap are not x-risks on the same order of magnitude as man-made x-risks. (Yes, a Malthusian trap is man-made, but it can’t necessarily be prevented by stopping scientific research). And for man-made x-risks, what is the mechanism for “seeing the disaster coming” that isn’t essentially doing more research?
A major plague is not, strictly speaking, an existential risk, although it would cause a lot of suffering. It would delay a Malthusian trap, though...
Making science slow down means making the best and brightest not do their best in research. So this drives them to optimizing algorithmic trading instead.
Also, you would want to slow down the research of new things and increase the research of implications; but how do you draw a line? Is the fact that a nuclear reactor can go critical and level a nearby city useful cautionary knowledge about building a power plant, or a “stop giving them ideas” thing?
ETA: I do not mean that any of the currently running reactors is that bad — I mean: how do you research nuclear fission in the years 1900–1925 so as to have a safe nuclear power plant before a nuclear bomb?
If you claim that a modern nuclear reactor can level a nearby city, you are telling a falsehood.
I was slightly unclear. Your statement is true.
I do not say that a modern nuclear reactor can level a city. I don’t even claim or disclaim that the worst currently running nuclear reactor could level a city under reasonably imaginable conditions (I tend to agree that the fallout would be a problem, and that a full-scale nuclear explosion is very unlikely, but I have not enough evidence and knowledge to be sure either way).
I am describing a situation in the research of nuclear fission. Imagine that someone knows that a bigger pile of uranium emits more radiation and wants to build a power plant based on this in 10–20 years. Some research is done to be able to predict the behaviour of such a system — of course, there are no power plant designs from Earth-2010-our-timeline.
How should one do the research to prevent Chernobyl-type disasters, minimize the risk of Fukushima-type disasters, and not find something that makes the military build a nuclear bomb before the first nuclear power plant is built?
Note that one needs to do enrichment both for a power plant and for a bomb.
It is true that simply piling even warhead-grade enriched uranium will not lead to a weapon-scale explosion, but the results of building a reactor without careful research into implications are not likely to be good.
Will a halt in new science undercut our ability to deal with those disasters to a greater extent than it makes those disasters more likely? What if the halt was only in certain domains, like genetic engineering of deadly viruses?
There’s no reason to believe that we’ve reached the optimum point for ending scientific research in any particular field. If we’d stopped medical research in 1900, the 1918 flu pandemic would have been worse. And basic research doesn’t have a label telling us how it’s going to be useful, yet the evidence is pretty strong that basic research is worth the money.
Regarding your specific example, isn’t it worth knowing that the mutations needed to make that virus (1) already exist in nature, and (2) aren’t really that far from being naturally incorporated into a single virus? If it took 500 passes instead of 10, we’d be relieved to learn that, right? In short, it seems like this kind of research is likely to be of practical use in treating serious flu viruses in the relatively near future.
The question is not “Is it useful?” but “Is it useful enough to justify the risk?” In that case, the answer might well be yes, but there will probably be cases in the future where the knowledge is not worth the risk.
I agree that you have identified the right question. I disagree with you on when the balance shifts. In particular, I think you’ve picked a bad example of “dangerous” research, because I don’t think the virus research you identified is a close question.
(That said, not my downvotes)
Upon further research, you’re right. The research appears not to be as dangerous as it seemed at first glance.
As part of my work for Luke, I looked into price projections for whole genome sequencing, as in not SNP genotyping, which I expect to pass the $100 mark by 2014. The summary is that I am confident whole-genome sequencing will be <$1000 by 2020, and slightly skeptical <$100 by 2020.
Starting point: $4k in bulk right now, from Illumina http://investor.illumina.com/phoenix.zhtml?c=121127&p=irol-newsArticle_print&ID=1561106 (I ran into a ref saying knomeBASE did <$5k sequencing—http://hmg.oxfordjournals.org/content/20/R2/R132.full#xref-ref-106-1 - but after thoroughly looking through their site, I’m fairly sure what they are actually offering is interpretation of a sequence, possibly done by Illumina.)
Projections: “The advent of personal genome sequencing” Drmanac http://wch.org.au/emplibrary/ccch/CPH_D5_L4_Genome_Sequencing.pdf Genetics in Medicine (http://journals.lww.com/geneticsinmedicine/Abstract/2011/03000/The_advent_of_personal_genome_sequencing.4.aspx)
cite 9 = Metzker ML. Sequencing technologies—the next generation. Nature Rev. Genet. 2010;11:31–46 http://eebweb.arizona.edu/nachman/Further%20Interest/Metzker_2009.pdf Confusingly, on pg44:
Where does 2014 come from? I suggest attributing it to Drmanac and not Metzker. (I’ve emailed him to ask where his 2014 came from.) Drmanac is commercially involved and seems very optimistic; compare his answers in http://www.clinchem.org/content/55/12/2088.full to the other experts. But there is general agreement it is possible (see also paragraph 3 in https://www.sciencemag.org/content/311/5767/1544.full ).
Here’s a citation for 2013: http://content.usatoday.com/communities/sciencefair/post/2011/07/race-to-1000-human-genome-machine-intensifies/1 discussing the new sequencing device in http://www.nature.com/nature/journal/v475/n7356/full/nature10242.html (more media coverage: http://www.nature.com/news/2011/110720/full/475278a.html )
A guy from GenomeQuest (http://www.crunchbase.com/company/genomequest) agrees with Rothberg, saying $100 (not $1000) will be hit within a decade, and $1000 by July 2013: http://blogs.discovermagazine.com/gnxp/2010/07/genomic-liftoff/#comment-27818
As well: Snyder M, Du J, Gerstein M. Personal genome sequencing: current approaches and challenges http://stanford.edu/class/gene210/files/readings/Snyder_GenesDev_2010.pdf—pg 3 has a nice graph of the super-exponential price decrease (left, blue) vs total number of sequenced genomes (right, red). Probably don’t need that though for a footnote.
A promising lead would be journalist Kevin Davies’s The $1,000 Genome: The Revolution in DNA Sequencing and the New Era of Personalized Medicine. I read a few reviews including one in Nature, but unfortunately no one specifically quotes a due date for price-points and the book is not on library.nu for me to search.
Hopefully that is enough for sequencing! Phew. (Something of an echo chamber.)
--”Secrets of my DNA”, Wired March 2011 (so 2014?)
Inside China’s Genome Factory, Technology Review
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3663089/
http://biomickwatson.wordpress.com/2013/05/15/a-pedantic-look-at-the-cost-of-sequencing/
http://biomickwatson.wordpress.com/2013/06/18/the-1000-myth/
http://biomickwatson.wordpress.com/2013/06/18/the-1000-myth/#comment-2031
http://www.utsandiego.com/news/2013/Jun/19/1000-genome-mirage/2/?#article-copy
Consumer WGSes hit ~$1000 with Veritas in 2016. In 2018, Dante Labs began offering WGS at ~$600, with a sale of $350. And we now have a rumor that Illumina will announce a $100 genome in a few months (presumably in early 2019): https://twitter.com/coregenomics/status/1058790189752049664
$100 might be a little questionable here (apparently Illumina has a history of making the most favorable possible assumptions about volume/amortization) but revisiting my original prediction from 7 years ago:
I was too pessimistic about SNP genotyping (it was actually more like $50 in 2014; I was completely unaware of UK Biobank at the time, or of its scale or savings), definitely right about ‘<$1000 by 2020’, and I think I will turn out to be somewhat wrong about WGS being <$100 by 2020: even if Illumina is fudging some numbers for early 2019 at $100, it’ll have almost a whole year to drop the cost a little more, and honestly, even if it’s actually $110, does it make a difference considering how many things you can use whole genomes for and general medical overhead? You can hardly get some prescription aspirin these days for $100...
Overall self-assessment: I was more right than I had any right to be in that set of predictions given I was using some simple extrapolating and adding some pessimism/mean-reversion. Not bad, past-self!
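(For readers curious what “simple extrapolating” looks like here, a minimal sketch under assumed numbers: the ~$4k/2012 starting point is from the comment above, but the 12-month halving time is my own illustrative guess, not the figure actually used for the original prediction, which also added pessimism/mean-reversion.)

```python
# Naive constant-halving-time extrapolation of whole-genome sequencing cost.
# Assumed inputs (illustrative only): ~$4,000 per genome in 2012 and a
# 12-month halving time.
start_year, start_cost = 2012, 4000.0
halving_time_years = 1.0

def projected_cost(year):
    """Projected cost in dollars, assuming the price halves every halving_time_years."""
    return start_cost * 0.5 ** ((year - start_year) / halving_time_years)

for year in (2014, 2016, 2020):
    print(year, round(projected_cost(year)))
# With these assumptions: 2014 -> 1000, 2016 -> 250, 2020 -> 16.
```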
Just got a Veritas-related email:
Even allowing for promotional discounts, I’m still impressed. EDIT: Dante Labs too!
Thanks! I’ve been conflicted about which SNP service to use, and now I don’t have to decide. :) Do you know if there are any potential downsides to consenting to let Veritas use the data for research? Would you tick that box?
Yes. In fact, I am already a PGP participant.
I am not sure you necessarily want to use Veritas/Dante Labs (Veritas might be sold out already, based on their Twitter), as WGS reports are usually pretty raw and you won’t get all of the interpretive services somebody like 23andMe would provide. I don’t believe 23andMe or the other major services let you just upload sequencing data either, only download. Offhand, I’m not sure how easy it would be to even use Promethease (not that Promethease is very worthwhile, as most of their report is candidate-gene junk). Personally, I am holding off on getting a WGS done. I don’t know what I would do with mine, and the price should keep getting lower.
Oh, I misunderstood the purpose of your comment and thought you were recommending people to take advantage of the sale. I knew it was going to sell out quickly so I made the order prior to posting my question. (I gave consent for research since it said that I could withdraw that consent at any time.)
It looks like Veritas offers VCF file download so it’s compatible with Promethease but the format it uses only gives 16,000 genotypes. Also apparently Veritas used to provide the full BAM raw data, but no longer does, which is disappointing, so I’ll probably cancel my order and take advantage of the Dante $199 sale instead which does offer BAM. Looks like sequencing.com lets you upload a BAM file and offers a bunch of apps to do different analyses on it.
No, I was mentioning the sales because they offer a measurement of what WGS costs end to end now—presumably Veritas/Dante or Nebula are offering at close to their marginal cost (as they aren’t big or wealthy enough to afford to give it away and WGSes aren’t exactly a repeat-customer business). As far as Dante goes, I have seen some complaints about very slow or inconsistent service; on IRC, one of us did a previous sale and their original spit didn’t work, so they sent him another tube and forgot the postage. Not sure if he’s gotten his WGS yet either.
I see. Given that I haven’t done a genotype yet, would you suggest that I go through with Dante anyway, or wait until the price comes down further? (Presumably it would definitely be worth doing at $100?)
Well, do you have anything in mind specifically to do with it? If you do, it may not be worthwhile to wait. But if you don’t have something which needs to be done with a WGS right now, you probably aren’t going to be struck with inspiration once you get your download either.
I’m reading Moldbug’s Patchwork and considering it as a replacement for Democracy. I expected it to be a dystopia, but it actually sounds like a neat place to live; it is, however, a scary Eutopia.
Patchwork 1: a positive vision
Patchwork 2: profit strategies for our new corporate overlords
Patchwork 3: what we have and what’s so bad about it
Patchwork 4: a reactionary theory of world peace
Has anyone else read this recently?
I’ve read through the pieces, and I’m struggling to come up with something to say that a reactionary absolutist like Moldbug would find interesting. For example, in the first piece linked, Moldbug says (Let’s ignore that the last sentence is questionable as a matter of historical fact):
I don’t disagree that it is a Schelling point. But is it stable? History strongly suggests that legitimacy is a real thing that is an important variable for predicting whether governments can stay in power and institutions can remain influential in a society. In other words, there’s a reason why mature absolute monarchies (like Louis XIV) invented “divine right of kings.” I assert that you can’t throw that away (as Moldbug does) and assume that nothing changes about the setup.
My next point would be that there is no reason to expect a government to make a profit. But Moldbug’s commitment to accepting the verdict of history means that he wouldn’t find this very persuasive. If one believes that might makes right, then government probably does need to make a profit. In other words, when you acquire power by winning, there’s every reason to expect that failing to continue winning will lead in short order to your replacement.
The idea is that it is possible to make the cake bigger by having efficient government. This is why he invokes Laffer curves as relevant concepts.
I find myself sympathetic to this. If you, say, give some amount of stock to foundations that provide free healthcare to those who can’t afford it, or that preserve natural habitat, etc., matching current GDP spending, but come up with a government that is more efficient at providing funds for all these endeavours, you get more spent in an absolute sense on healthcare or environmentalism than otherwise.
If you want to do efficient charity, you don’t work in a soup kitchen; you work hard where you have a comparative advantage to earn as much money as possible and then donate it to an efficient charity. Moldbug may not approve, but I actually think his design, with the right ownership structure and some properly designed foundations, might be a much better “goodness generating machine” than a democratic US or EU might ever be.
I also like the idea of being able to live in a society with laws that you can agree with, if you don’t like it you just leave and go somewhere where you do agree with them.
The profit motive is transparent, and it is easier to track than “doing good”, which, as the general goal of government, is far less transparent. As a shareholder or employee in a prosperous society you could easily start lobbying among other shareholders to spend their own money to set up new charity foundations or have existing ones re-evaluate their goals.
It also has the neat property of seemingly guaranteeing human survival in a Malthusian em future (check out Robin Hanson’s writing on this). As long as humans own stocks, it wouldn’t matter if they were made obsolete by technology; they could still collect a simply vast amount of rent, which would continue growing at a rapid rate for millennia or even millions of years. The real problem is how to ensure these humans don’t get hacked into being consumption machines by various transhuman service providers but instead optimize for Eudaimonia.
He says robot armies and cryptographically locked weaponry eliminate the need to care about what your population thinks. The technology simply wasn’t there in the time of Louis XIV. The governing structure has no need to mess with people’s minds in various ways to convince them it is a just system.
And the thing is, while such technology as ubiquitous surveillance or automated soldiers in the hands of government sounds scary, there seems to be no relevant reason at all to think other government types won’t have this technology anyway. Worse, the technology to modify your mind in various ways will also be rapidly available (as if current brainwashing and propaganda technology wasn’t scary enough).
In other words people living in such Patchwork instead of the futuristic US or the PRC would trade political freedoms for freedom of thought and association. The last two are not really guaranteed in any sense, but he gives several strong reasons why a sovereign corporation might have an interest in preserving them. Reasons that most other states as self-stabilizing systems don’t seem to have.
He basically says that whether we like it or not, might does make right. The USA defeated Nazi Germany not because it was nobler but because it was stronger. This is why Germany is a democracy today. The US defeated the Soviet Union not because it was nobler but because its economy could support more military spending and the Soviet Communist party couldn’t or wouldn’t use military means as efficiently as, say, the Chinese to stomp out dissenting citizens. This is why Russia is a democracy today. Democracies won because they were better at convincing people that they were legitimate, their economies were better, and as a result of these two they were better at waging war than other forms of government.
He also seems very confident that if his proposed form of government was enacted somewhere it would drastically out-compete all existing ones.
Many government programs provide services to people who can’t afford the value of the service provided. Police and public education provided to inner cities cannot be paid for from the wealth of the beneficiaries. Moldbug complains about the inefficiency of the post office, but that problem is entirely caused by non-efficiency-based commitments like delivering mail to middle-of-nowhere small towns. Without those constraints, USPS looks more like FedEx. That’s not a Moldbuggian insight—everyone who’s spent a reasonable amount of time thinking about the issue knows this trade-off.
And I simply don’t believe this is a likely outcome. There will be times when a realm does not want to use its full arsenal of unobtanium weapons (i.e. to deal with jaywalking and speeding). Anyway, isn’t it easier (and more efficient) to use social engineering to suppress populist sedition?
I mostly agree with your analysis, in that I think we’ve been lucky in some sense that the good guys won. But doesn’t Moldbug have some totally different explanation for the Cold War, involving infighting between the US State Dept. and the Pentagon?
I think it likely that any system of government backed by unobtanium weapons would defeat any existing government system. It’s not clear to me that a consent-of-the-governed system backed by the super weapons wouldn’t beat Moldbug’s absolutist system. And even if that isn’t true, why should we want a return to absolutism? It’s painfully obvious to me that my rejection of absolutism is the basis of most of my disagreement with Moldbug. I think government should provide “unprofitable” services, and he doesn’t.
The good guys did win, because I’m not a National Socialist or a Communist or a Muslim or a Roman. But I don’t think we were lucky. “The Gift We Give Tomorrow” should illustrate why I don’t think you can say we were “lucky”. By definition, anyone that won would have made sure we viewed them as the more or less good guys.
That wasn’t Moldbug’s argument about the USSR, it was mine :)
Yes, if I recall right his model goes something like this: The State Department wanted to make the Soviet Union its client much like, say, Britain or West Germany or Japan were; it viewed US society and Soviet society as on a converging path, with the Soviet Union’s ruling class having its heart in the right place but sometimes going too far. Something they could never do with any truly right-wing regime. This is why they often basically sabotaged the Pentagon’s efforts and attempts at client-making. The Cold War and the Third World in general would never have been as bloody if the State Department vs. Pentagon civil war by proxy hadn’t been going on.
Sure, but I don’t want to live in a society that takes this logic to its general conclusion. I want to be able to dislike the government I’m living under even if I can’t do anything about it. Many people might not either, and we may be willing to tolerate living in a different, less wealthy part of Patchland, or paying higher taxes, for it.
What is that? Can we unpack this concept?
I’m trying to figure out what you mean by this. Can’t we have a “Deliver mail to far-off corners” foundation and give it 0.5% of the stock of Neo-Washington Corp. when the thing takes off? Do you in principle object to government being for profit, or is it just that you think nonprofits funded by shares of the government, at equal GDP fractions as they have right now, couldn’t provide services of equal quality? What is the government’s mission then? Which unprofitable services should it provide? All possible ones? Those that have the most eloquent rent-seekers? Those that are “good”? Can you define the mission of government in words that are a bit more specific than universal benevolence? And if democratic government is so good at that, why don’t we have seed AI report to Congress for approval of each self-modification? Don’t worry, the AI also gets one vote.
So, Moldbug’s Cold War explanation is total nonsense? I think the Cold War follows after WWII even if the USA was ruled by King Truman I and the USSR was ruled by King Stalin I. More formally, I think political realism is the empirically best description of international relations.
Anyway, you asked about patches and realms, and I said that governments do the unprofitable. If it were profitable, government wouldn’t need to do it. Moldbug seems to say that we ought not to want government to do the unprofitable. That explains his move to a corporate form of government, but it doesn’t justify the abandonment of the role that every government in history has decided it wanted to do.
You completely missed my point. Who gets to decide what is unprofitable? Who decides which unprofitable things are worth doing? The set of all possible unprofitable activities is vastly larger than the set of profitable ones.
You do realize we were talking about the USSR just a few seconds ago, right? I guess Russia was a bad place to make cars, so the government had to step in and do that.
Communism (and socialism in general) have inefficient (i.e. not wealth-maximizing) preferences for wealth distribution. So no, it doesn’t surprise me that that massive government planning was required to try to implement the communist preference. If equal wealth distribution were wealth-maximizing, then the government wouldn’t have needed to intervene to make it happen.
This isn’t a groundbreaking point. It falls out straightforwardly from the economic definition of efficiency.
I repeat myself:
Unless you are arguing that Communist preferences for wealth redistribution, and the opportunity cost that entails, were automatically representative of those of “the Russian people” because, duh, they had the October revolution and a civil war in which the Communists won. In which case I will ask why they would not be in North Korea, and would also ask: if all regimes deciding things are representative of “the people”, why do we even need this democracy thing? Obviously Ancient Egyptian peasants wanted to be involved in the unprofitable business of building Pyramids for Pharaoh.
If we are not sure the ancient Egyptian monarchies captured people’s preferences for unprofitable activities that should be done according to the values of those indirectly funding them, if the same cannot be said of Rome, if the same cannot be said of Communism … why do you think it can be said of, say, the US government? Why do you think this is more efficient than having government be a money-making machine that gives its citizens free money because they own stock and lets them spend it on whatever charity (which also by definition does unprofitable things) or indulgence (which often is also unprofitable—whenever I stop to smell the flowers or go watch a movie I don’t do this to maximize my profit in currency, but to hopefully maximize my utility) they want? Or, if it interferes with the operation of the state, why not have the stockholders spend it in some other part of Patchland that specializes in being a great place to spend your money for good causes or fun?
And if you don’t think people’s preferences even matter when deciding what unprofitable stuff to spend resources on … well whose preferences should then?
I want unprofitable stuff that I like done too. Like helping people not having to die if they don’t want to. All else being equal I don’t however much care who does them. BTW I’m not too sure about Moldbug’s government type either, I wouldn’t volunteer to live there just based on his arguments, but I do think he does a good job of dealing with regular arguments in favour of democracy. I do think a city or patch of desert somewhere to test the form of government might be a good idea.
For Moldbug, the answer is . . . not you. Unless the CEO of the realm put your charity on the cleared list. But I suspect that most of the things I would want to do with my dividends would be prohibited as security risks. Political control without thought control has never happened, and I don’t think that super weapons could make it happen.
I’m interested in your answer.
That is a good argument. Overall I think Moldbug does a better job of giving decent explanatory power for the modern world than providing workable solutions (if there are any) for its ills. :)
Please elaborate on how, completely disregarding political realism in favor of an overarching conspiracy theory (as already mentioned above) and just ignoring the whole iceberg of neuroscience, evolutionary psychology, etc, one can arrive at a decent explanation for it all. “The leftist social sciences professor down the street is a witch, she did it” is not up to my standards of “decent”.
That is not Moldbug’s model. How much have you read?
He has, in my mind, decent models for many things, including the genesis of the leftwards social movement over the past few decades or centuries, the genesis of modern morality, US foreign policy, the sociological aspects of the development of ideology, etc.
I don’t think I’m that much of an outlier in my estimation here; I’ve heard many people I know from LessWrong express interest in his thought (for example gwern, or Vladimir_M). He even had a live recorded debate with Robin Hanson back in 2010 on Futarchy (though he lost; everyone loses debates to RH ;) ). Top posters like Yvain and Eliezer also seem to have read some of Moldbug, since they refer to his writing occasionally, etc. People sometimes agree and other times disagree with him, but I think they generally don’t view him as a “crank”.
I really don’t have the time right now to discuss all of this, but there are a few older discussions in the comment sections of various LessWrong articles (just search for “Moldbug” on the site) that may interest you if you’d like to learn more about his stuff and why people find it interesting.
My recent thread on one of his posts also had some discussion.
I have read all of that, at first glance expecting a fun and intriguing contrarian ride. It came across as considerably more insane (in the LW/OB sense) and less grounded in reality than the milder forms of ol’ good fascism to me.
I generally don’t see what’s so insane about WASP Blue State Protestantism being the sociological, philosophical and cultural predecessor of WASP Blue State progressivism.
Or say that modern ethics aren’t the product of pure reason and moral progress but a clear descendant of older Western morality.
Or that US foreign policy is often crazy and mixed up because the US isn’t a monolithic entity and that more specifically the interests of the State department and the Pentagon diverge.
Or that in a modern parliamentary democracy power is wielded by opinion makers (academia and journalists) who create the intellectual fashions that the rich and well-positioned subscribe to, and, with a twenty-or-so-year lag, the general population (who adopt them not just to copy the elites but because legislation and education are updated to push the new beliefs on them), which then votes for representatives who are supposed to keep the unelected elites in check and working for their interests. Culturally, any ethical ideas or value sets adopted by elite academia are assured long-term victory.
I think that covers my examples.
Meh, fascists are often too mystical for my tastes (try reading Julius Evola). Religious paleocons are a bit better, but their axioms are all messed up, believing in God and all that. The few irreligious ones are often lots of fun.
source
This is why choosing the state as the actor that must bear unprofitable activities, regardless of on whose behalf, seems to my sentiments less an aesthetic choice, or one that should be based on historic preference, than an economic question that deserves some investigation. The losses of utility over such a trivial preference seem potentially large.
I suppose it depends on what you see as “charity”. For example, free childhood vaccinations can be seen as charity—after all, why shouldn’t people just buy their own vaccines on the free market ? -- but having a vaccinated population with herd immunity is, nonetheless, a massive public good. The same can be said of public education, or, yes, canes for blind people.
Let’s do some [Edit: more abstract] analysis for a moment. [Edit: I suggest that] government is the entity that has been allocated the exclusive right to legitimate violence. And the biggest use of this threat of violence is compulsory taxation. Why do people put up with this threat of violence? As Thomas Hobbes says, to get out of the state of nature and into civil society. (As Moldbug says, land governed by the rule of law is more valuable than ungoverned land).
What does the government do with the money it receives? At core, it provides services to people who don’t want them. The quote mentioned letting prisoners choose their jailors. It probably would increase prisoner utility to offer the choice. It might even save money (for example, some prison systems mandate completing a GED if the prisoner lacks a high school degree). But that’s not what society wants to do to criminals. If the government uses compulsory power to fund prisons, I assert a requirement that the spending vaguely correspond to taxpayer desires for the use of the funds. (Moldbug seems to disagree).
Consider another example, the DMV. At root, the government threatens violence if you drive on the road without the required government license, on the belief that the quality of driving improves when skill requirements are imposed and the requirements will not (or cannot) be imposed without the threat of violence. It is common knowledge that going to the DMV to get the license is a miserable experience because the lines are long and the workers are not responsive to customer concerns. By contrast, the McDonald’s next door is filled with helpful people who quickly provide you with the service desired as efficiently as possible. Why the difference? In part, it is the compulsory nature of the license, and in part, it is that the benefits of improved service at the DMV do not accrue to anyone working for or supervising the DMV. See James Wilson’s insightful discussion (pages 113-115 & 134-136) (There’s also an interesting discussion of the post office on pp. 122-25). I assert that much “inefficiency” in government is simply the deadweight loss inherent in compulsory taxation, which is one part of government Moldbug doesn’t want to abolish.
And there’s less justification for calling an entity with compulsory tax powers a profit making entity. In what way has Moldbug’s Calgood acted in a competitive marketplace? Voting with your feet is just as possible in the United States or Western Europe today as it would be in the patch & realm system.
Max Weber was a libertarian?
Hmm. It’s embarrassing to admit I’m not as well read as I’d like. I’d only ever heard the concept in libertarian discussions. Thanks.
Every time I read Moldbug’s stuff I am startled by the extent to which he tries to give an economic analysis and solution to a political problem.
The reason we have government isn’t that we sat down once upon a time in the state of nature to design a political system. We have government because we live in a world where violence is a potentially effective tactic for achieving goals. Government exists to curb and control this tendency, to govern it.
Uncontrolled violence turns out to be destructive to both the subject of the violence and the wielder—it turns out that it’s potentially more fun to be a citizen-soldier in a democracy than a menial soldier in a tyranny, or a member of a warlord’s entourage.
Politically, we don’t do welfare spending and criminal justice purely for the fuzzies, or solely because they’re ends in themselves. Every so often, we have organized and vigorous protests against the status quo. When this happens, those in power can either appease the protesters, use force to crush the protesters, or try to make them go away quietly without violence. If the protesters are determined enough, this last approach doesn’t work. And the government can either use clubs, or buy off the protesters.
It turns out that power structures that become habitually brutal don’t do too well. People who get in the habit of using force aren’t good neighbors, aren’t good police, and aren’t trusty subordinates. Bystanders don’t want to live in a society that uses tanks and poison gas on retired veterans or that kills protesting students; leaders who try to use those tactics tend to get voted out of power—or else overthrown.
Moldbug talking about cryptographically controlled weapons is missing the point: we don’t want to live in a society that uses too much overt violence on its members. And we tolerate a lot of inefficiencies to avoid this need.
I believe the main thrust of Moldbug’s writings is that we should be (but aren’t) solving an engineering problem rather than moralizing when we engage in politics (although, he seems to fall into this trap himself what with all his blaming of “leftists” for everything under the sun).
So much of Moldbug’s belief system, and even his constructed identity as an “enlightened reactionary”, rides on his complete rejection of whiggish historical narratives; however, he takes this to such an extent that he ends up falling into the very trap that the Whig Interpretation’s original critic, Herbert Butterfield, warned of in his seminal work on the subject:
Except, none of his prescriptions are sensible engineering. Crypto-controlled weapons as a foundation for social order are more science fiction than a sensible design for controlling violence in society. It’s much too easy for people to build or buy weapons, or else circumvent the protections. Pinning your whole society on perfect security seems pretty crazy from a design point of view.
Right, I don’t think he succeeds either. I was merely trying to summarize his project as I think he sees it.
Just because governments often employ violence just before they lose power does not mean that employing violence was the cause of their downfall. Many sick people take medication just before they die. Sure, violence may do them no good, like an aspirin does no good for a brain tumour, but it is hard to therefore argue that aspirin is the cause of death. The assertion is particularly dubious since, historically speaking, governments have used a whole lot of violence and this actually seems to have often saved them. Even in modern times we have plenty of examples of this.
This Robin Hanson post seems somewhat relevant:
The state can be thought of as a sedentary bandit who, instead of pillaging and burning a village of farmers, extorted them and eventually started making sure no one else pillages or burns them, since that interferes with the farmers paying him. The roving bandit has no incentive to assure the sustainability of a particular farming settlement he parasitizes. A stationary bandit, in a sense, farms the settlement.
Government can expediently be defined, ultimately, beneath all the fluff, as a territorial monopolist of violence. There is a trade-off between government violence used to prevent anyone else from exercising violence, and violence by other organized groups. How do we know we are at the optimal balance in a utilitarian sense?
Also, Moldbug doesn’t want to do away with government; he wants to propose a different kind of government. And we have in the past had systems of government that were the result of people sitting down and trying to design a political system. To take modern examples of this (though I could easily pull out several Greek city-states): perhaps the Soviet Union was a bad design, but the United States of America literally took over the world. In any case this demonstrates that new forms of government (not necessarily very good government) can be designed and implemented.
Government violence is ideally more predictable than the violence it prevents (that’s the whole reason we in the West think the rule of law is a good idea). Sure, the government has other tools to prevent violence than just violence of its own, but ultimately all law is violence. In the sense of the WHO definition:
You can easily make the violence painless by, say, sedating a would-be rapist with the stun setting on your laser gun, and you can easily also eliminate the suffering of imprisoning him by modifying his brain with advanced tools. But a person whose mind is changed without their consent, or who is given a choice between 6 years’ imprisonment and modifying their brain, has surely just experienced violence according to the above definition.
The point of the cryptographically controlled weapons is that you only need a very small group of people to make the system work, people who think being a citizen-soldier is less fun than being paid handsomely by Blackwater.
Abba Lerner, “The Economics and Politics of Consumer Sovereignty” (1972):
In raw utility, the inefficiencies we tolerate to pay for this could easily be diverted to stop much more death and suffering elsewhere. Perhaps we are simply suffering from scope insensitivity, our minds wired for small tribes where the leader being violent towards a person means the leader being violent to a non-trivial fraction of the population.
Also, are you really that sure that people wouldn’t want to live in a Neocameralist system? When you say efficiency, I don’t think you realize how emotionally appealing clean streets, good schools, low corruption and perfect safety from violent crime or theft are. What would be the price of real estate there? It is not a coincidence that he gives Singapore as an example, a society that uses more violence against its citizens than most Western democracies.
Further more consider this:
That sounds pretty draconian. But we also know Singapore is a pretty efficiently run government by most metrics. Is Singapore an unpleasant place to live? If so, why do so many people want to live there? If you answer economic opportunities or standard of living or job opportunities, well, then maybe Moldbug does have a point in his very economic approach to it.
I had assumed we were talking about government for [biased, irrational] humans, not for perfect utilitarians or some other mythical animal. I was saying that routine application of too much violence will upset humans, not that it should upset them.
I’m sure many people would live quite happily in Singapore. Clearly, it works for the Singaporeans. But I don’t think that model can be replicated elsewhere automatically, nor do I think Moldbug has a completely clear notion of why it works.
Moldbug talks about splitting up the revenue generation (taxation) from the social-welfare spending. This seems like a recipe for absentee-landlord government. And historically that has worked terribly. The government of Singapore does have to live there, and that’s a powerful restraint or feedback mechanism.
In the US (and I believe the rest of the world), the population would like to pay lower taxes, and pointing to the social welfare benefits is the thing that convinces them to pay and tolerate higher rates. I think once the separation between spending and taxation becomes too diffuse, you’ll get tax revolts. Remember, we are designing a government for humans here—short-sighted, biased, irrational, and greedy. So the benefits of unpleasant things have to be made as obvious as possible.
I’m open to being corrected on this, since I don’t have a good source for Singaporean immigration statistics, but my prior is that people who choose to live in Singapore are coming there from other places that are much more corrupt while also still being rather draconian (China, Malaysia). I’m pretty sure well-educated Westerners could get a well-paying job in Singapore, and the reason few move there is not, in fact, about economics.
I’ll be blunt. He’s signaling self-righteous revulsion. It doesn’t ever pattern match to anything nice. Case closed.
(Not for the idea of libertarian absolutism, of course—just for this Moldbug fellow himself! It’s just that I don’t want to tear through the buzzing of some conceited guy to get to the truth of the matter; there’s always an alternative available in the market of ideas.)
Please indicate which particular part of the above you find objectionable or not up to LW standards. Do you have any evidence that a paranoid attitude like his was ever not indicative of a crank?
I didn’t down vote you, but I generally disagree with your pattern match. Moldbug signals self-righteous revulsion ironically since he is mirroring the self-righteous revulsion of the SWPLs to anything that challenges their ideological outlook.
Moldbug has been praised by many people I know from LessWrong as having a good style of writing. For example here:
Yes apparently some like it as much as they like Eliezer’s writing.
Maybe that’s what got you down-voted? He’s a bit too verbose for my taste, but as he himself has said several times, inhumanly long posts are one of the ways he keeps out the wrong kind of crowd, the kind attracted to a far-right view for all the wrong reasons.
And why doesn’t he hire a few moderators from among his allies, then open the doors to the right type of crowd—that is, someone like the SIAI staff—and let them see the light? He is, after all, saying that it’d be nice for his goals to hit more people where it counts with some quality propaganda. I bet that he’s simply addicted to his ultra-contrarian throne beyond all reason.
He’s just a random guy writing a blog. He can’t even post that often since he’s raising a baby girl. How many such people choose minions to moderate their blogs?
Why would he want the SIAI type around? The average SIAI employee might spend some time thinking about his arguments, but I suspect they have differing values.
I actually agree.
Want to get mind-killed HARD? I mean, frothing at the monitor hard? Here’s his opinion on the Norway massacre.
Yeah, yeah, yeah, I know, if you could ask Hitler and a random leftist from academia in the 1930s whether 1) the sky is blue and 2) Stalin is an evil butcher, they’d perform equally well on 1) and you’d have to support Hitler over the leftist on 2), but still. This is not “deliberate hardcore contrarianism”. This is not even trolling—go ask /b/, they’re the internationally acknowledged authority. Moldbug is spitting on the graves of innocent teenagers slaughtered by an evident psychotic.
He says that a certain number of lives is worth a regime change. This is not a controversial position in the real world. No one in polite society would say, for example, that regime change in Nazi Germany in 1945 was not worth the lives of 40 or 4,000 or 40,000 young men or women. He also calls the man a butcher, and is clearly implying that change cannot be achieved by terrorism, nor even by conventional military action.
Considering the discussions that are calmly had on LessWrong (infanticide, for starters), I can produce a dozen such “incriminating” out-of-context quotes. Come to think of it, you can get quotes of me saying that infanticide is not as bad a crime as killing an adult, that incest among consenting adults should be legal, etc. Will you use that when itching to win an argument with me?
Also Robin Hanson has said that 9/11 isn’t a big deal. Aha that’s something I can use! How dare he spit on the graves of … oh wait no actually that would be an inappropriate thing for me to say.
Even on a pure level of instrumental rationality, calmly pointing out the above quote, linking to it, and asking whether, say, I agree or not would make him look much worse while being equally effective at any point you are trying to make. Also, I would respond by stating the truth (as I shall now): that I haven’t read it so far and that I haven’t yet done the calculations about how many lives changing Norway’s government to Marxist-Leninism/Anarcho-Syndicalism/Futarchy/Fascism would be worth, but that I suspect, considering there are few better alternatives if any, and that Norway has such a small population, that it wouldn’t amount to more than ~5. Unless there is a really awesome form of government I don’t know about yet.
I get it, you really, really hate Moldbug, but I’m basically currently of the opinion that the only one getting mindkilled here is you. Why couldn’t you discuss the points and maintain the usual level of discourse? I fail to see how this was the right course of action, unless you are playing up being mindkilled in order to shut down a certain topic for good.
Now I’m really curious what calculation would lead you to that number.
I know very little of the political situation in Norway, but it is plausible that a better supreme court or batch of ministers could do a better job to the point of saving more lives than would be lost by such action (one has to factor in not just their lives but the cost of the greater security measures such acts would produce if they were detected).
Yay I made the CIA watch list!
Oh I see, I thought you had some sort of general formula for weighing (naive) consequentialism and deontology. If I had known there were going to be creepy specifics, I wouldn’t have asked.
I assumed 5 or 6 seemed a creepily specific number and thus that you wanted specifics. Note that, as I said, this was pure uninformed speculation. Norway is a particularly well-governed country, and overall, specific people tend to matter much less in well-governed Western states than, say, in Saddam Hussein’s Iraq. However, even a marginally better job at governing or crafting laws averts a lot of dust specks.
I usually apply pure consequentialism, measuring happy healthy years of life, as a first estimate. When actually contemplating real actions to support, I go by virtue ethics, since some costs are hard to capture in utilitarian thinking. Following virtue ethics, I wouldn’t ever support an attempt to assassinate someone unless they were directly responsible for a massive amount of death, as in signing death warrants or conducting killings or hiring thugs. Perhaps controversially, I wouldn’t count torturing people as an OK reason to assassinate, since that seems really hard to establish because of propaganda, misinformation, a more convoluted paper trail, etc.
I didn’t strictly speaking make that comment to persuade anyone, I’m just making small talk. Who’d take some guy’s esoteric politics as a matter of actual life and death when we can have an AI?
You are WRONG! I sneer at him a bit, as many tend to when they find that people much smarter than themselves are kind of fucked in the head. I don’t consider him a force for evil. I just clarify that in some cases I value politeness more than I do free speech if, and only if, said free speech is in a natural, all too human language. Nowhere have I suggested that weighing evil acts is inappropriate—say, when you clearly see how to apply math—otherwise I would’ve promptly freaked out back at the dust specks question.
I believe that this is exactly what he's admitting to himself—Norway is a kickass place to live in!—and that although he might be serious about "Eurocommunism" as a potential engine of collapse, he's not serious serious. In other words, his prime motivation in formulating that sentence is a good opportunity for what he sees as trolling.
Sorry? The discussion over infanticide we were holding a couple of weeks ago was—maybe—not perfectly relaxed, but neither side found the other’s statements of position to be unacceptably crude, like flinging excrement in public. I do find exactly that about Moldbug’s statement.
And you are trolling… why? Oh sorry, quoting a troll trolling on another forum/blog isn’t actually trolling.
Right.
No thank you. LessWrong is a no froth zone. Please take it elsewhere.
Nope, sorry. I have seen that the Less Wrong community lacks neither the experience nor the intellectual courage to deal with highly provocative words head on—sometimes keeping its cool and sometimes not, but hardly resorting to a policy of selective blindness.
I’m not saying that Moldbug is a terrorist/would-be Hitler/whatever, or that people aren’t allowed to like him on rational grounds, or anything of the sort. I am merely refusing to quickly avert my eyes from a thoroughly appalling detail.
Even if you feel that judging the quality of someone’s thought requires reporting any particularly offensive statements in their other writings, you could at least do so without encouraging mindkill and frothing. Arguably we shouldn’t be discussing Moldbug in the first place.
I more or less agree (for a different reason, as you can see); however, I'm against pretty much any restrictions on LW topics provided even a modicum of intelligence is shown.
So if I’m an opponent (and yes, don’t waste time pointing out my self-identification as such and pre-written bottom line), I don’t like staying silent and I don’t think that “mind-kill” is at all a net negative to LW discussion… heck, I’m gonna have oh-so-irresponsible fun.
In the current context I find this statement disingenuous. I'll be perfectly honest: I think you are acting in bad faith.
Yes yes we all know Blue tribe clearly acts as an apologist for greater atrocities than Green.
[insert emotional screed against Green here]
At LW, religion is often used as a textbook example of irrationality. To some extent, this is correct. Belief in the untestable supernatural is a textbook example of belief in belief and privileging the hypothesis.
However, religion is not only about belief in the supernatural. A mainstream church that survives centuries must have a lot of instrumental rationality. It must provide solutions for everyday life. There are centuries of knowledge accumulated in these solutions. Mixed with a lot of irrationality, sure. Many religious people were pretty smart—Reverend Thomas Bayes, for example, right? Also, in my life I know religious people whose rationality is very high above average.
I am afraid that because of the halo effect we may miss a great source of rationality here. For example, I am pretty sure that there are many successful anti-akrasia tactics written by religious authors. Another example: the list of capital sins, if you replace the religious terminology with something more lesswrongian, is simply a list of mental biases. (Pride = refusing to use an outside view. Gluttony = using a scarcity mindset in an abundance environment.) So I guess we could sometimes reuse the wheel instead of reinventing it.
Have you seen this sequence? It reveals how the LDS church gets things done: By providing a real community for its members, and making them feel like they belong by giving them responsibilities. I’m sure an aspiring-rationalist version of that would be even better.
This is the super-secret rationality technique of churches. It’s the reason religious people are happier than nonreligious people in the US. It’s the domain where religious people are correct when they say that nonreligious people are missing out on something good. Now we just have to implement it. It’s not something that we can do individually.
I agree that religious organizations have developed many effective techniques for getting certain kinds of things done, and I endorse adopting those techniques where they achieve goals I endorse.
I’m not sure I agree that this isn’t already happening, though.
Can you provide some examples of such techniques that aren’t also in use outside of the religious organizations that developed them?
Incidentally, the word “rationality” seems to contribute nothing to this topic beyond in-group signalling effects.
This isn’t obviously true. Once a belief system is established it is easily continued via indoctrination, especially when the indoctrination includes the idea that indoctrinating others is a Good thing.
This TED talk is relevant: http://blog.ted.com/2012/01/17/atheism-2-0-alain-de-botton-on-ted-com/
Acedia, an overview of Catholic (and other, if I remember correctly) writing about sloth, plus a personal memoir. As I recall, quite an interesting book, but not personally useful—and this is backed up by the top three Amazon reviews.
The fact that such a seriously researched book doesn’t turn up much that’s easily useful (a more careful or motivated reader might have found something) suggests that there may not be much practical advice in the tradition.
This is reminding me of Theodore Sturgeon’s complaint that Christianity told people to be more loving, but didn’t say anything about how. (From memory, I don’t have a cite.)
When it comes to accepting evolution, gut feelings trump fact
Seems to be classic System 1 vs. System 2. Also religion’s small impact didn’t surprise me.
A current thought experiment I’m pondering:
Scientists discover evidence that a popularly discriminated-against group really does have all the claimed negative traits. The evidence is so convincing that everyone who hears it instantly agrees this is the case.
If you want to picture a group, I suggest the discovery that Less Wrong readers are evil megalomaniacs who want to turn you into paperclips.
How, if at all, does this affect your ideas of equality? Is it now okay to discriminate against them? Treat them differently legally? Not invite them to dinner?
I’ve heard Peter Singer says useful and interesting things about this, but it hasn’t yet reached the top of my bookqueue.
I’m puzzled that you describe this as a hypothetical.
For example, the culture I live in is pretty confident that five-year-olds are so much less capable than adults of acting in their own best interests that the expected value to the five-year-olds of having their adult guardians make important decisions on their behalf (and impose those decisions against their will) is extremely positive.
Consequently we are willing to justify subjecting five-year-olds to profound inequalities.
This affects my ideas of equality quite a bit, and always has. It is indeed OK to discriminate “against” them, and to treat them differently legally, and to not invite them to dinner, and always has been.
We are actually as a society ok with discriminating against the vast majority of possible social groups. If this was not the case life as we know it would simply become impossible because we would have to treat everyone equally. That would be a completely crazy civilization to live in. Especially if it considered the personal to be political.
You couldn't like Alice because she is smart, since that would be cognitivist. You couldn't hang out with Alice because she has a positive outlook on life, because that would discriminate against the mentally ill (those who are currently experiencing depression, for starters). You couldn't invite Alice out for lunch because you think she's cute, because that would be lookist. Etc., etc.
Without the ability to discriminate, without a bad conscience, between the people who have traits we find desirable or useful and those who don't, most people would be pretty miserable and perpetually repressed. Indeed, considering humans are social creatures, I'd say the repression and psychological damage would dwarf anything ever caused by even the most puritanical sexual norms.
See faul_sname's comment below; in this discussion "discrimination" should really be tabooed and replaced with "prejudice based on weak prior evidence without any personal contact".
“Discrimination” usually just means “applying statistical knowledge about the group to individuals in the group” and is a no-no in our society. If you examine it too closely, it stops making sense, but it is useful in a society where the “statistical knowledge” is easily faked or misinterpreted.
The problem is that one of the only ways to prove someone is indeed using statistical knowledge, in the handful of cases where we have forbidden it, is to analyse their patterns of behaviour—basically, to look at the recorded statistics of their interactions. Both the records and the results of such an analysis can be easily faked and misinterpreted.
Which means that if the forbidden statistical knowledge is indeed useful and reliable enough to be economical to use, and someone else is very, very serious about preventing it from being used, the knowledge will both be employed in a clandestine way and most of the economic gains from it will be eaten up by the cost of avoiding detection. This leads to a net loss of wealth.
Say a for-profit company spends 90% of the gains from forbidden knowledge on avoiding detection, and the government spends a third or half of that amount to monitor the company. The company would be indirectly paying for government monitoring regardless of whether it used the knowledge or not. It is therefore irrational for the company not to use that particular set of forbidden statistical knowledge in such a situation.
By the way, to get the full suckiness hidden in the bland phrase "net loss of wealth", most people need some aid to fix their intuitions. Converting "wealth" to happy productive years, or to dead-child currency, sometimes works.
(nods) That certainly simplifies the task of comparing it to the loss of happy productive years and/or the increase in dead children that sometimes follows from the bland phrase "using forbidden statistical knowledge."
Once we convert everything to Expected Number of Happy Productive Years (for example), it's easier to ask whether we'd prefer system A, in which Sum(ENoHPY) = N1 and Standard Deviation(ENoHPY) = N2, or system B, where Sum(ENoHPY) = N1 - X and Standard Deviation(ENoHPY) = N2 - Y.
That is kind of the point of being a utilitarian. And remembering to consider opportunity cost, let alone estimate it, is often the hard part when it comes to policy.
I read an interesting article on the legal side of this in the USA; annoyingly, despite being sure I'd saved it, I can't find it.
There are two problems: statistical knowledge being easily faked or misinterpreted and life being a multiple-repetition game.
It is hard to apply the knowledge "many X are Y, and that is bad" when X is easier to check than Y, in such a way as to not diminish the return on investment for the X who work hard to not be Y. The same goes for the positive case: if you think that MBA programs teach something useful, and thus think "many MBAs have learnt useful things from the MBA program", then getting into the program and not learning starts making sense. And we have that effect!
http://www.freakonomics.com/2011/10/12/why-do-only-top-mba-programs-practice-grade-non-disclosure/
But don’t people talking about discrimination often claim that the statistical trends aren’t there?
Yes. For instance, the proportion of black Americans who use illegal drugs is well below the proportion of white Americans who do; however, black Americans are heavily overrepresented in illegal drugs arrests, convictions, and prison sentences. The arrest rates indicate that the law-enforcement system “believes” that black Americans use illegal drugs more — a statistical trend which isn’t there.
Another way of thinking about these issues, rather than talking about "discrimination against [group]", is "privilege held by [group]". This can describe the same thing, but in terms which can cast a different (and sometimes useful) light on it.
For instance, one could say "[Group X] people are harassed by police when they hang out in public parks." However, this could be taken as raising the question of what those people are doing in those parks to attract police attention — which would be privileging the hypothesis (no pun intended). Another way of describing the same situation, without privileging the hypothesis, is "[Group Y] people get to hang out in public parks without the police taking interest."
Where does the data about the actual proportion come from, since it can’t be the legal system’s data?
Having re-checked the above from, e.g. the National Survey on Drug Use and Health, done by the Department of Health & Human Services, I retract the claim that black Americans use drugs less than white Americans.
Rather, it appears to be the case that white Americans are well overrepresented in lifetime illegal drugs use, but black Americans are slightly overrepresented in current illegal drugs use; which is what would feed into arrests — after all, you don’t get arrested for snorting coke two decades ago. The white:black ratio in the population as a whole is 5.7, according to the Census. In lifetime illegal drugs use, 6.6; in last-month illegal drugs users, 5.1.
However, from the Census data on arrests, the white:black ratio in illegal drugs arrests is 1.9. Now, this doesn’t break down by severity of alleged offenses, e.g. possession vs. dealing; or quantities; or aggravating factors such as school zones.
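For what it's worth, the over/under-representation claims above come down to dividing each domain's white:black ratio by the population baseline. A minimal sketch of that arithmetic, using only the ratios quoted above (not re-checked against the original survey and Census tables):

```python
# Sketch: reading over/under-representation from white:black ratios.
# The numbers are the ones quoted in the parent comment; treat them as
# illustrative, not authoritative.
population_ratio = 5.7      # white:black ratio in the general population (Census)
ratios = {
    "lifetime illegal drug use": 6.6,
    "last-month illegal drug use": 5.1,
    "illegal drug arrests": 1.9,
}

for domain, ratio in ratios.items():
    # >1 means whites are over-represented relative to the population baseline;
    # <1 means blacks are over-represented.
    relative = ratio / population_ratio
    print(f"{domain}: white:black = {ratio} -> relative to population = {relative:.2f}")
```

By this reading, whites come out somewhat over-represented in lifetime use (6.6/5.7 ≈ 1.16), blacks slightly over-represented in current use (5.1/5.7 ≈ 0.89), and heavily over-represented in arrests (1.9/5.7 ≈ 0.33).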
Sorry, I don’t understand that. Does it simply mean that white people in general as seen here used to do more drugs some years/decades ago, but now their proportion dropped below that of blacks?
Maybe but not necessarily. It would be consistent with, for instance, there being proportionally more white people who tried illegal drugs once and didn’t continue using.
Illegal drugs are an interesting place to try some Bayescraft.
In fact your interpretation is wrong. It is not that the law-enforcement system "believes" that blacks use more. It is that blacks are more often dealers, and it is easier to get a conviction or plea bargain as a user than as a dealer, since the latter requires intent as well as possession and will be fought harder because of higher penalties.
I suspect that blacks are not over-represented as drug dealers. Rather, blacks live in urban areas, which can be policed at lower cost than rural areas for population density reasons.
Hell, that seems to be an understatement to me. There’s a particular reason that racial discrimination is by far the most taboo and reviled form of it, beyond the memory of Nazism; real current political groups—that are very nasty—are always hoping for the chance to pounce on the issue once they’re allowed to get close to it.
The practice in the US of alerting people in the neighbourhood to the presence of convicted child molesters (or was it rapists? I don't remember) seems to indicate that at least some people think that it's a great idea. I think that as we get better at testing people for sociopathy, we're likely to move towards certain types of legal discrimination against them too.
None of this affects my personal ideas of equality though. I would prefer not to be friends with an evil megalomaniac in the same way that I would prefer not to be friends with a drug addict, but if I met an interesting person and then discovered that they were an evil megalomaniacal drug addict I wouldn’t necessarily cut them out of my life, either.
As vague context, the whole area of equality and discrimination is something that nags at me as not making enough sense. I hope with enough pondering to come up with a clear view on things, but it's failing so far.
What are some efficient ways to signal intelligence? Earning an advanced degree from a selective university seems rather cost intensive.
I figured someone would have said this by now, and it seems obvious to me, but I’m going to keep in mind the general principle that what seems obvious to me may not be obvious to others.
You said efficient ways to signal intelligence. Any signaling worth its salt is going to have costs, and the magnitude of these costs may matter less than their direction. So one way to signal intelligence is to act awkwardly, make obscure references, etc.; in other words, look nerdy. You optimize for seeming smart at the cost of signaling poor social skills.
Some less costly ones that vary intensely by region, situation, personality of those around you, and lots and lots of things, with intended signal in parentheses:
Talk very little. Bonus: reduces potential opportunities for accidentally saying stupid things. (People who speak only to convey information are smarter than people for whom talking is its own purpose.)
Talk quickly.
Quote famous people all the time. (He quotes people; therefore he is well-read; therefore he is intelligent.)
In general, do things quickly. Eating, walking, reacting to fire alarms. (Smart people have less time for sitting around.)
During conversations, make fun of beliefs that you mutually do not hold. Being clever about it is better, but I don’t know how to learn cleverness. If you already have it, good. (He is part of my tribe and one of my allies. Therefore, because of the affect heuristic, he must be smart as well.)
Learn a little bit of linguistics.
Tutor people in things. (You have to be smart to teach other people things.)
It was not intentional that all of these related to conversation. Maybe that’s not a coincidence and I’ve been unconsciously optimizing for seeming smart my entire life.
Definitely this. Tutoring is a very strong signal of intelligence, but is really a matter of learned technique. I was able to tutor effectively in Statistics before I had taken any classes or fully understood the material by using tutoring techniques I had learned by teaching other subjects (notably Physics). The most common question I found myself asking was “what rule do we apply in situations like this,” a question you do not actually need to know the subject material to ask.
I’d be interested if you were to expand on this.
I’m not the OP of that comment, but as a linguistics student I can corroborate. I think there are a couple of reasons that occasionally throwing a relevant piece of linguistic information into a conversation can produce the smartness impression. Firstly, conversations never fail to involve language, so opportunities to comment on language are practically constant if you’re attuned to noticing interesting bits and pieces. This means that even occasional relevant comments mean you’re saying something interesting and relevant quite frequently. This is an advantage that linguistics has over, say, marine biology. Secondly, I have the impression that most people are vaguely interested in language and under the equally vague impression that they know just how it works—after all, they use it all the time, right? So even imparting a mundane little piece of extremely basic linguistics can create the impression that you’re delivering serious cutting-edge expert-level stuff: after all, your listener didn’t know that, and yet they obviously know a pretty decent amount about language!
It has worked for me. People are impressed when I point out their own sentence structure, things like how many phonemes are in the word “she”, etc. I don’t know if this also helps signal intelligence, but I also rarely get confused by things people say. Instead of saying, “What?” I say “Oh, I get it. You’re trying to say X even though you actually said Y.”
Also, I guess it seems like a subject only smart people are interested in. And not even most of them. Guess I got lucky in that regard.
It, of course, depends who you’re signalling to. These sound to me like ways of signalling that you are intelligent to the unintelligent. (If that. They’re good possibilities but I’m skeptical of about half of them.)
I perhaps should work on this one. It might improve my signal/noise ratio.
Your list is quite wisely written.
In a Dark-Arts-y way, glasses?
(A brief search indicates there are several studies suggesting that wearing glasses increases perceived intelligence (e.g. this and this (paywall)), but there are also some suggesting that it has no effect (e.g. this (abstract only)))
There definitely exists a stereotype that people that wear glasses are more intelligent. The cause of this common stereotype is probably that people that wear glasses are more intelligent.
But what’s the purported effect size?
Here’s a few suggestions, some sillier than others, in no particular order:
Join organizations like Mensa
Look good
Associate yourself with games and activities that are usually clustered with intelligence, e.g. chess, Go, etc.
If your particular field has certifications you can get instead of a degree, these may be more cost-effective
Speak eloquently, use non-standard cached thoughts where appropriate; be contrarian (but not too much)
Learn other languages—doing so not only makes you more employable, it can be a big status boost
Much depends on the audience one is signalling to.
To stupid or average people, this is a signal of intelligence. To other intelligent people, my impression is that Mensa membership mostly distinguishes the subset of “intelligent and pompous about it” from the larger set of “intelligent people”.
Again this works as a signal to people who are at a remove from these activities, because the average player is smarter than the average human. People who themselves actually play, however, will have encountered many people who happen to be good at certain specific things that lend themselves to abstract strategy games, but are otherwise rather dim.
Agree with this one. It’s especially useful because it has the opposite sorting effect of the previous two. Other intelligent people will pick up on it as a sign of intelligence. Conspicuously unintelligent people will fail to get it.
This one seems like it might vary by geography. It’s a lot less of a distinction for a European than an American. In the US, the status signal from “speaks English and Spanish” is different from the status signal from “speaks English and some language other than Spanish”.
My experience seems to support this. The desire to signal intelligence is often so strong that it eliminates much of the benefit gained from high intelligence. It is almost impossible to have a serious discussion about something, because people habitually disagree just to signal higher intelligence, and immediately jump to topics that are better for signalling. Rationality and mathematics are boring; conspiracy theories are welcome. And of course, Einstein was wrong; an extraordinarily intelligent person can see obvious flaws in the theory of relativity, even if they don't know anything about physics.
Mensa membership will not impress people who want to become stronger and have some experience with Mensa. Many interesting people take the Mensa entry test, come to the first Mensa meeting… and then run away.
My experience with Mensa was similar to yours. I joined, read a couple of issues of their magazine without having time to go to a meeting, and realized that if the meetings were like the magazine they weren't worth the time. There was far less original thought in Mensa than I had expected.
Saying this about Mensa is a much better way to signal intelligence to other intelligent people than actually being a Mensa member.
Well, it’s worth being a little careful here. Saying dismissive things about an outgroup is an effective way to present myself as a higher-status member of the ingroup; that works as well for “us intelligent people” and “those Mensa dweebs” as any other ingroup/outgroup pairing. Which makes it hard to tell whether I’m really signalling intelligence at all.
Yes, and I knew that when I said it. But it’s also true.
Right now my question is: Is abandoning Mensa the most useful thing, or can it be used to increase rationality somehow?
Seems to me that the selection process in Mensa has two steps. First, one must decide to take a Mensa entry test. Second, one must decide to be a Mensa member, despite seeing that Mensa is only good for signalling—this is sometimes not so obvious to a non-member. For example, when I was 15 I imagined that Mensa would be something like… I guess like I now imagine the LW meetups. I expected to find people there who are trying to win, not only to signal intelligence to other members.
So I conclude that people who pass the first filter are better material than people who pass both filters. A good strategy could be this: Start a local rationalist group. Become a member of Mensa, so you know when Mensa does tests. Prepare a flyer describing your rationalist group and give it to everyone that completes the Mensa test—they will probably come to the first following Mensa meeting, but many of them will not appear again.
This is what I want to do, when I overcome my laziness. Also I will give a talk in Mensa about rationality and LW, though (judging by reactions on our facebook group) most members will not be really interested.
The best ways to signal intelligence are to write, say, or do something impressive. The details depend on the target audience. If you’re trying to impress employers, do something hard and worthwhile, or write something good and get it published. If you’re a techie and trying to impress techies, writing neat software (or adding useful features to existing software) is a way to go.
If you are asking about signalling intelligence in social situations, I suggest reading interesting books and thinking about them. Often, people use "does this person read serious books and think about them" as a filter for smarts.
Do something prohibitively difficult that not a lot of people are competent enough to do.
Of course, make sure it’s something people “know” is hard, like rocket science.
I have to admit, I’m mystified as to why this one got downvoted.
Likely because it could be read as a sarcastic remark resolving to "become intelligent for real, and you wouldn't need to fake anything, you lazy cheating bastard". I wouldn't have downvoted for that, but such a reading had indeed occurred to me at first, before I remembered that I'm at a website of a better sort.
I guess there are two different questions: signalling intelligence to top-intelligence people and signalling intelligence to people above average and higher.
In the first case, it is a good plan. In the second case, you would fail.
Upvoted. I am also confused.
Be interested in lots of things that other people might not find interesting. I think it's the way that I personally signal intelligence the most. For example, if someone has a herpolhode on their desk, I try to ask intelligent questions about it. Or if the rain on the window is dripping in nice straight lines because of the screen occasionally pressing against the glass, notice that.
I have a different perspective on this compared with other commenters… Intelligence is very hard to fake.
What’s the best way to signal guitar playing skills? Play the guitar, and play it well!
The efficient way to signal intelligence is: to do worthwhile things, intelligently!
How can you tell if someone is doing things intelligently?
Fair question, but difficult to answer in brief, I might try to do this later. For now let me answer with a couple of questions:
How can you tell if someone is playing a guitar well?
In general, can YOU tell the difference between someone doing things intelligently, and doing things unintelligently?
a) Listen to them playing.
b) Do they have concerts, CDs, fans, other symbols of “being a successful guitar player”? Do they write blogs or books about guitar playing? Do people write guitar-playing-related blogs and books about them?
The second option is less reliable and easier to fake, but it is an option that even a deaf person can use.
Speaking as a guitar and piano player: I can do things on guitar and piano that are fairly easy, but look very impressive to someone who doesn't play the instrument. You actually need to play an instrument before you can judge accurately how good someone is.
(Obviously, it’s pretty obvious if someone is distinctly bad. But distinguishing different levels of “good” is hard.)
First question: A good guitar player keeps a steady rhythm and hits the appropriate notes with appropriate volume and tone. At a higher level, they improvise in a way that sounds good. Sounding good seems to involve sticking to a standard scale with only a few deviations, and varying the rhythms. At the level above that, I really don't know.
Second question: I really don’t know, at least that generally. I think I may use proxies such as the ability to find novel (good) solutions to problems and draw on multiple domains, then aggregate them into one linear value that I call “intelligence”. I am probably also influenced by the person’s attractiveness and how close their solution is to the one I would have proposed. I would definitely like your take on this as well.
Why are you asking?
Depending on the selective university, an advanced degree might not cost much at all. Harvard, for example, only recently started paying the way of its undergraduates, but it has paid the way of its graduate students for a long time.
True, but free tuition or not, it's plenty costly in terms of opportunity cost.
(This is true to an almost hilarious extent if you’re a humanities scholar like me: I’m not getting those ten (!!!!!!!) years of my life back.)
Is that the reason for "grouchy" musicologist?
Haha, no. I’m only grouchy because people occasionally say ill-informed things about musicology. Other than that, I really like my job and my chosen field. I rarely think I’d be much happier if I had chosen to pursue some lucrative but non-musicological career.
What’s it like being a musicologist? What do you spend your days doing?
How many instruments do you play?
What’s better out of Mozart’s Jupiter Symphony and Holst’s Jupiter movement?
Well, I wrote a bit about what musicologists do here. In terms of research areas, I myself am the score-analyzing type of musicologist, so I spend my days analyzing music and writing about my findings. I’m an academic, so teaching is ordinarily a large part of what I do, although this year I have a fellowship that lets me do research full-time. Pseudonymity prevents me from saying more in public about what I research, although I could go into it by PM if you are really interested.
I am (well, was—I don’t play much any more) what I once described as a “low professional-level [classical] pianist.” That is, I play classical piano really well by most standards, but would never have gotten famous. At a much lower level, I can also play jazz piano and Baroque harpsichord. I never learned to play organ, and never learned any non-keyboard instruments. Among professional musicologists, I’m pretty much average for both number of instruments I can play and level of skill.
As to pieces about Jupiter, I can only offer you my personal opinion—being a musicologist doesn’t make my musical preferences more valid than yours. Both pieces are great, and I had a special fondness for the Holst when I was a kid (I heard it in a concert hall when I was about 11, and spent the whole 40 minutes grinning hard enough I should have burst a blood vessel). But I’ll take the Jupiter Symphony without the slightest hesitation. Here you have one of the greatest works of one of the tiny handful of greatest composers ever, versus an excellent piece by a one-hit wonder among classical composers.
Really, though, I don’t much like picking favorites among pieces of music, and always want to preface my answers with “Thank goodness I don’t really have to choose!”
In Marcus Hutter’s list of open problems relating to AIXI at hutter1.net/ai/aixiopen.pdf (this is not a link because markdown is behaving strangely), problems 4g and 5i ask what Solomonoff induction and AIXI would do when their environment contains random noise and whether they could still make correct predictions/decisions.
What is this asking that isn’t already known? Why doesn’t the theorem on the bottom of page 24 of this AIXI paper constitute a solution?
I’ve been incubating some thoughts for a while and can’t seem to straighten them out enough to make a solid discussion post, much less a front page article. I’ll try to put them down here as succinctly as possible. I suspect that I have some biases and blindspots, and I invite constructive criticism. In other cases, I think my priors are simply different than the LW average, because of my life experiences.
Probably because of how I was raised, I’ve always held the opinion that the path to world-saving should follow the following general steps: 1) Obtain a huge amount of personal wealth. 2) Create and/or fund the types of organizations that you believe are likely to save the world.
Other pathways feel (to me) like attempts to be too clever. I admit a likely personal bias here, but it looks like it should be easier to become wealthy by any available means than it is to singlehandedly solve all the world’s important problems. If you do not agree with this assessment, I humbly suggest that perhaps you haven’t thought long enough about how easy it might actually be to become ultra-rich if you actually set out with that goal in mind. I think that generally speaking very few people are actually trying to become wealthy; most people just try to match their parents’ socioeconomic tier and then stop.
Might it not be even more effective to convince others to become ultra-rich and fund the organizations you want to fund? (Actually, this doesn’t seem too far off the mark from what SIAI is doing).
I agree completely. I stopped myself short of saying this in my first post because I wanted to keep it succinct. I would go a bit further to suggest that SIAI could be doing more than merely convincing people to take this path. For example, providing trustworthy young rationalists with a financial safety net in order to permit them to take more risks. (One tentative observation I’ve made is that nobody becomes wealthy without taking risk. The “self-made” wealthy tend to be risk-loving.)
This is likely worth doing, but I am fairly sure that LWers are for the most part not wealthy enough to create this financial safety net. This seems like a concept that is worth a discussion post: what would LWers do if they had a financial safety net?
Any arguments that legitimately push you towards that conclusion should be easily convertible into actual advice about how to become ultra-rich. I think you’re underestimating the difficulty of turning vague good-sounding ideas into effective action.
I think there’s plenty of available advice on how to become ultra-rich. Just look at the Business section of any bookstore. The problem is that this advice typically takes you from a 0.001% chance of becoming ultra-rich, through sheer lucky accident or lottery, to a 0.1% chance, through strategy and calculated risks.
I’m not arguing that it’s not really hard and really improbable. However, folks tend to assess P(becoming wealthy by any means) ~ P(winning the lottery).
What’s ultra-rich? This claim isn’t saying much unless you quantify it.
Intuitively, I find both your claims—that most people only try to match their parents’ tier, and that it’s easy to become ultra-rich if you focus on it—to be wrong, but it’d be interesting to see more arguments or evidence in their favor.
I don’t know, a billion dollars?
A quick Googling turns up a few papers which suggest that parental expectations largely define a child’s level of educational and financial achievement. On a more intuitive level, I can only point out that the clear majority of Americans either don’t go to college because their financial ambitions are satisfied by blue collar work, or they go to college in pursuit of a degree with a clear Middle Class career path attached to it. Do you know anybody whose stated goal is to be wealthy, rather than to be a doctor or an engineer or some specific career? I don’t.
Personally, I figure I’m not intelligent enough to research hard problems and I lack the social skills to be an activist, so by process of elimination the best path open to me for doing some serious good is making some serious money. Admittedly, some serious student loan debt also pushes me in this direction!
Doesn’t becoming very wealthy for the purpose of saving the world (and then actually saving the world) count as singlehandedly solving all the problems?
What I was getting at is that the cognitive effort required to actually solve a Millennium problem may be greater than the cognitive effort of making a billion dollars and hiring a thousand mathematicians to work in Millennium problems.
Who’s counting?
Is this a joke? (Serious question, I can’t tell. FWIW, I was using “count” as “fit the definition of”.)
Partly, but not entirely. I noticed that I was asking myself seriously if that counted, then wondered why it mattered if it fit the definition.
Wow, 66 comments in 1 day. It looks like the idea of having a mid-month open thread was a good one.
Seems like an indication that a third tier of posts, possibly karma-free, might be a good idea. Something like Stupid Questions, or Beginner’s Corner, or Sandbox, or...
I’ve been sporadically trying to get something like this done for AGES. There was even a forum made, but without official endorsement it got like 5 members and died within days.
If you were to offer a tested contrib to the LW code base, Trike might agree to add it on a trial basis, provided EY&Co approve. Not sure what their policies are.
No idea how to do that, and won't have for the foreseeable future… I just don't have the attention span for coding or hacking any more, for medical reasons.
Stephen Law on his new book, Believing Bullshit:
Something has been bothering me about Newcomb’s problem, and I recently figured out what it is.
It seems to simultaneously postulate that backwards causality is impossible and that you have repeatedly observed backwards causality. If we allow your present decision to affect the past, the problem disappears, and you pick the million dollar box.
In real life, we have a strong expectation that the future can’t affect the past, but in the Newcomb problem we have pretty good evidence that it can.
Short answer: Yup. Because Omega is a perfect or near-perfect predictor, your decision is logically antecedent, but not chronologically antecedent, to Omega’s decision. People like Michael Vassar, Vladimir Nesov, and Will Newsome think and talk about this sort of thing more often than the average lesswronger.
You probably know this, but just in case:
In Newcomb’s problem Omega predicts prior to you choosing. Omega is just really good at this. The chooser doesn’t repeatedly observe backwards causality, even if they might be justified in thinking they did.
How is that observably different from backwards causality existing? Perhaps we need to taboo the word “cause”.
It seems very intuitive to me that being very good at predicting someone’s decision (probably by something like simulating the decision-process) is conceptually different from time travel. Plus, I don’t think Newcomb’s problem is an interesting decision-theory question if Omega is simply traveling (or sending information) backward in time.
This is intuitive to me as well, but I suspect that it is also wrong. What is the difference between sending information from the future of a simulated universe to the present of this universe and sending information back in the ‘same’ universe if the simulation is identical to the ‘real’ universe?
Aside from the fact that the state of the art in science suggests that one (prediction) is possible and the other (time travel) is impossible?
But I think the more important issue is that assigning time-travel powers to Omega makes the problem much less interesting. It is essentially fighting the hypothetical, because the thought experiment is intended to shed some light on the concept of “pre-commitment.” Pre-commitment is not particularly interesting if Omega can time-travel. In short, changing the topic of conversation, but not admitting you are changing the topic, is perceived as rude.
Newcomb’s problem doesn’t lose much of its edge if you allow Omega not to be a perfect predictor (say, it is right 95% of the time). This is surely possible without a detailed simulation that might be confused with backwards causation.
In the standard formulation (a perfect predictor) one-boxers always end up winning and two-boxers always end up losing, so there is no issue with causality, except in the mind of a confused philosopher.
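To make the 95% version concrete, here is a quick expected-value check (a sketch only, assuming the predictor is right 95% of the time regardless of which option you actually take, with the usual $1,000,000 and $1,000 payoffs):

```python
# Expected value of each strategy against a 95%-accurate predictor (a sketch;
# assumes the accuracy is the same whichever way you actually choose).
p = 0.95
big, small = 1_000_000, 1_000

ev_one_box = p * big + (1 - p) * 0                  # correct prediction -> box B is full
ev_two_box = p * small + (1 - p) * (big + small)    # correct prediction -> box B is empty

print(ev_one_box)  # ~950,000
print(ev_two_box)  # ~51,000
```

So the one-boxer's expected payoff still dwarfs the two-boxer's, without any hint of backwards causation.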
How did Less Wrong get its name?
I have two separate guesses; they are not mutually exclusive, but they do not depend on each other:
It was Michael Vassar’s idea. He is my best guess for who came up with the name.
It was inspired by this essay. This is my best guess for what inspired the name.
I don’t know if either of these is true, or both, or whatever. I want to know the real answer.
Searching this site and Google has been useless so far.
EY polled Overcoming Bias readers on their favorite from a list of several options, and “Less Wrong” was the overwhelming winner. Not sure how the options were generated.
Source?
Memory.
I remember Eliezer’s post announcing LW. He didn’t give any explanation of why it was called that, he just said “tentatively titled Less Wrong.”
I’d be interested in hearing the answer to this. I suspect it was just a cool name that Eliezer came up with.
An unusual answer to Newcomb’s problem:
I asked a friend recently what he would do if he encountered Newcomb's problem. Instead of giving either of the standard answers, he immediately attempted to create a paradoxical outcome and, as far as I can tell, succeeded. He claims that he would look inside the possibly-a-million-dollars box and do the following: if the box contains a million dollars, take both boxes. If the box contains nothing, take only that box (the empty one).
What would Omega do if he predicted this behavior or is this somehow not allowed in the problem setup?
Not allowed. You get to look into the second box only after you have chosen. And even if both boxes were transparent, the paradox is easily fixed. Omega shouldn't predict what you will do (because that would be assuming you will ignore the content of the second box, and Omega isn't stupid like that) but what you would do if box B contained a million dollars. Then it would correctly predict that your friend would two-box in that situation, so it wouldn't put the million dollars into the second box, and your friend would take only the empty box according to his strategy. So yeah.
That’s a nice simple way to reword it. Thanks.
There actually is a variant where you’re allowed to look into the boxes—Newcomb’s problem with transparent boxes.
And yes, it is undefined if you apply the same rules. However, there are two ways to re-define it.
1: Reduce the scope of the inputs. For example, Omega could operate on the following program: “If the contestant would take only one box when the million dollars is there, put the million dollars there.” Before, Omega was looking at both situations, and now it’s only looking at one.
2: Increase the scope of the program. There are two possible responses in two possible situations for a total of four inputs, so you just need to define Omega’s response for all four. It’s interesting that Omega now treats you differently depending on your thoughts, not just depending on which box you take, so this changes the genre of the problem.
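Here is a minimal sketch of the transparent-boxes setup with Omega following the option-1 rule above; the policy names and helper function are invented purely for the illustration:

```python
# Sketch of the transparent-boxes variant, with Omega following the option-1 rule:
# "fill box B iff the contestant would one-box when box B is full."
# An agent policy maps what it sees in box B ("full"/"empty") to a choice.

policies = {
    "always one-box":   lambda seen: "one-box",
    "always two-box":   lambda seen: "two-box",
    "friend's paradox": lambda seen: "two-box" if seen == "full" else "one-box",
    "anti-paradox":     lambda seen: "one-box" if seen == "full" else "two-box",
}

def play(policy):
    filled = policy("full") == "one-box"            # Omega's option-1 prediction
    choice = policy("full" if filled else "empty")  # what the agent then actually does
    payoff = (1_000_000 if filled else 0) + (1_000 if choice == "two-box" else 0)
    return filled, choice, payoff

for name, policy in policies.items():
    print(name, play(policy))
# The "friend's paradox" policy ends up facing an empty box and one-boxing it: payoff 0.
```

This matches the resolution above: the attempted paradox simply leaves the friend with nothing.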
So I was reading a book in the Ender’s Game series, and at one point it talks about the idea of sacrificing a human colony for the sake of another species. It got me thinking about the following question. Is it rational to protect 20 “piggies” (which are morally equivalent to humans) and sacrifice 100 humans if the 20 piggies constitute 100% of their species’ population and the humans represent a very very small fraction of the human race. At first, it seemed obvious that it’s right to save the “piggies,” but now I’m not so sure. Having tried to think of why saving them is right (for a few minutes), all I came up with was that diversifying investments in intelligent life makes intelligent life safer from extinction. But is diversity of life inherently valuable? What makes a future with “piggies” and humans better than one with just one or the other?
While writing this, I noticed one other reason: the valuable information that the “piggies” have. If this is eliminated, is it still worth saving them? And how many human lives can the “good of diversity” and the “loss of information” overcome? These are basically rhetorical questions (i.e. I’m not looking for answers like “53,243 humans per ‘piggy’”), so I’m really just looking for your thoughts on this issue.
Depends on your goal… If it is the survival of the human colony, then no. If it is the survival of the human race an the piggies hold a key to it, then yes (they do not, in this story). If it is the survival of the pequenino race, then yes. It does not make sense to ask which of the goals is rational, unless you can measure them against something else.
Right. Let's say that you just value "intelligent life," though, rather than the humans or pequeninos in particular. Say you're the hive queen. A piggy is equal to a human, and the piggy race is equal to the human race.
(I worry that I'm still missing the point and the question is moot without first resolving whether you value "diversity" in its own right or not, and that such valuing is a preference independent of rational decision making. Still, I feel as if some preferences can be irrational.)
Does anyone know how one would go about suggesting a new feature for predictionbook.com? I think it would be better if you could tag predictions so that then you could see separate ratings for predictions in different domains. Like, “Oh look, my predictions of 100% certainty about HPMOR are correct 90% of the time but my predictions of 100% certainty about politics are right 70% of the time.” Also, you could look at recent predictions for only a specific topic, or see how well calibrated another user is in a specific area.
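For what it's worth, the bookkeeping behind per-tag calibration is straightforward; here is a rough sketch of the kind of summary such a feature might produce (the data and structures are invented for illustration, not PredictionBook's actual schema):

```python
# Sketch of per-tag calibration: group predictions by tag and confidence level,
# then compare the claimed confidence to the observed hit rate.
# The records below are made up purely to illustrate the idea.
from collections import defaultdict

predictions = [
    # (tag, stated probability, came true?)
    ("hpmor",    1.0, True),
    ("hpmor",    1.0, True),
    ("politics", 1.0, False),
    ("politics", 1.0, True),
]

buckets = defaultdict(lambda: [0, 0])   # (tag, prob) -> [hits, total]
for tag, prob, outcome in predictions:
    buckets[(tag, prob)][0] += outcome
    buckets[(tag, prob)][1] += 1

for (tag, prob), (hits, total) in sorted(buckets.items()):
    print(f"{tag} @ {prob:.0%}: {hits}/{total} correct ({hits / total:.0%})")
```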
http://github.com/tricycle/predictionbook/issues
As Anubhav pointed out, PB is not important to Trike since it’s orders of magnitude less popular than LW (as useful as I may find it). If you really want tagging for per-domain calibration, you either need to get your hands dirty or put up a bounty.
PB has a severe manpower shortage. New features not coming any time soon, AFAICT.
Moore’s Law Won’t Fade for Business Reasons
Some writers have claimed that excess computing power will reduce the effort put into designing new and more powerful chips. Even when most users can’t make use of the additional power, fear of losing out to the competition will keep designers pushing. Eventually, it will become too expensive to keep developing the new technology, but we are a lot further from those limits.
This sounds like Marx’s “overproduction” thesis: competition drives producers to make more and more regardless of demand. Generally, that sort of thing hasn’t happened.
Specifically in the computer processing market: really, only gamers and datacenters buy the fastest available general-purpose processors. Other folks buy computers with an eye to convenience, portability, appearance, battery life, etc. rather than raw processing power.
Both home and datacenter markets seem to be shifting away from raw power and towards energy efficiency (i.e. maximizing computing power per watt) which increases battery life and decreases datacenter costs. This might actually end up propping up Moore’s law anyway, as the more efficient transistors get, the more of them can be put on the same chip without overheating.
This will bottom out too, eventually, when a battery charge lasts longer than the device itself, or datacenter power and cooling costs become negligible.
Depressing article opposing life extension research is depressing. Brief summary: In the least convenient possible world, human research trials would be unethically exploitative. And this is presented as an argument against attempting to end aging.
ZOMG, vaccines are part of the transhumanist agenda!! They are therefore unnatural and evil.
Spotted on Respectful Insolence.
I've found a video that would be really cool if it were true, but I don't know how to judge its truth and it sounds ridiculous. This talk by Rob Bryanton deals with higher spatial dimensions, and suggests that different Everett branches are separated in the fifth dimension, universes with different physical laws are separated in the sixth dimension, etc. I can't find much info about the creator online, but one site accuses him of being a crank. Can somebody who knows something about physics tell me if there is any grain of truth to this possibility?
That reminds me of Tegmark’s multi-level classification of multiverses, but that classification doesn’t make sense as a spatial set of dimensions, IIRC.
In what ways do Frequentists and Bayesians disagree?
For a Bayesian a random quantity is just an unknown one. For example a coin not yet flipped is random (because I don’t know which way it will land), and so is the population of Colorado (because I don’t know what it is). Frequentists treat randomness as an inherent property of things, so that the coin flip would still be random (because it’s not predetermined) but the population of Colorado isn’t (because it’s already fixed).
So given the problem of estimating the population of Colorado, a Bayesian would just hand you back a probability distribution (i.e. tell you how probable each population was). This option wouldn’t be available to the Frequentist, who would refuse to put a probability distribution on a variable that wasn’t random. Instead the Frequentist would give you an estimate and then tell you that the algorithm that generated the estimate had desirable properties, like being “unbiased”.
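A toy illustration of that difference (a sketch only; the counts are made up, and the binomial setup stands in for any fixed-but-unknown quantity): the Bayesian reports a whole distribution over the unknown proportion, while the frequentist reports a point estimate plus properties of the estimating procedure.

```python
# Sketch: estimating a fixed but unknown proportion from n yes/no observations.
from scipy import stats

successes, n = 37, 100   # invented data

# Bayesian: with a uniform Beta(1, 1) prior, the posterior over the proportion
# is Beta(1 + successes, 1 + failures) -- a full distribution you can hand back.
posterior = stats.beta(1 + successes, 1 + n - successes)
print("posterior mean:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))

# Frequentist: the sample proportion is an unbiased estimator; report it along
# with a 95% confidence interval (a property of the procedure, not of the quantity).
p_hat = successes / n
se = (p_hat * (1 - p_hat) / n) ** 0.5
print("point estimate:", p_hat)
print("95% confidence interval:", (p_hat - 1.96 * se, p_hat + 1.96 * se))
```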
I am interested in guidance on coping with a loved one's irrationality.
I wish it to be known that the next person to sign on as a beta for my fiction is entitled to the designation “pi”.
I’d be happy to do so. I’m halfway through Summons at the moment, but will probably finish that today or tomorrow.
Though I’ve also never beta’d before, I’m up to date on Elcenia and would be happy to try.
If you want me to do so, just shoot me a PM. It’d also probably be a good idea to let me know what kind of feedback you’re looking for.
I recently discovered, and devoured, Luminosity. Thank you for contributing to the “rationalist fiction” genre!
I haven’t started Elcenia yet, but if/when I get caught up, I’ll let you know. I’ve never beta-d before, but I’d be happy to try!
What value is there in being this “pi”? Also, what’s this fiction?
(PS. Tau is the one true circle constant)
Pi is a popular Greek letter. In the past this was the fiction, which makes me consider it potentially relevant here (fan density) but lately it is this instead. I’ll designate a taubeta after acquiring a pibeta, rhobeta, and sigmabeta.
“My priors are different than yours, and under them my posterior belief is justified. There is no belief that can be said to be irrational regardless of priors, and my belief is rational under mine,”
“I pattern matched what you said rather than either apply the principle of charity or estimate the chances of your not having an opinion marking you as ignorant, unreasoning, and/or innately evil,”
“Wot evah! I [believe] what I want!”
Question regarding the quantum physics sequence:
This article tells me that the amplitude for a photon leaving a half-mirror in each of the two directions is 1 and i (for going straight and taking a turn, respectively), given an amplitude of 1 for a photon reaching the half-mirror. This must be a simplification; otherwise, two half-mirrors in a line would give an amplitude of i for the photon turning at the first mirror, an amplitude of i for the photon turning at the second mirror, and an amplitude of 1 for the photon passing through both. This means that the squared-modulus ratio is 1:1:1 and all events are equally likely, and hence the existence of the second (possibly very distant) half-mirror reduces the amount of light leaving the first half-mirror to 1/3 from 1/2 of the intensity. I would be shocked to find that such a result is reality, since it would, among other things, allow transmission of information faster than the speed of light.
Okay, so the obvious fix is to say that Eliezer simplified things and the real rule is that there is a factor of 1/sqrt(2) on each amplitude. Then the squared-modulus ratio in the above example is 1/2:1/4:1/4, as expected.
But then I run into my second problem: suppose that there is a photon headed at a half-mirror. Turning at the half-mirror leads to a detector. Going straight leads to a set of four mirrors which brings the photon back to the starting point. This introduces a loop into the system. What is the amplitude of the light reaching the detector? Intuitively, I would expect this to be 1, or possibly less than 1. Assuming that my above factor of 1/sqrt(2) is correct, we get an infinite sum 1/sqrt(2) + 1/sqrt(4) + …, which converges to 1 + sqrt(2). This seems very wrong—we would need a factor of 1/2 to converge to 1, but then the previous situation gives a squared-modulus ratio of 1/4:1/16:1/16, or 4:1:1, which is again unexpected.
So is there a factor on each term of the half-mirror and if so what is it? Since no factor would agree with both of these setups, what have I done wrong?
What dbaupp said. But in particular you square first and then add because arriving at a different time makes the possibilities distinguishable, and so there is no interference (you don’t add the complex amplitudes).
Ah good. This is a good explanation and I had been wondering how the different timing would affect it. Thanks to you and dbaupp.
To get the ratio, one needs to add the squared moduli, so 1/2+1/4+..., and that gives 1.
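A quick numerical check of that (a sketch of the arithmetic only, using the 1/sqrt(2) normalization from the question): squaring each loop's amplitude first and then adding gives 1/2 + 1/4 + 1/8 + … = 1, whereas naively adding the amplitude magnitudes would give 1 + sqrt(2), which is where the apparent paradox came from.

```python
# Sketch: the photon either turns toward the detector (amplitude magnitude 1/sqrt(2))
# or passes through and goes around the loop once (amplitude magnitude 1/sqrt(2))
# before trying again.
import math

amp_turn = 1 / math.sqrt(2)   # magnitude of the "turn to detector" amplitude
amp_loop = 1 / math.sqrt(2)   # magnitude of the "go around the loop once" amplitude

n_loops = 60
probs = [(amp_loop ** k * amp_turn) ** 2 for k in range(n_loops)]  # square first...
print(sum(probs))             # ...then add: approaches 1.0

amps = [amp_loop ** k * amp_turn for k in range(n_loops)]
print(sum(amps))              # naively summing magnitudes approaches 1 + sqrt(2) ~ 2.414
```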
There are 2600 people signed up for the Reddit Godel Escher Bach group.
Why do you bring this up?
For what it's worth, my impression is that while there exist people who have genuinely benefited from the book, a very large majority of the interest expressed in the book is almost purely signaling.
It would be easier to discuss the merits (or lack) of the book if you specify something about the book you believe lacks merit. The opinion that the book is overly hyped is a common criticism, but is too vague to be refuted.
It was a bestseller. Of course many of those people who bought it are silly.
I wasn’t opening up discussion of the book so much as inquiring why you find the fact that you cite interesting.
Fair question, but not an easy one to answer.
I signed up for the reading group along with the 2600 Redditors. It was previously posted about here. The book is an entry point to issues of Artificial Intelligence, consciousness, cognitive biases and other subjects which interest me. I enjoy the book every time I read from it, but I believe I am missing something which could be provided in a group reading or a group study. As I stated in the previous thread, I am challenged by the musical references. The last time I read music notation routinely was when I sang in a choir in middle school; many of the Bach references and other music references to terms such as fugue, canon, fifths & thirds, &c are difficult for me to grasp.
If one of those 2600 redditors felt moved to build some youtube tutorials with a bouncing ball along and atop the Bach scores illustrating Hofstadter’s arguments, then I presume many others besides myself would enjoy seeing them.
Have you seen that Feynman video where he says he usually dislikes answering “why” questions? If not that, perhaps that Louis C. K. standup routine where he talks about his daughter asking “why?” It is a discussion prompt but it often does not point to anywhere. I have that feeling now that I am rambling.
I know Bach's music quite well from a listener's perspective, though not from a theoretician's perspective. I'd be happy to share recordings of some pieces that I've enjoyed / have found accessible.
Your last paragraph is obscure to me and I share your impression that you started to ramble :-).
http://predictionbook.com/predictions/5015
Utility functions do a terrible job of modelling our conscious wants and desires. Our conscious minds are too non-continuous to be modeled effectively. But our total minds are far more continuous; radical changes are rare, which is why "character" and "personality" are recognizable over time, often despite our conscious desires, even quite strong conscious desires.
What is the rational case for having children?
One can tell a story about how evolution made us not simply to enjoy the act that causes children but to want to have children. But that’s not a reason, that’s a description of the desire.
One could tell a story about having children as a source of future support or cost-controlled labor (i.e. farmhands). But I think the evidence is pretty strong that children are not wealth-maximizing in the modern era.
And if there is no case for having children, shouldn't that bother us on "Our morality should add up to normal, ceteris paribus" grounds?
Rationality helps you map out the relations between actions and goals, and between goals and subgoals; and it can help us better understand the structure of the goals we already have. We can say that doing something is good because it helps achieve goals, or bad because it hinders them; and we can say that certain things are also goals (subgoals), if achieving them helps with our original goals. However, this has to bottom out somewhere; and we call the places where it bottoms out—goals that’re valued in and of themselves, not just because they help with some other goal—terminal values.
Rationality has nothing whatsoever to say about what terminal values you should have. (In fact, those terminal values are implicit when you use the word “should”.) For people who want children, that is usually a terminal value. You cannot argue that it’s good because it achieves something else, because that is not why people think it’s good.
You are right. And that’s at least the second time I’ve made that mistake, so hopefully I’ll learn from it.
Let me ask the sociological question I should have asked: It appears that many of the folks invested enough in “rationality” to be active participants in LW not only don’t have children, but think that having children is not a good goal. That constellation of beliefs suggests that there is some selection pressure that links those two beliefs. Should the existence of that selection pressure worry us on “Add up to normal” grounds?
This seems to be a near-consensus here at LessWrong. But I’m not convinced that “it bottoms out in goals that’re valued in and of themselves” follows from “this has to bottom out somewhere”. I grant the premise but doubt the conclusion. I doubt that where-it-bottoms-out needs to be, specifically, goals—it could be some combination of beliefs, habits, experiences, and/or emotions, instead.
But you say, we call the places where it bottoms out goals … (emphasis added). Of course, you can do that, and it’s even true that people will pretty well understand what you mean. You can call these things goals, and do so without doing terrible violence to the language, but I’m not convinced that this is the most felicitous way of speaking about motivation and ethical learning. Whether these bottom-level items are best described as goals, or habits, or beliefs, or something quite different, depends on psychological facts which may not yet be in (sufficient) evidence.