That’s a fair point, but I’ve never actually seen it mentioned explicitly. Maybe there should be a ‘tips on writing posts’ post.
David Deutsch: A new way to explain explanation
I guess that quantum computers halve the doubling time, as compared to a classical computer, because a quadratic quantum speedup lets a machine search the square of the state space that a classical machine could cover in the same time. This could give the factor of two in the exponent of Moore’s law.
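A minimal sketch of that arithmetic, assuming a Grover-style quadratic speedup (my assumption; nothing above specifies the mechanism): if raw quantum capability follows the classical trend, $S(t) \propto 2^{t/T}$, then the classical-equivalent work is

$$W(t) \propto S(t)^2 = 2^{2t/T},$$

so the effective doubling time is $T/2$, half the classical one.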
Quantum computing performance currently isn’t doubling, but it isn’t jammed either. Decoherence is no longer considered a fundamental limit; it’s more a practical inconvenience. The change that brought this about was the invention of quantum error-correcting codes (a toy sketch of the idea follows below).
However experimental physicists are still searching for the ideal practical implementation. You might compare the situation to that of the pre-silicon days of classical computing. Until this gets sorted I doubt there will be any Moore’s law type growth.
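As a loose illustration of why error-correcting codes change the picture, here is a purely classical toy: the 3-bit repetition code, which is the skeleton of the simplest quantum bit-flip code. The error rates are illustrative, not a model of real hardware.

```python
import random

def logical_error_rate(p, trials=100_000):
    """Logical error rate of a 3-bit repetition code under
    independent bit flips with physical error rate p."""
    failures = 0
    for _ in range(trials):
        # Encode logical 0 as (0, 0, 0); flip each bit with probability p.
        flips = [random.random() < p for _ in range(3)]
        # Majority-vote decoding fails when two or more bits flip.
        if sum(flips) >= 2:
            failures += 1
    return failures / trials

for p in (0.01, 0.05, 0.10):
    # Analytically: 3p^2(1-p) + p^3 = 3p^2 - 2p^3, which beats p for p < 1/2.
    print(f"physical p = {p:.2f} -> logical ~ {logical_error_rate(p):.4f}")
```

Below the break-even point the encoded error rate is lower than the raw one, and scaling the code up drives it down further; that’s the sense in which decoherence stopped being a fundamental barrier.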
Were this true, it would also seem to fit with Robin’s theories on art as signalling. If you pick something bad to defend, then the signal is stronger.
If you want to signal loyalty, for example, it’s not that good picking Shakespeare. Obviously everyone likes Shakespeare. If you pick an obscure anime cartoon then you can really signal your unreasonable devotion in the face of public pressure.
In a complete about-turn, though: a situation with actual empirical data might be sports fans. I’m fairly certain that as performances get worse, the number of fans (at least those attending games) generally drops. This would seem to imply the opposite.
I agree that the quality of the argument is an important first screen for accepting something into the rationality canon. In addition, truly understanding an argument allows us to generalise it or apply it to novel situations. This is how we progress our knowledge.
But the most convincing argument means nothing if we apply it to reality and it doesn’t map the territory. So I don’t understand why I’d be crazy to think well of ‘Argument Screens Off Authority’ if reading it makes me demonstrably more rational. Could you point me towards the earlier comments you allude to?
Can you clarify?
Exactly which material are you referring to, and on what basis are you assessing it?
If you don’t attempt to do something while you develop your rationality, then you’re not constraining yourself to be scored on your beliefs’ effectiveness. And we know that being scored makes you less likely to signal and more likely to predict accurately.
I agree for the most part with Tom. Here’s a quote from an article that I drafted last night but couldn’t post due to my karma:
“I read comments fairly regularly that certainly imply that people are less successful or less fulfilled than they might be (I don’t want to directly link to any but I’m sure you can find them on any thread where people start to talk about their personal situation). Where are the posts that give people rational ways to improve their lives? It’s not that this is particularly difficult—there’s a huge psychological literature on various topics (for instance happiness, attraction and influence) that I’m sure people here have the expertise to disseminate. And it would have obvious applications in making people more successful and fulfilled in their day to day lives.
It seems to me that the Less Wrong community concentrates on x-rationality, which is a larger and more intellectually stimulating challenge (and, cynically, better for signalling intellectual prowess) at the expense of simple instrumental rationality. It’s as if we think that because we’re thinking about black belt x-rationality, we’re above applying blue belt instrumental rationality.
In my life I’m constantly learning new and more accurate models for understanding the world, models that in pure complexity terms come nowhere near the question of whether to one-box or two-box. They are useful more often, though.
This isn’t to denigrate x-rationality. Obviously it’s important, but at the moment there seems to be no balance on LW between it and instrumental rationality. As a side benefit, I’ll bet good money that the best way to get people interested in rationality is simply to show them how successful you are when applying it, something that would be more possible with instrumental rationality than with x-rationality.”
I disagree with Tom over the terminology, though. I quite like the terms x-rationality and instrumental rationality because they allow me to talk easily about two broad types of rational thought, even though I would be hard pressed to draw a specific line between them.
I think you can legitimately worry about both.
Fast growth is something to strive for, but I think it will require that our best communicators are out there. Are you concerned that rationality teachers without secret lives won’t be inspiring enough to convert people, or that they’ll get things wrong and head into death spirals?
From a personal perspective I don’t have that much interest in being a rationality teacher. I want to use rationality as a tool to make the greatest success of my life. But I also find it fascinating and, in an ideal world, would stay in touch with a ‘rational community’ both as a guard against veering off into a solo death spiral and as a subject of intellectual interest. I’m sure there must be other people like me who are more accomplished and could give inspiring lectures on how rationality helped them in their chosen profession. That would go some way to covering the inspiration angle.
As an aside, I appreciate why you care about this; I’m always a bit suspicious of self-help gurus whose only measurable success is in the self-help theory they promote. I wonder whether I’m selecting for people who effectively sell advice rather than effectively use advice.
I guess the failure mode that you’re concerned with is a slow dilution because errors creep in with each successive generation and there’s no external correction.
I think that the way we currently prevent this in our scientific efforts is to have both a research and a teaching community. The research community is structured to maximise the chances of weeding out incorrect ideas. This community then trains the teachers.
The benefit of this is that the people who are best at communicating do the teaching, and the people who are best at research do the research.
Is it possible that, having taught yourself, you haven’t directly experienced the fact that there’s not necessarily a correlation between a person’s understanding of a subject and their ability to teach it?
Is it possible that humans, with their limited simulation abilities, do not have the mental computational resources to simulate an irrational person’s more effective beliefs?
This would mean that the ‘irrational’ course of action would be the more effective.
I definitely enjoyed the meetup.
In defence of my fairly poor estimate, I was unconvinced by the assumption that all the maize in Mexico was eaten by Mexicans. That seemed to be an uncontrolled assumption, whereas I felt I could put reasonable bounds on all the assumptions in the land-area estimate (if you’re asking: yes, the final answer did fall within my 90-10 bounds :) ).
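For anyone curious how to sanity-check bounds like that, here’s a minimal sketch: treat each assumption as a 90% interval, sample them independently, and read off the 10th and 90th percentiles of the product. Every number below is made up for illustration; none comes from the actual exercise.

```python
import math
import random

Z90 = 1.2816  # 90th-percentile z-score of a standard normal

def sample_90(low, high):
    """Sample a positive quantity whose 10th/90th percentiles are
    (low, high), assuming it is log-normally distributed."""
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * Z90)
    return math.exp(random.gauss(mu, sigma))

# Hypothetical 90% intervals for a maize-land-area style estimate.
samples = []
for _ in range(100_000):
    population = sample_90(8e7, 1.3e8)    # people
    kg_per_head = sample_90(50, 200)      # kg maize / person / year
    yield_kg_ha = sample_90(1000, 4000)   # kg / hectare / year
    samples.append(population * kg_per_head / yield_kg_ha)  # hectares

samples.sort()
print(f"10th percentile: {samples[10_000]:.3e} ha")
print(f"90th percentile: {samples[90_000]:.3e} ha")
```

If the true answer falls outside the combined interval much more than 20% of the time across many such estimates, the individual bounds were too tight.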
Hopefully with a bit more notice we can get a few extra people next time, but I think it was a great idea to get the ball rolling. Thanks to Tomasz for organising.
What about cases where any rational course of action still leaves you on the losing side?
Although this may seem impossible according to your definition of rationality, I believe it’s possible to construct such a scenario because of the fundamental limitations of a human brain’s ability to simulate.
In previous posts you’ve said that, at worst, the rationalist can simply simulate the ‘irrational’ behaviour that is currently the winning strategy. I would contend that humans can’t simulate effectively enough for this to be an option. After all, we know that several biases stem from our inability to effectively simulate our own future emotions, so effectively simulating an entire other being’s response to a complex situation would seem to be a task beyond the current human brain.
As a concrete example I might suggest the ability to lie. I believe it’s fairly well established that humans are not hugely effective liars and therefore the most effective way to lie is to truly believe the lie. Does this not strongly suggest that limitations of simulation mean that a rational course of action can still be beaten by an irrational one?
I’m not sure that, even if this is true, it should affect a universal definition of rationality, but it would place bounds on the effectiveness of rationality in beings of limited simulation capacity.
I’m in London.
Have you really never seen this before? I actually find that I myself struggle with it. When you define yourself as the plucky outsider it’s difficult and almost unsatisfying when you conclusively win the argument. It ruins your self-identity because you’re now just a mainstream thinker.
I’ve heard similar stories about people who are cured of various terminal diseases. The disease becomes so central to their definition of self that being cured makes them feel slightly lost.
I agree that there’s nothing new to people who have been on Overcoming Bias and Less Wrong for a few years (hence the cautionary statement at the start of the post) but I do think it’s important that we don’t forget that there are new people arriving all the time.
Not everyone would consider “the conjunction fallacy and how each detail makes your explanation less plausible” a standard point. We shouldn’t make this site inaccessible to those people. Credit where it’s due—Deutsch does a nice job of presenting this in a way that most people can understand.