Because we can have preferences over our preferences. For instance, I would prefer it if I preferred to eat healthier foods because that preference would clash less with my desire to stay fit and maintain my health. There is nothing irrational about wishing for more consistent (and thus more achievable) preferences.
mark_spottswood
Arguing over definitions is pointless, and somewhat dangerous. If we define the word “rational” in some sort of site-specific way, we risk confusing outsiders who come here and who haven’t read the prior threads.
Use the word “rational” or “rationality” whenever the difference between its possible senses does not matter. When the difference matters, just use more specific terminology.
General rule: When terms are confusing, it is better to use different terms than to have fights over meanings. Indeed, your impulse to fight for the word-you-want should be deeply suspect; wanting to affiliate our ideas with pleasant-sounding words is very similar to our desire to affiliate with high-status others; it makes us (or our ideas) appealing for reasons that are unrelated to the correctness or usefulness of what we are saying.
I think the idea of a nested dialogue is a great one. You could also incorporate reader voting, so that weak arguments get voted off the dialogue while stronger ones remain, thus winnowing the argument down to its essence over time.
I wonder if our hosts, or any contributors, would be interested in trying out such a procedure as a way of exploring a future disagreement?
Useful practice: Systematize credibility assessments. Find ways to track the sincerity and accuracy of what people have said in the past, and make such information widely available. (An example from the legal domain would be a database of expert witnesses, which includes the number of times courts have qualified them as experts on a particular subject, and the number of times courts adopted or rejected their conclusions.) To the extent such info is widely available, it both helps to “sterilize” the information coming from untrustworthy sources and to promote the contributions that are most likely to be helpful. It also helps improve the incentive structure of truth-seeking discussions.
Sorry—I meant, but did not make clear, that the word “rationality” should be avoided only when the conversation involves the clash between “winning” and “truth seeking.” Otherwise, things tend to bog down in arguments about the map, when we should be talking about the territory.
Eliezer said: This, in turn, ends up implying epistemic rationality: if the definition of “winning” doesn’t require believing false things, then you can generally expect to do better (on average) by believing true things than false things—certainly in real life, despite various elaborate philosophical thought experiments designed from omniscient truth-believing third-person standpoints.
--
I think this is overstated. Why should we only care what works “generally,” rather than what works well in specific subdomains? If rationality means whatever helps you win, then overconfidence will often be rational. (Examples: placebo effect, dating, job interviews, etc.) I think you need to either decide that your definition of rationality does not always require a preference for true beliefs, or else revise the definition.
It also might be worthwhile, for the sake of clarity, to just avoid the word “rationality” altogether in future conversations. It seems to be at risk of becoming an essentially contested concept, particularly because everyone wants to be able to claim that their own preferred cognitive procedures are “rational.” Why not just talk about whether a particular cognitive ritual is “goal-optimizing” when we want to talk about Eliezer-rationality, while saving the term “truth-optimizing” (or some variant) for epistemic-rationality?
Pwno said: I find it hard to imagine a time where truth-seeking is incompatible with acting rationally (the way I defined it). Can anyone think of an example?
The classic example would invoke the placebo effect. Believing that medical care is likely to be successful can actually make it more successful; believing that it is likely to fail might vitiate the placebo effect. So, if you are taking a treatment with the goal of getting better, and that treatment is not very good (but it is the best available option), then it is better from a rationalist goal-seeking perspective to have an incorrectly high assessment of the treatment’s probability of success.
This generalizes more broadly to other areas of life where confidence is key. When dating, or going to a job interview, confidence can sometimes make the difference between success and failure. So it can pay, in such scenarios, to be wrong (so long as you are wrong in the right way).
It turns out that we are, in fact, generally optimized to make precisely this mistake. Far more people think they are above average in most domains than hold the opposite view. Likewise, people regularly place a high degree of trust in treatments with a very low probability of success, and we have many social mechanisms that try to encourage such behavior. It might be “irrational” under your usage to try to help these people form more accurate beliefs.
It depends how much relative value you assign to the following things:
Increasing your well-being and life satisfaction.
Your reputation (drug users have low status, mostly).
Not having unpleasant contacts with the criminal justice system.
Viewing the world through your current set of perceptive and affective filters, rather than through a slightly different set of filters.