Proponents of spirituality and alternative medicine often use the argument “this has been practiced for 2000 years”, with the subtext “therefore it must work”. Does this argument have any validity?
At first glance I want to reject the argument entirely, but that might be premature. Are there situations where this kind of argument is valid or somewhat valid?
I was reminded of this question when I read Shaila Catherine’s book The Jhanas (about certain ecstatic meditation states mentioned in Buddhism) and she said something like: “Trust in the method. Buddhists have been practicing it for 2600 years. It works. Your mind is not the exception.” This argument did not seem valid to me, because AFAIK Buddhist monasteries do not publish records of how many of their monks achieve which states and insights—au contraire, I believe monks have a taboo against talking about their attainments. So I know of no evidence that most practitioners can achieve jhana. From what I know, it is entirely plausible that only a small fraction of practitioners ever succeed at these instructions, and that therefore their minds are the exception, not mine.
It obviously has ‘any’ validity. If an instance of ‘ancient wisdom’ killed off or weakened its followers enough, it wouldn’t still be around. Also, the practice has been optimized over a long time by a lot of people, and the version we receive probably isn’t the best, but it is still likely one of the better versions.
While some practices will weaken people a bit and stick around merely for sounding good, they are generally just ideas that worked well enough. The best argument for ‘ancient wisdom’ is that you can actually check how it has affected the people using it. If it has good effects on them, it is likely a good idea. If it has negative effects, it is probably bad.
‘Ancient wisdom’ also includes a lot of ideas we really don’t think of that way, including math, science, language, etc. We start calling something ‘ancient wisdom’ (or tradition, or culture) when only certain traditions use it, which means it was less successful at convincing people and is therefore less likely to be a truly good idea, but a lot of it will still consist of very good ideas.
By default, you should probably assume that the reasons given are often wrong, but that the idea itself is in some way useful. (This can just be ‘socially useful’, though.) ‘Alternative medicine’ includes a lot of things that kind of work for specific problems, but that people didn’t figure out how to use in an incontrovertible manner. Some alternative medicines don’t work for any likely problem, and some are more likely to poison than help, but in general they address real problems. In many cases, ‘ancient wisdom’ medicine is simply normal medicine. People had a lot of bad medical ideas over the millennia, but many also clearly worked. ‘Religion’ includes a lot of things that have been shown scientifically to improve the health, happiness, and wellbeing of adherents, but some strains of religion make people do crazy / evil things. You can’t really make a blanket statement about the whole category.
When it comes to Buddhist practice, it’s worth noting that practicing techniques by the book is not how Buddhism was practiced for most of the last 2500 years. It was mostly an oral tradition, and as such the knowledge passed down from teacher to student evolves over time in various ways.
Many modern Buddhist traditions put much more emphasis on meditation, in contrast to ritualized behavior.
In Buddhism (and in Christianity, for that matter), for thousands of years meditation was largely done in monasteries and not by lay people. In many Buddhist communities, “lay people aren’t supposed to meditate” is something you could call “ancient wisdom”.
If someone in a Western context convinces you that following some practice is ancient wisdom, they are likely doing a lot of picking and choosing in a way that does not make it clear how ancient the thing they are promoting actually is.
This is a great point! Generally, whenever someone says “let’s do this traditional thing”, you might want to check whether the thing actually is traditional… before getting distracted by the endless debates about whether “traditional things” are better than “modern things” (often too unspecific to be useful).
Adding my own too-unspecific-to-be-useful statement, I suspect that most things advertised as traditional are in fact not. Or the tradition claiming to be millennia old actually started something like a hundred years ago, so it is kind of traditional, just not in the way the proponents claim.
I’d read Nassim Nicholas Taleb on the Lindy effect for the strongest defense of this proposition. Basically, all ideas and culture are constantly fighting in a market of culture. For every jhana there are 1000 different spiritual concepts trying to occupy the same niche. There has to be something to jhanas that leads to their still being practiced today, while the rest became history. That something does not necessarily mean the idea is true; for example, meditation in general is known to be very good for you, so if jhanas work as the carrot on the stick that gets people to meditate, then the idea would also stick around, as people start praising jhanas because of the benefits they got from meditation. But every idea that is old needs to have some sort of payload, something that helped it survive in the market of ideas and culture for millennia.
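(A minimal way to state the Lindy effect Taleb describes, assuming the heavy-tailed “non-perishable” case he has in mind: the expected remaining lifetime of an idea grows in proportion to how long it has already survived,

$$\mathbb{E}[\,T - t \mid T > t\,] \propto t,$$

where $T$ is the idea’s total lifespan and $t$ its current age. On this reading, a 2600-year-old practice is expected to outlast last decade’s fad, whether or not it is true.)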
I would tend to give particular credence to any practice which pre-dates the printing press.
The reason is fairly straightforward. Spreading ideas was significantly more expensive, and often could only occur to the extent that holding the ideas made the carriers better adapted to their environments.
As the cost to spread an idea has become cheaper, humans can unfortunately afford to spread a great deal more pleasant (feel free to substitute reward hacking for pleasant) junk.
That doesn’t mean we should fail to examine the ideas critically, but there are more than a few ideas whose wisdom I once doubted, which made a great deal more sense from this perspective.
As for the particular practice of meditation that you reference, I tend to view spiritual practices as somewhat difficult to analyze for this purpose, as the entire structure of the religion was what was transmitted, not only the particularly adaptive information. To use DNA as an analogy, it’s difficult to tell which portions are of particularly high utility, analogous to the A, C, G, and T in DNA, and which serve as the sugar-phosphate backbone: potentially useful in maintaining the structure as a whole, but perhaps not of particular use when translated outside that context.
Which portions of Buddhism are which, I couldn’t tell you; I lack practice in the meditation methods mentioned, and lack deeper familiarity with the relevant social and historical context.
Ancient wisdom is not scientific, and it might even be false, but the benefits are very real, and these benefits sort of work to make the wisdom true.
The best example I can give is placebo: the belief that something is true helps make it true, so even if it’s not true, you get the benefits of it being true. The special trait ancient wisdom has is this: the outcome is influenced by your belief in the outcome. This tends to be true for psychological things, and advice like “Belief can move mountains” is entirely true in the psychological realm. But scientific people, who deal with reality, tend to reject all of this and consider it nonsense, as the problems they’re used to aren’t influenced by belief.
Another case in which belief matters is treating things with weight/respect/sacredness/divinity. These things are just human constructs, but they have very real benefits. Of course, you can be an obnoxious atheist and break these illusions all you want, but the consequence of doing this will be nihilism. Why? Because treating things as if they have weight is what gives them weight, and nihilism is basically the lack of perceived weight. There’s nothing objectively valid about filial piety, but it does have benefits, and acting as if it’s something special makes it so.
Ancient wisdom often gets the conclusions right but gets the explanations wrong, and this is likely in order to make people take the conclusions seriously. Meditation has been shown to be good for you. Are you feeling “Ki”, or does your body just feel warm when you concentrate on it? Do you become “one with everything”, or does your perception just discard duality for a moment? Do you “meet god”, or do you merely experience peace of mind as you let go of resistance? The true answer is the boring one, but the fantastical explanation helps make these ideas more contagious, and it’s likely that the false explanations have stuck around because they’re stronger memetically.
Ancient wisdom has one advantage that modern science does not: It can deal with things which are beyond our understanding. The opposite is dangerous: If you reject something just because you don’t understand why it might be good (or because the people who like it aren’t intellectual enough to defend it), then you’re being rational in the map rather than in the territory. Maybe the thing you’re dismissing is actually good for reasons that we won’t understand for another 20 years.
You can compare this with money: money is “real but not real” in a similar way. And this all generalizes far beyond my examples, but the main benefits are found, like I said, in everything human (psychological and spiritual) and in areas where the consensus has an incomplete map. I believe that nature has its own intelligence, in a way, and that we tend to underestimate it.
Edit: Downvotes came fast. Surely I wrote enough that I’ve made it very easy to attack my position? This topic is interesting and holds a lot of utility, so feel free to reply.
With regard to placebo, the strength of the effect has actually been debated here on LessWrong: “Why I don’t believe in the placebo effect” argues that the experimental evidence is quite weak and in some cases plausibly an artifact of poor study design.
I don’t think I can actually deliberately believe in a falsity; it’s probably going to end up as belief in belief rather than self-deception.
Besides, having false, ungrounded beliefs is likely to not be utility-maximising in the long run; it’s a short-term-pleasure kind of thing.
Beliefs inform our actions and having false beliefs will lead to bad actions.
I would agree with the Chesterton’s fence argument, but once you understand that the reasons for the said belief lie in its psychological nature rather than its truthfulness, holding onto it is just rationalisation.
Ancient wisdom is more of an “it works until it doesn’t” kind of wisdom: you have heuristics which reach certain benign conclusions, but then they fail miserably when they do fail.
Besides, someone once thought up such wisdom, and it’s been downhill ever since, with people forgetting it and reinventing it. Science, on the other hand, progresses with each generation.
But when you do have a veridical gears-level model, on the other hand, then you can be damn sure the thing will work.
Some false beliefs can lead to bad actions, but I don’t think it’s all of them. After all, human nature is biased, because having a bias aided in survival. The psyche also seems like it deceives itself as a defense mechanism fairly often. And I think that “believe in yourself” is good advice even for the mediocre.
I’m not sure which part of my message each part of your message is in response to exactly, but some realizations are harmful because they’re too disillusioning. It’s often useful to act like certain things are true—that’s what axioms and definitions are, after all. But these things are not inherently true or real, they become so when we decide that they are, but in a way it’s just that we created them. But I usually have to not think about that for a while before these things go back to looking like they’re solid pieces of reality rather than just agreements.
Ancient wisdom can fail, but it’s quite trivial for me to find examples in which common sense can go terribly wrong. It’s hard to fool-proof anything, be it technology or wisdom.
Some things progress. Math definitely does. But like you said, a lot of wisdom is rediscovered periodically. Science hasn’t increased our social skills or our understanding of ourselves; modern wisdom and life advice is not better than it was 2000 years ago. And it’s not even because science cannot deal with these things. The whole “Be like water” thing is just flexibility/adaptability. Glass is easier to break than plastic. What’s useful is that somebody who has never taken a physics class or heard about Darwinism can learn and apply this principle anyway. And this may still apply to some wisdom which accidentally reveals something beyond the current standard of science.
As for that which is not connected much to reality (wisdom which doesn’t seem to apply to reality), it’s mostly just the axioms of human cognition/nature. It applies to us more than to the world. “As within, so without”: in short, internal changes seem to cause external changes. If you’re in a good mood, then the external world will seem better too. A related quote is “As you think, so you shall become”, which is oddly similar to the idea of cognitive behavioural therapy.
Hard disagree; there’s an entire field of psychology, decision theory, and ethics using reflective equilibrium in light of science.
Well, some things go wrong more often than other things. Wisdom goes wrong a lot of the time; it isn’t immune to memetic selection, and there isn’t much of a mechanism to prevent you from falling for false memes. Technology, after a certain point, goes wrong way less. A biology textbook is much more likely to be accurate and to give better medical advice than an ayurvedic textbook.
Yes, it’s a metaphor for adaptiveness, but I don’t understand where it may apply other than being a nice way to say “be more adaptive”. It’s like a logical model, like maths, but for adaptiveness: you import the idea of water-like adaptiveness into situations.
You know what might be an axiom of human cognition? Bayes’ rule and other axioms of statistics. I have found that I can bypass a lot of wisdom by using these axioms where others are stuck without a proper model in real life due to ancient wisdom. E.g. I stopped taking ayurvedic medication which contained carcinogens; when people spend hours thinking about certain principles in ethics or decision theory, I know the laws that prevent such confusion; etc.
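(For readers who want the rule spelled out, the commenter is presumably referring to the standard form of Bayes’ rule,

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)},$$

i.e. how much to believe a hypothesis $H$ after seeing evidence $E$ depends on the prior $P(H)$ and on how well $H$ predicts $E$ relative to the alternatives.)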
Honestly, I agree with this part; I think this is the biggest weakness of rationalism. I think the failure to overcome akrasia in a general-purpose way is a failure of rationality. I find it hard to believe that there could be a person like David Goggins who is also a rationalist. The obsession with accuracy doesn’t play well with the romanticism of motivation and self-overcoming; it’s a battle you have to fight and figure out daily, and under the constraints of reality it becomes difficult.
There’s an entire field of psychology, yes, but most men are still confused by women saying “it’s fine” when they are clearly annoyed. Another thing is women dressing up because they want attention from specific men. Dressing up in a sexy manner is not a free ticket for any man to harass them, but socially inept men will say “they were asking for it” because the whole concept of selection and standards doesn’t occur to them in that context. And have you read Niccolò Machiavelli’s “The Prince”? It predates psychology, but it is psychology, and it’s no worse than modern books on office politics and such, as far as I can tell. Some things just aren’t improving over time.
You gave the example of the ayurvedic textbook, but I’m not sure I’d call that “wisdom”. If we compare ancient medicine to modern medicine, then modern medicine wins in something like 95% of cases. But for things relating to humanity itself, I think that ancient literature comes out ahead. Modern hard sciences like mathematics are too inhuman (autistic people are worse at socializing because they’re more logical and objective). And modern soft sciences are frankly pathetic quite often (Gardner’s Theory of Multiple Intelligences is nothing but a psychological defense against the idea that some people aren’t very bright; whoever doesn’t realize this should not be in charge of helping other people with psychological issues).
It’s a core concept which applies to all areas of life. Humans won against other species because we were better at adapting. Nietzsche wrote, “The snake which cannot cast its skin has to die. As well the minds which are prevented from changing their opinions; they cease to be mind”. This community speaks a lot about “updating beliefs” and “intellectual humility” because thinking that one has all the answers, and not updating one’s beliefs over time, leads to cognitive inflexibility/stagnation, which prevents learning. Principles are incredibly powerful, and most human knowledge probably boils down to about 200 or 300 core principles.
Would I be right to guess that ancient wisdom fails you the most in objective areas of life, and that it hasn’t failed you much in the social parts? I don’t disagree that modern axioms can be useful, but I think there are many areas where “intelligent” approaches lead to worse outcomes. For the most part, attempting to control things leads to failure. I’ve had more unpleasant experiences on heavily moderated platforms than I have had in completely unmoderated spaces. I think it’s because self-organization can take place once disturbance from the outside ceases. But we will likely never know.
You could put it like that. I’d say something like: “The rules of the brain are different from those of math; if you treat the brain like it’s supposed to be rational, you will always find it to be malfunctioning for reasons that you don’t understand”. Too many geniuses have failed at living good lives for me to believe that intelligence is enough. I have friends with IQs above 145 who are depressed because they think too rationally to understand their own nature. They reject the things which could help them, because they look down on them as subjective/silly/irrational.
David Goggins’s story is pretty interesting. I can’t say I went through as much as him, but we do have things in common. This might be why I have the courage to criticize science on LW in the first place.
I think the majority of people aren’t aware of psychology and the various fields under it. Ethics and decision theory give a lot of clarity to such decisions when you analyse the payoff matrix. I haven’t read The Prince, but I have read excerpts from it in the self-improvement-related diaspora. I am not denying the value which such literature gives us; I just think we should move on by learning from it and building on top of it in light of newer methods.
Besides, I am more of a moral anti-realist, so lol. I don’t think there are universally compelling arguments for these ethical things, but people with enough common psychological and cultural ground can cooperate.
Well, it depends on your definition of inhuman; my_inhuman =/= your_inhuman, since value is a two-place function. My peers when I was in high school found at least one of the hard sciences fun. Like them, I find the hard sciences pretty cool to learn about, and useful for fulfilling my other goals.
Agreed. Some fields under psychology are pathetic. But fields like cognitive biases etc. are not.
Well, astrology has clearly failed me. My mom often had these luddite-adjacent ideas about what I am meant to do in life, because her entire source of ethics was astrology. Astrology as career advice is like rolling a die and assigning all the well-known professions to numbers, rather than considering actual life satisfaction or value fulfillment.
I would strongly disagree on the front of intelligence. Becoming more rational, in the sense of cognitive algorithms which tend to lead to systematic optimality (in this case truth-seeking / achieving goals), is indeed possible and is pretty much a part of growth.
I would weakly disagree on the front of Internal Family Systems (with the internal double crux special case being extremely useful) and other introspective reductionist methods, where you break down your emotional responses and processes into parts, understand what you like/dislike, and make various attempts to bridge the two. On this front there is a plethora of competing theories, due to the easy problem of consciousness and the attempt to understand experience functionally.
And as for my brain not working the way I want it to: when I model other parts of this brain, I find it being emotionally engaged in things which aren’t optimal for some of my goals, and it isn’t contradictory with rationality to acknowledge or deal with these feelings.
I was praising Goggins because he’s more of the type who is willing to fight himself, and in more than half of the introspective models, doing that without acknowledgement borders on self-harm. I find his strategy to be intuitively much better, lol.
Where I would agree is that if you don’t understand something, then your theory is probably wrong. There are no confusing facts, only models which are confused by facts.
I think growth is important. I like to think of it as intelligence being compute power, and growth and learning being more about changing the algorithms. Besides, there is a good amount of correlation with IQ you might want to look into. I think this area is very contentious (I got a system-1 response to check the social norms due to past bans, lol), but we’re on LessWrong, so you can continue.
You’re welcome. Maybe you should read the Sequence Highlights to get introduced to LW’s point of view and understand other people’s positions here.
I don’t think there’s a reason for most people to learn psychology or game theory, as you can teach basic human behaviour and such without the academic perspective. I even think it’s a danger to be more “book smart” than “street smart” about social things. So rather than teaching game theory in college, schools could make children read and write a book report on “How to Win Friends & Influence People” in 4th grade or whatever. Academic knowledge which doesn’t make it to 99% of the population doesn’t help ordinary people much. But a lot of this knowledge is simple and easier than the math homework children tend to struggle with.
I don’t particularly believe in morality myself, and I also came to the conclusion that having shared beliefs and values is really useful, even if it means that a large group of people are stuck in a local maximum. As a result of this, I’m against people forcing their “moral” beliefs on foreign groups, especially when these groups are content and functional already. So I reject any global consensus of what’s “good”. No language is more correct than another language, and the same applies for cultures and such.
It’s funny that you should link that post, since it introduces an idea that I had already come up with myself. What I meant was that people tend to value what’s objective over what’s subjective, so that their rational thinking becomes self-destructive or self-denying in a sense. Rationality helps us to overcome our biases, but thinking of rationality as perfect and of ourselves as defective is not exactly healthy. A lot of people who think they’re “super-humans” are closer to being “half-humans”, since what they’re doing is closer to destroying their humanity than overcoming or going beyond it. And I’m saying this despite the fact that some of these people are better at climbing social hierarchies or getting rich than me. In short, the objective should serve the subjective, not the other way around. “The lens that sees its own flaws” merely conditions itself to see flaws in everything. Some of my friends are artists, and they hate their own work because they’re good at spotting imperfections in it; I don’t consider this level of optimization to be any good for me. When I’m rational, it’s because it’s useful for me, so I’m not going to harm myself in order to become more rational. That’s like wanting money thinking it will make me happy, and then sacrificing my happiness in order to make money.
I’ll agree, as long as these fields haven’t yet been subverted by ideologies or psychological copes against reality (as that’s what tends to make soft sciences pathetic). “Tall poppy syndrome” has warped the public’s perception of the “Dunning–Kruger effect”, so that it becomes an insult you can use against anyone you disagree with who is certain of themselves, especially in a social situation in which the majority disagrees.
Astrology is wrong and unscientific, but I can see why it would originate. It’s a kind of pattern recognition gone awry. Since everything is related, and the brain is sometimes lazy and thinks that correlation = causation and that “X implies Y” is the same as “Y implies X”, people use patterns to predict things, and assume that recreating the patterns will recreate the things. This is mostly wrong, of course, but not always. People who are happy are likely to smile, but smiling actually tends to make you happier as well. Do you know the tragic story of the person who pioneered handwashing? He found the right pattern, and the results were verifiable, but because his idea sounded silly, he ended up suffering.
If you had used astrology yourself, it might have ended better, as you’d be likely to interpret what you wanted to be true, and your belief that your goal in life was fated to come true would help against the periodic doubt that people face in life.
Intelligence is not something you are, it’s something you have. Identifying with your intelligence is how you disown 90% of yourself. Seeing intelligence as something available to you, rather than as something you are, helps eliminate internal conflict. Every “gifted kid burnout” and “depressed intelligent person” situation I have seen was partly caused by this dangerous identification. Even if you dismiss everything else I’ve said so far, I want to stress the importance of this one thing. Lastly, “systematic optimality” seems to suffer from something like Goodhart’s law. When you optimize for one variable, you may harm 100 other variables slightly without realizing it (paperclip optimizers seem like the mathematical limit of this idea). Holistic perspectives tend to go wrong less often.
I like the Internal Family Systems view. I think the brain has competing impulses whose strength depends on your physical and psychological needs. But while I think your brain is rational according to what it wants, I don’t think it’s rational according to what you want. In fact, I think people’s brains tend to toy with them completely. The brain creates suffering to motivate you, it creates anxiety to get you to defend yourself, it creates displeasure and tells you that you will be happy if you achieve your goals. Being happy all the time is easy, but our brain makes this hard to realize so that we don’t hack our own reward systems and die. If you only care about a few goals, your worldview is extremely simple. You have a complex life with millions of factors, but you only care about a few objective metrics? I’m personally glad that people who chase money or fame above all end up feeling empty, for you might as well just replace humanity with robots if you care so little for experiencing what life has to offer.
Oh, I know, I have a few bans from various websites myself (and I once got rate-limited on here). And intelligence correlates with nihilism, meta-thinking, systemization, and anxiety (I know a study found the correlation with mental illness to be false, but I think the correlation is negative until about 120 IQ and positive after that). But why did Nikola Tesla’s intelligence not prevent him from dying poor and lonely? Why was Einstein so awkward? Why do so many intelligent people not enjoy life very much? My answer is that these are consequences of lacking humanity / healthy ways of thinking. It’s not just that stupid people are delusional. I personally like the idea that intelligence comes at the cost of instinct. For reference, I used to think rationally, I hated the world, I hated people, I couldn’t make friends, I couldn’t understand myself. Now I’m completely fine, I even overcame depression. I don’t suffer and I don’t even dislike suffering, I love life, I like socializing. I don’t worry about injustice, immorality or death.
I just found the highlights of the Sequences, and it turns out that I have read most of the posts already, or had discovered the principles myself previously. And I disagree with a few of the moral rules, because they decrease my performance in life by making me help society. Finally, my value system is what I like, not what is mathematically optimal for some metric which people think could help society experience less negative emotion (I don’t even think this is true or desirable).
Honestly, I don’t know enough about people to actually tell if that’s really the case. For me, book smarts become street smarts when I make them truly a part of me.
That’s how I live anyway. For me, when you formalise street smarts they become book smarts to other people, and the latter is likely to yield better predictions, apart from the places where you lack compute, as in the case of society, where most people don’t use their brains outside of social/consensus reality. So maybe you’re actually onto something here, along the lines of “don’t tell them the truth because they cannot handle it”, lol.
Well, since I wanted to dismantle the Chesterton’s fence, I did reach conclusions similar to yours regarding why it came to be and why they (the ancients) fell for it; the correlation/causation one is the general-purpose explanation. One major reason was agriculture, where it was likely to work well due to the common cause of seasons and relative star movement. So it can also be thought of as faulty generalisation.
That’s false. I wouldn’t have been able to socially demotivate my mom, through apathy, from wasting too much money on astrology; if I had been enthusiastic about it, it would have fueled her desire. Astrology is like the false hope of a lottery, a waste of emotional energy.
I would have been likely to fall for other delusions surrounding astrology instead of spending that time learning things, for example going on a pilgrimage for a few weeks before exams, etc.
Besides, astrology predicts everything on the list of usual human behavior, and so more or less ends up predicting nothing.
Well, “more or less rational” is with respect to cognitive algorithms; you tend to have one variable, achieving goals. And cognitive algorithms which are better at reaching certain goals are more rational with respect to that goal.
There is a distinction made between truth-oriented epistemic rationality and day-to-day, goal-oriented instrumental rationality, but for me they’re pretty similar, in that for epistemic rationality the goal is truth.
I think the distinction was made because there’s a significant amount of epistemics in rationality.
If your goal is optimising 100 variables, then go with it. For a rationalist, truth tends to be their highest instrumental value; that’s the main difference, imo, between a rationalist and, say, a post-rationalist or a pre-rationalist. They can have other terminal values above that, like life, liberty, and the pursuit of happiness, etc.
(In case you’re not aware of the difference between terminal and instrumental values.)
I think it again depends on value being a two-place function. Some people may find fulfillment in that; I have met some who are like that. I think quite a bit of the literature on the topic is a bit biased in favour of common morality.
I think you would need to provide evidence for such claims. My prior is set against them, given the low amount of evidence I have encountered, and I cannot update it just because some cultural wisdom said so, because cultural wisdom is often wrong.
Then you weren’t thinking rationally. To quote:
Also check “firewalling the rational from the optimal” and “feeling rational”.
Also check “no one can exempt you from the laws of rationality”.
Well, then you can be mathematically optimal for the other metric. The laws of decision theory don’t stop working if you change your utility function, unless you want to get money-pumped, lol; in that case your preferences are circular. Yes, you might argue that we’re not knowledgeable enough to figure out what our values will be in various subject areas; there’s a reason we have an entire field of AI alignment due to such issues, and there are various problems with inferring our desires and limits to introspection.
There’s a lot to unpack for this first point:
Another issue with teaching it academically is that academic thought, like I already said, frames things in a mathematical and thus non-human way. And treating people like objects to be manipulated for certain goals (a common consequence of this way of thinking) is not only in bad taste; it makes the game of life less enjoyable.
Learning how to program has harmed my immersion in games, and I have a tendency to powergame, which makes me learn new videogames way faster than other people, but also with the result that I’m having less fun than them. I think rationality can result in the same thing. Why do people dislike “sellouts” and “car salesmen” if not for the fact that they simply optimize for gains in a way which conflicts with taste? But if we all just treat taste like it’s important, or refuse to collect so much information that we can see the optimal routes, then Moloch won’t be able to hurt us.
If you want something to be part of you, then you simply need to come up with it yourself; then it will be your own knowledge. Learning other people’s knowledge, however, feels to me like consuming something foreign.
Of course, my defense of ancient wisdom so far has simply been to translate it into an academic language in which it makes sense. “Be like water” is street-smarts, and “adaptability is a core component of growth/improvement/fitness” is the book-smarts. But the “street-smarts” version is easier to teach, and now that I think about it, that’s what the bible was for.
Most things that society wastes its time discussing are wrong. And they’re wrong in the sense that even an 8-year-old should be able to see that all the controversies going on right now are frankly nonsense. But even academics cannot seem to frame things in a way that isn’t riddled with contradictions and hypocrisy. Does “We are good, but some people are evil, and we need to fight evil with evil, otherwise the evil people will win by being evil while we’re being good” not sound silly? A single thought will get you Karl Popper’s “paradox of tolerance”, and a single thought more will make you realize that it’s not a paradox but a kind of neutrality/reflexivity which makes both sides equal, and that “We need to fight evil” means “We want our brand of evil to win” as long as people don’t dislike evil itself but rather how it’s used. Again, this is not more complicated than “I punched my little brother because I was afraid he’d punch me first, and punching is bad”, which I expect most children to see the problem with.
The thought experiment I had in mind was limited to a single isolated situation; you took it much further, haha. My point was simply “If you use astrology for yourself, the outcomes are usually alright”. Same with tarot cards: as far as I’m concerned, they’re a way to talk with your subconscious without your ego getting in the way, which requires acting as if something else is present. Even crystal balls are probably a kind of Rorschach test, and should not be used to “read other people” for this reason. Finally, I don’t disagree with the low utility of astrology, but false hope gives people the same reassurance as real hope. People don’t suffer from the non-existence of god, but from the doubt of his existence. The actual truth value of beliefs has no psychological effects (proof: otherwise we could use beliefs to measure the state of reality).
I disagree, as I know of counter-examples. It’s more likely for somebody to become rich making music if their goal is simply to make music and enjoy themselves, than if their goal is to become rich making music. You see similar effects for people who try to get girlfriends, or happiness for that matter. If X results in Y, then you should optimize for X and not for Y. Many companies are dying because they don’t realize such a simple thing (they try to exploit something pre-existing rather than making more of what they’re exploiting, for instance the trust in previous IPs). Ancient wisdom tackles this: Wu Wei is about doing the right thing by not trying to do it. I don’t know how often this works, but it sometimes does.
I have to disagree that anyone’s goal is truth. I’ve seen strong evidence that knowledge of an environment is optimal for survival, and that knowledge-optimizing beats self-delusion every time, but even in this case, the real goal is “survival” and not “truth”. And my proof is the following: if you optimize for truth because it feels correct or because you believe it’s what’s best, then your core motivation is feelings or beliefs, respectively. For similar reasons, non-egoism is trivially impossible. But the “Something to protect” link you sent seems to argue for this as well?
And truth is not always optimal for goals. The belief that you’re justified and the belief that you can do something are both helpful. The average person is 5/10 but tends to rate themselves as 7/10, which may be around the optimal bias.
By the way, most of my disagreements so far seem to be “Well, that makes sense logically, but if you throw human nature into the equation then it’s wrong”
I find myself a little doubtful here. People usually chase fame not because they value it, but because other people seem to value it. They might even agree cognitively on what’s valuable, but it’s no use if they don’t feel it.
How many great people’s autobiographies and life stories have you read? The nearer you get to them, the more human they seem, and if you get too close you may even find yourself crushed by pity. Of Isaac Newton it was even said, “As a man he was a failure; as a monster he was superb”. Boltzmann committed suicide; John Nash suffered from schizophrenia. Philosophy is even worse off; titles like “suicide or coffee?” do not come from healthy states of mind. And have you read the Vasistha Yoga? It’s basically poison. But it’s ultimately a projection: a worldview does not reveal the world, but rather the person holding the worldview.
But what saved me was not changing my knowledge, but my interpretation of it. I was right that people lie a lot, but I thought it was for their own sake, when it’s mostly out of consideration for others. I was right that people were irrational, but I didn’t realize that this could be a good thing.
That seems like it’s saying “I define rationality as what’s correct, so rationality can never be wrong, because that would mean you weren’t being rational”. By treating rationality as something which is discovered rather than created (by creating a map and calling it the territory), any flaw can be justified as “that wasn’t real rationality, we just didn’t act completely rationally because we’re flawed human beings! (our map was simply wrong!)”.
There can be no universal knowledge; maps of the territory are inherently limited (and I can prove this). Insofar as rationality uses math and verbal or written communication, it can only approximate something which cannot be put into words. “The dao that can be spoken is not the dao” simply means “the map is not the territory”.
By the way, I think I’ve found a big difference between our views. You’re (as far as I can tell) optimizing for “Optimization power over reality / a more reliable map”, while I’m optimizing for “Biological health, psychological well-being and enjoyment of existence”.
And they do not seem to have as much in common as rationalists believe.
But if rationality in the end worships reality and nature, that’s quite interesting, because that puts it in the same boat as Taoism and myself. Some people even put Nature=God.
Finally, if my goal is being a good programmer, then a million factors will matter, including my mood, how much I sleep, how much I enjoy programming, and so on. But somebody who naively optimizes for programming skill might practice at the cost of mood, sleep, and enjoyment, and thus ultimately end up with a mediocre result. So in this case, a heuristic like “Take care of your health and try to enjoy your life” might not lose out in performance to a rat-race mentality. Meta-level knowledge might help here, but I still don’t think it’s enough. And the tendency to dismiss things which seem unlikely, illogical, or silly is not as great a heuristic as one would think, perhaps because any belief which manages to stay alive despite being silly has something special about it.
Yes, intuitions can be wrong; welcome to reality. Besides, I think schools are bad at teaching things.
Yes, the trick for that is to delete the piece of knowledge you learnt and ask the question: how could I have come up with this myself?
That just sounds to me like “we need wisdom because people cannot think”. Yes, I would agree, considering that when you open Reddit, Twitter, or any other platform you can find many biases being upvoted. I would agree that a memetic immune system is required for a person unaware of the various background literature required to bootstrap rationality. I am not advocating for teaching anything; I don’t have plans to be an activist or the will to change society. But consider this: if you know enough rationality, you can easily get past all that.
Sure, a person should be aware when they’re drifting from the crowd and not become a contrarian, since reversed stupidity is not intelligence; and even if you only dissent when you have overwhelming reason for it, you’re going to have enough problems in your life.
I would agree on the latter part regarding good/evil. Unlike other rationalists, this is why I don’t have the will to change society. The internet has killed my societal moral compass for good/evil, however you may like to put it, in favour of being more egoistic. Good just carries a positive system-1 connotation for me; I am just emoting it, but I mostly focus on my life. Or, to be brutally honest about it, I don’t care about society as long as my interests are being fulfilled.
Agreed, the map is not the territory; it feels the same to be wrong as it does to be right.
Yes, if someone isn’t passionate about such endeavours, they may not have the will to sustain them. But if a person is totally apathetic to monetary concerns, they’re not going to make it either. So a person may argue, on a meta level, that it’s more optimal to be passionate about a field, or to choose a field you’re passionate about and want to do better in, to overcome akrasia; and there might be some selection bias at play, where a person who’s good at something is likely to have a positive feedback loop about the subject.
Yes, exactly: truth is in the highest service to other goals, if my phrasing of “highest instrumental value” wasn’t clear. But you don’t deliberately believe false things; that’s what rationality is all about. Truth is nice to have, but usefulness is everything.
Believing false things purposefully is impossible either way; you’re not anticipating them with high probability. That’s not how rationalist belief works. When you believe something, that’s how reality is to you; you look at the world through your beliefs.
Not many, but it would be unrepresentative to generalise from that.
Ethically yes, epistemically no. Reality doesn’t care; this is what society gets wrong. If I am disagreeing with your climate denial or climate catastrophism, I am not proposing what needs to be done; there is a divide between morals and epistemics.
Yes, finally you get my point. We label those things rationality: the things which work. The virtue of empiricism. Rationality is about having cognitive algorithms which systematically get higher returns on whatever it is that you want.
I would disagree; physics is more accurate than intuitive world models. The act of guessing a hypothesis is reverse-engineering experience, so to speak; you get a causal model which is connected to you in the form of anticipations (this link is part of a sequence, so there’s a chance there’s a lot of background info).
When you experience something your brain forms various models of it, and you look at the world through your beliefs.
That’s a misrepresentation of my position; I said truth is my highest instrumental value, not my highest terminal value. Besides, a good portion of hardcore rationalists tend to have something to protect, a humanistic cause which they devote themselves to; that tends to be aligned with their terminal values, however they see fit. Others may solely focus on their own interests, like health, life, and wellbeing.
To reiterate: you only seek truth as much as it allows you to get what you want, but you don’t believe in falsities. That’s it.
Rationality doesn’t necessarily have nature as a terminal value; rationality is a tool, the set of cognitive algorithms which work for whatever you want, with truth being the highest instrumental value, as you might have read in the “Something to Protect” article.
Rationalists tend to have heavy respect for cognitive algorithms which allow us to systematically get what we desire. They’re disturbed if there’s a violation in the process which gets us there.
None of that is incompatible with rationality; rather, rationality will help you get there. Heuristics like “take care of your health and try to enjoy life” seem more like vague plans to fulfill your complex set of values, which one may discover more about. Values are complex, and there are various posts you can find here which may help you model yourself better and reach reflective equilibrium, which is the best you can do either way, both epistemically and morally (the former of which is much more easily reached by focusing on getting better with respect to your values than by focusing solely on truth, as highlighted by the post, since truth is only instrumental).
Edit: added some more links, fixed some typos.
But these ways of looking at the world are not factually wrong, they’re just perverted in a sense.
I agree that schools are quite terrible in general.
That helps for learning facts, but one can teach the same things in many different ways. A math book from 80 years ago may be confusing now, even if the knowledge it covers is something that you know already, because the terms, notation and ideas are slightly different.
In a way. But some people who have never learned psychology have great social skills, and some people who are excellent with psychology are poor socializers. Some people also dislike “nerdy” subjects, and it’s much more likely that they’d listen to a TED talk on body language than read a book on evolutionary psychology and non-verbal communication. Having an “easy version” of knowledge available, which requires 20 IQ points less than the hard version, seems like a good idea.
Some of the wisest and most psychologically healthy people I have met have been non-intellectual and non-ideological, and even teenagers or young adults. Remember your “Things to unlearn from school” post? Some people may have less knowledge than the average person, and thus fewer errors, making them clear-sighted in a way that makes them seem well-read. Teaching these people philosophy could very well ruin their beautiful worldviews rather than improve on them.
I don’t think “rationality” is required. Somebody who has never heard about the concept of rationality, but who is highly intelligent and thinks things through for himself, will be alright (outside of existential issues and infohazards, which have killed or ruined a fair share of actual geniuses).
But we’re both describing conditions which apply to less than 2% of the population, so at best we have to suffer from the errors of the 98%.
I’m not sure what you mean by “when you dissent when you have an overwhelming reason”. The article you linked to worded it “only when”, as if one should dissent more often, but it also warns against dissenting since it’s dangerous.
By the way, I don’t like most rationalist communities very much, and one of the reasons is that they have a lot of snobs who will treat you badly if you disagree with them. The social mockery I’ve experienced is also quite strong, which is strange, since you’d expect intelligence to correlate with openness, and the high rate of autistic people to counteract some of the conformity.
I also don’t like activism, and the only reason I care about the stupid ideas of the world is that all the errors are making life harder for me and the people that I care about. Like I said, not being an egoist is impossible, and there’s no strong evidence that all egoism is bad, only that egoism can be bad. The same goes for money and power, I think they’re neutral and both potentially good/bad. But being egoistic can make other people afraid of me if I don’t act like I don’t realize what I’m doing.
I think this is mostly correct. But optimization can kill passion (since you’re just following the meta and not your own desires). And common wisdom says “Follow your dreams” which is sort of naive and sort of valid at the same time.
I think believing something you think is false, intentionally, may be impossible. But false beliefs exist, so believing in false things is possible. For something where you’re between 10% and 90% sure, you can choose whether you want to believe in it or not, using the following algorithm:
Say “X is true because” and then allow your brain to search through your memory for evidence. It will find some.
The articles you posted on beliefs are about the rules of linguistics (belief in belief is a valid string) and logic, but how belief works psychologically may be different. I agree that real beliefs are internalized (they exist in system 1) to the point that they’re just part of how you anticipate reality. But some beliefs are situational and easy to consciously manipulate (example: self-esteem. You can improve or harm your own self-esteem in about 5 minutes if you try, since you just pick a perspective and a set of standards in which you appear to be doing well or badly). Self-esteem is subjective, but I don’t think the brain differentiates subjective and objective things; it doesn’t even know the difference.
And it doesn’t seem like you value truth itself, but that you value the utility of some truths, and only because they help you towards something you value more?
You may believe this because a worldview has to be formed through interactions with the territory, which means that a worldview cannot be totally unrelated to reality? You may also mean this: that if somebody has both knowledge and value judgements about life, then the knowledge is either true or false, while the value judgements are a function of the person. A happy person might say “Life is good” and a depressed person might say “Life is cruel”, and they might even know the same facts.
Online “black pills” are dangerous, because the truth value of the knowledge doesn’t imply that the negative worldview of the person sharing it is justified. Somebody reading the vasistha yoga might become depressed because he cannot refute it, but this is quite an advanced error in thinking, as you don’t need to refute it for its negative tone to be false.
But then it’s not about maximizing truth, virtue, or logic.
If reality operates by different axioms than logic, then one should not be logical.
The word “virtue” is overloaded, so people write as if the word is related to morality, but it’s really just about thinking in ways which make one more clear-sighted. So people who tell me to have “humility” are “correct” in that being open to changing my beliefs makes it easier for me to learn, which is rational, but they often act as if they’re better people than me (as if I’ve made an ethical/moral mistake in being stubborn or certain of myself).
By truth, one means “reality” and not the concept “truth” as the result of a logic expression. This concept is overloaded too, so that it’s easy for people to manipulate a map with logical rules and then tell another person “You’re clearly not seeing the territory right”.
Physics is our own constructed reality, which seems to act a lot like the actual reality. But I think an infinite number of physical theories could exist which predict reality with high accuracy. In other words, “There’s no one true map”. We reverse-engineer experiences into models, but one experience can create multiple models, and multiple models can predict the same experiences.
One of the limitations is “there’s no universal truth”, but this is not even a problem, as the universe is finite. But “universal” in mathematics is assumed to be truly universal, covering all things, and it’s precisely this which is not possible. But we don’t notice, and thus come up with the illusion of uniqueness. And it’s this illusion which creates conflict between people, because they disagree with each other about what the truth is, claiming that conflicting things cannot both be true. I dislike the consensus because it’s the consensus and not a consensus.
My bad for misrepresenting your position. Though I don’t agree that many hardcore rationalists care for humanistic causes. I see them as placing rationality above humanity, and thus preferring robots, cyborgs, and AIs over humanity. They think they prefer an “improvement” of humanity, but this functionally means the destruction of humanity. If you remove negative emotions (or all emotions entirely; after all, these are the source of mistakes, right?), subjectivity, and flaws from humans, and align them with each other by giving them the same personality, or get rid of the ego (it’s also a source of errors and unhappiness), what you’re left with is not human. It’s at best a sentient robot. And this robot can achieve goals, but it cannot enjoy them.
I just remembered seeing the quote “Rationality is winning”, and I’ll admit this idea sounds appealing. But a book I really like (EST: Playing the game the new way, by Carl Frederick) is precisely about winning, and its main point is this: You need to give up on being correct. The human brain wants to have its beliefs validated, that’s all. So you let other people be correct, and then you ask them for what you want, even if it’s completely unreasonable.
I meant nature as its source (of evidence/truth/wisdom/knowledge). “Nature” meaning reality/the dao/the laws of physics/the universe/GNON. I think most schools of thought draw their conclusions from reality itself. The only kind of worldviews which seems disconnected from reality is religions which create ideals out of what’s lacking in life and making those out to be virtue and the will of god.
What I dislike might not be rationality, but how people apply it, and psychological tendencies in the people who apply it. But upvotes and downvotes seem very biased in favor of consensus and verifiability, rather than simply being about getting what you want out of life. People also don’t seem to like being told accurate heuristics which seem immoral or irrational (in the colloquial sense that regular people use), even if they predict reality well. There’s also an implicit bias towards altruism which cannot be derived from objective truth.
About my values: they already exist even if I’m not aware of them; they’re just unconscious until I make them conscious. But if system 1 functions well, then you don’t really need to train system 2 to function well, and it’s a pain to force system-2 rationality onto system 1 (your brain resists most attempts at self-modification). I like the topic of self-modification, but that line of study doesn’t come up on LW very often, which is strange to me. I still believe that the LW community downplays the importance of human nature and psychology. It may even undervalue system-1 knowledge (street smarts and personal experience) and overvalue system-2 knowledge (authority, book smarts, and reasoning).
Honestly, the majority of the points presented here are not new and have already been addressed in
https://www.lesswrong.com/rationality
or https://www.readthesequence.com/
I got into this conversation because I thought I would find something new here. As an egoist I am voluntarily leaving this conversation in disagreement because I have other things to do in life. Thank you for your time.
The short version is that I’m not sold on rationality, and while I haven’t read 100% of the sequences it’s also not like my understanding is 0%. I’d have read more if they weren’t so long. And while an intelligent person can come up with intelligent ways of thinking, I’m not sure this is reversible. I’m also mostly interested in tail-end knowledge. For some posts, I can guess the content by the title, which is boring. Finally, teaching people what not to do is really inefficient, since the space of possible mistakes is really big.
Your last link needs an s before the dot.
Anyway, I respect your decision, and I understand the purpose of this site a lot better now (though there’s still a small, misleading difference between the explanation of rationality and how users actually behave. Even the name of the website gave the wrong impression).
Survival of a meme for a long time is weak evidence of its truth. It’s not zero evidence, because true memes have an advantage over false ones, but it’s not particularly strong evidence either, because there are other reasons for meme virulence besides truth, so the signal-to-noise ratio is not that great.
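To put a very rough number on “weak evidence” (a minimal sketch; the survival probabilities below are purely illustrative assumptions, not figures from this thread), the strength of the update is the likelihood ratio between a true meme and a false meme surviving this long:

$$\frac{P(\text{survives 2000 years}\mid\text{true})}{P(\text{survives 2000 years}\mid\text{false})}\approx\frac{0.3}{0.15}=2$$

Under those assumed numbers, survival only doubles the odds: prior odds of 1:10 against the claim move to about 1:5 against, which is roughly what “some, but not particularly strong, evidence” cashes out to.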
You should, of course, remember that Argument Screens Off Authority. If something is true, there should be some object-level arguments in favor of it, instead of just vague meta-reasoning about “Ancient Wisdom”.
If all the arguments for a particular thing are appeals to tradition, if you actually look into the matter and it turns out that even the most passionate supporters do not have anything object-level to back up their beliefs, if the idea has to shroud itself in ancestry and mystery lest it lack any substance, then that is stronger evidence that the meme is false.
Claims about “if you keep doing this thing, after a lot of hard work you will achieve these amazing results” seem memetically useful regardless of their truth value. It gives people motivation to join the group and work harder; and whenever someone complains about working hard but not getting the advertised results, you can dismiss them as doing it wrong, or not working hard enough.
Also, consider the status incentives. Claiming to achieve the results after a lot of hard work is high-status; admitting to not achieving the results is low-status; and the claims are externally unverifiable anyway.
I suspect this rule appeared as a consequence of many monks following the status incentives too obviously. Letting them continue doing so would be good for them but bad for the group, so the groups that made the taboo were more successful.
(Cynically speaking, the actual rule seems to be: Low-status people are not allowed to talk about their attainments. If you are high-status, others will make assumptions about your attainments, and you can just smile mysteriously and speak some generic wise words, or otherwise confirm it in a plausibly deniable way.)
Age and popularity of an idea or practice have some predictive power as to how useful it has been. Old and surviving is some evidence. Popular is some evidence. Old and NOT popular is conflicting evidence—it’s useful (or at least not very harmful) to some, perhaps limited by context or covariant factors that don’t apply elsewhere.
Whether your interpretation of a practice will get benefits for you should probably be determined by more specific analysis than “it worked for a small set of people in a very different environment, and never caught on universally”.
Some stuff just works, but for reasons unknown to the practitioner. Trial and error is a very powerful tool if used over many generations to “solve” a particular problem. But that does not mean anyone knows WHY it works.
I’m curious why you were downvoted, for you hit the nail on the head. For a short and concise answer, yours is the best.
Does anyone know? Otherwise I will just assume that they’re rationalists who dislike (and look down on) traditional/old things for moral reasons. This is not very flattering of me but I can’t think of better explanations.
Don’t overthink it. Two downvotes (or maybe one strong downvote) just means that there were one or two people who didn’t like the answer, and the rest either didn’t notice it or didn’t care enough to vote.
I understand that it sucks, but in general, if few people vote on a thing, there is a lot of noise.
Maybe people care way less about the difference between the two kinds of downvotes than I do. Even if the comment was bad or poorly communicated, I don’t think the disagree downvote is appropriate as long as the answer is correct. I see the two kinds of votes as “subjective” and “objective” respectively. I agree about the noise thing.
Other people have seen different evidence than you, and don’t necessarily agree about which answers are correct.
Then, I’d argue, they’re being wrong or pedantic. Since I don’t believe my evidence is wrong, it’s at most incomplete, and one could argue that an incomplete answer is incorrect in a sense, not because it says anything wrong, but because it doesn’t convey the whole truth. If either reason applied to anyone reading that comment, I’d have loved to discuss it with them, which is why I wrote that initial comment in a slightly provocative or cocky way (which I believe is not inappropriate, as it reflects my level of confidence quite accurately). This may conflict with some people’s intellectual virtues, but I think a bit of conflict is healthy/necessary for learning.
Why do you expect difficulty thinking of explanations to correlate with the only one you can think of being correct? It seems obvious to me that if you have a general issue with thinking of explanations, the ones you do think of will also be worse than average.
I worded that a bit badly; I meant I had a hard time thinking of better (meaning kinder) explanations, not better (meaning more likely) explanations. Across all websites I’ve been on in my life, I have posted more than 100000 comments (resulting in many interactions), so while things like psychoanalyzing people, assuming intentions, and making stereotypes are “bad”, I simply have too much training data, and too few incorrect guesses, not to do this. I do, however, intentionally overestimate people (since I want to talk to intelligent people, I give people the benefit of the doubt for as long as possible), but this means that mistakes are attributed to their intentions, personality, or values, rather than to careless mistakes or superficial heuristics. In this situation, I’ve assumed that they’re offended by the idea that traditional societies rival the scientific method in some situations. But it may be something more superficial, like “I find short comments to be low-effort”, “somebody else already said that”, or “I didn’t understand your explanation and I consider it your fault”. But like I said in another comment, I remember the first downvotes being disagreements (the red X) rather than regular downvotes, so I took it as meaning “this is wrong” rather than “I don’t like this comment”. Not that any of this matters very much, admittedly.
On the contrary, your guess did not take context into account and was bad. They were downvoted for answering in a way which didn’t answer the question, had many typos, and otherwise took more effort to read than the information it contained was worth.
Your comment “added more heat than light” for no reason, with no prompting. I am only making this comment because you posted a long paragraph explaining why you are so sure that you did a good job analyzing when it looks like you did a very poor one. Perhaps people in the past have not given useful feedback, so I will give a short piece of it: do not do this psychoanalysis until you have generated at least two alternate theories for what happened.
They did answer the question, there’s just a little bit of deduction required? I understood it at a glance and didn’t even notice any typos. Situations in which agents can learn something without understanding the reasons behind what they learn are quite common, it’s not a novel idea, it just raises a red flag in people who are used to scientific thinking. The general bias in society against tradition/spirituality/religion is too strong compared to the utility (even if not correctness) of these three.
That useless extra text in my previous comment saves a future comment or two by taking things into account in advance. I even wrote the “I didn’t understand the explanation” reaction above (as something one might have thought before downvoting the comment), so it’s not that I didn’t think of it; I just considered it an unlikely reaction, as I disagree with it.
The explanation is bad both in the sense of being unkind and in the sense of being unlikely. There are many explanations which are likelier, kinder, and simpler. I think you overestimate your skill at thinking of explanations, and commented for that reason. (Edit: that is, I think you should, if your likeliest explanation is of this quality, consider yourself not to know the true explanation, rather than believing the one you came up with).
I don’t see it as unkind, and I don’t think “trial and error” is a wrong explanation either. It seems very unlikely that ideas which are strictly harmful stick around for a very long time; so much so that the selection must necessarily tend in the other direction (I won’t attempt to prove this though).
I’m good at navigating hypothesis space, so any difficulties are likely related to theory of mind of people who are very different from myself (being intelligent but out of sync in a way). Still, I don’t buy the idea that people can’t or shouldn’t do this. You’re even guessing at my intentions right now, and if somebody is going to downvote me for acting in bad faith, they’ll also need to guess at my intentions. So this seems like a common and sensible thing to do in moderation, rather than an intellectual sin of sorts.
Sorry, to clarify, your explanation is the one I’m talking about, not Anders’.
You don’t think the entire western world is biased in favor of science to a degree which is a little naive? In addition, I think that people idolize intelligence and famous scientists, that they largely consider people born before the 1950s to have repulsive moral values, that they dislike tradition, that they consider it very important to be “educated”, that they overestimate book smarts and underestimate the common sense of people living simple lives, and that they believe that things generally improve over time (such that older books are rarely worth bothering with). I also believe that social status in general makes people associate with newer ideas over older ones. There’s also a lot of people who have grown up around old, strict, and religious people and who now dislike them. It doesn’t help that more intelligent people are higher in openness in general, and that rationalism correlates with a materialistic and mechanical worldview.
Many topics receive a lot more hostility than they deserve because of these biases, usually because they’re explained in a crazy way (for instance, Carl Jung’s ideas are often called pseudoscience, and if you take the bible literally then it’s clearly wrong) or because people associate them with immorality (say, the idea that casual sex is disliked by traditional societies because they were mean and narrow-minded, and not because casual sex caused problems for them, or because it might cause problems for us).
A lot of things are disliked or discarded despite being useful, and a lot of wisdom is in this category. All of this was packed into the message that “people dislike old things because they sound irrational or immoral” (people tend to dislike long comments).
… No, I mean I’m discussing your statement “I’m curious why you were downvoted.… I will just assume that they’re rationalists who dislike (and look down on) traditional/old things for moral reasons. This is not very flattering of me but I can’t think of better explanations.” I think the explanation you thought of is not a very likely one, and that you should not assume that it is true, but rather assume that you don’t know and (if you care enough to spend the time) keep trying to think of explanations. I’m not taking any position on Anders’ statement, though in the interests of showing the range of possibilities I’ll offer some alternative explanations for why someone might have disagree-voted it.
-They might think that stuff that works is mixed with stuff that doesn’t
-They might think that trial and error is not very powerful in this context
-They might think that wisdom which works often comes with reasonably-accurate causal explanations
-They might think that ancient wisdom is good and Anders is being unfairly negative about it
-They might think that ancient wisdom doesn’t usually apply to real problems
Et cetera. There are a lot of possible explanations, and I think being confident it’s the specific one you thought of is unwarranted.
I’ve reread the comment thread and I think I’ve figured out what went wrong here. Starting from a couple of posts ago, it looks like you were assuming that the reason I thought you were wrong was that I disagreed with your reasons for believing that people sometimes feel that way, and were trying to offer arguments for that point. I, on the other hand, found it obvious that the issue was that you were privileging the hypothesis, and was confused about why you were arguing the object-level premises of the post, which I hadn’t mentioned; this led me to assume it was a non sequitur and respond with attempted clarifications of the presumed misunderstanding.
To clarify, I agree that some people view old things negatively. I don’t take issue with the claim that they do; I take issue with the claim that this is the likeliest or only possible explanation. (I do, however, think disagree-voting Anders’ comment is a somewhat implausible way for someone to express that feeling, which for me is a reason to downweight the hypothesis.) I think you’re failing to consider sufficient breadth in the hypothesis-space, and in particular the mental move of assuming my disagreement was with the claim that your hypothesis is possible (rather than several steps upstream of that) is one which can make it difficult to model things accurately.
That sounds about right. And “people sometimes feel that way” is a good explanation for the downvote in my opinion. I was arguing the object-level premises of the post because the “disagree” downvote was factually wrong, and this factual wrongness, I argue, is caused by a faulty understanding of how truth works, and this faulty understanding is most common in the western world and in educated people, and in the ideologies which correlate with western thought and academia.
If you disagree with something which is true, I think the only likely explanations are “Does not understand” and “Has a dislike of”, and the bias I pointed out covers both of these possibilities (the former is a “map vs territory” issue and the latter is a “morality vs reality” issue).
I think you figured out what went wrong nicely, but in the end the disagreement remains. I still consider my point likely. If somebody comes along and tells me that they disagreed with it for other reasons, I might even argue that they’re lying to themselves, as I’m way too disillusioned to think that a “will to truth” exists. I think social status, moral values, and other such things are stronger motivators than people will admit even to themselves.
I referred to that too (specifically, the assumption). By true I meant that the bias which I think is to blame certainly exists, not that it was certain to be the main reason (but I’d like to push against this bias in general, so even if it only applies to some of the people who see my comment, I think it’s an important topic to bring up, and that it likely has enough indirect influence to matter).
To address your points:
1: Of course it’s mixed. But the mixed advice averages out to be “wise”, something generally useful.
2: I think it’s necessarily trial and error, but a good question is “does the wisdom generalize to now?”.
3: This of course depends on the examples that you choose. A passage on the ideal age of marriage might generalize to our time less gracefully than a passage on meditation. I think this goes without saying, but if we assume these things aren’t intuitive, then a proper answer would be maybe 5 pages long.
4: Would interpreting it as “negative” not mean that it has been misunderstood? That one can learn without understanding is precisely why they could prosper with a level of education which pales in comparison to that of modern times. We learned that bad smells were associated with sickness way before we discovered germs. If our tech requires intelligence to use, then the lower quartile of society might struggle. And with the blind approach you can use genius strategies even if you’re mediocre.
5: Along with 4, I think this is an example of the bias that I talked about above. What we think of as “real” tends to be rather disconnected from humanity. Religion and traditional ways of living seem to correlate with mental health, so the types of people who think that wealth inequality is the only source of suffering in the world are too materialistic and disconnected. Not to commit the naturalistic fallacy, but nature does optimize in its own way, and imitating nature tends to go much better than “correcting” it.
When it comes to being up- or down-voted, it’s a real gamble. The same mechanisms are at play here as on other social (media) platforms, i.e., there are certain individuals, trends, and thought patterns that are hailed and praised, and vice versa, without any real justification. But hey, that is what makes us human, these unexplainable things called feelings.
PS: perhaps a new hashtag on X would be appropriate: #stopthedownvoting
Yeah, I’m asking because downvotes are far too ambiguous. I think they’re ambiguous to the point that they don’t make for useful feedback (you can’t update a worldview for the better if you don’t know what’s wrong with it). I don’t think downvotes are necessarily bad as a concept though. And about humanity: sure, and on any other website I’d largely have agreed with your view, but when I talk about intellectual things I largely push my own humanity to the side. And even if somebody downvotes because of irrational feelings, I’m interested in what those feelings are.
But I know that people on here frequently value truth, and I’m quite brutal to those values, as I think truth is about as valid a concept as a semicolon (the language is just math/logic rather than English). And if we are to talk about Truth with a capital T, then we’re speaking about reality, which is more fundamental than language (the territory, reality, is important, but I rarely see any good maps, even on this website. So when Taoists seem to suggest throwing the map away entirely, I do think that’s a good idea for everyday life. It’s only for science, research, and tech that I value maps). That makes me an outlier though, haha.
I didn’t downvote it, but didn’t upvote it either. This answer was posted after other answers that contained this answer as a subset, with deeper analysis.
That makes sense; I just evaluated the comment in isolation. But I believe that the first few downvotes were marked as “incorrect” (the red X) rather than regular downvotes (the down arrow), which is why the feedback struck me as simply mistaken (as the comment is not false).
I’ve noticed, by the way, that most comments posted tend to get downvoted initially and then return to 0 over time. There may be a few regular, highly active users with high standards or something, and more casual users with lower standards who balance them out over time. I’ve gone to −10 and back before.
I understood why you asked; I am also interested in general in why people upvote or downvote something. It could be really good information and food for thought.
Yeah, who doesn’t want capital-T truth… But I have come to appreciate the subjective experience more and more. I like science and rational thinking, it has gotten us pretty far, but who am I to question someone’s experience? If someone met ‘the creator’ on an ayahuasca journey or thinks that love is the essence of the universe, who am I to judge? When I see the statistics on the massive use of anti-depressants, it is obvious to me that we can’t use rational and logical thinking to think our way out of our feelings. What are rationality and logical thinking good for if, in the end, they can’t make us feel good?
It seems plausible to me that there is a sort of selection process in which people are creating ostensible-wisdom all the time, but only some of that wisdom gets passed along to the next generation, and the next, and so forth, while a lot of it gets discarded. If some example of wisdom is indeed ancient, then you can by virtue of that have at least some evidence that it has passed through this selection process.
To what extent this selection process selects for wisdom that actually earns that designation I’ll leave as an exercise for the reader.