Why Truth?
The goal of instrumental rationality mostly speaks for itself. Some commenters have wondered, on the other hand, why rationalists care about truth. Which invites a few different answers, depending on who you ask; and these different answers have differing characters, which can shape the search for truth in different ways.
You might hold the view that pursuing truth is inherently noble, important, and worthwhile. In which case your priorities will be determined by your ideals about which truths are most important, or about when truthseeking is most virtuous.
This motivation tends to have a moral character to it. If you think it your duty to look behind the curtain, you are a lot more likely to believe that someone else should look behind the curtain too, or castigate them if they deliberately close their eyes.
I tend to be suspicious of morality as a motivation for rationality, not because I reject the moral ideal, but because it invites certain kinds of trouble. It is too easy to acquire, as learned moral duties, modes of thinking that are dreadful missteps in the dance.
Consider Spock, the naive archetype of rationality. Spock’s affect is always set to “calm,” even when wildly inappropriate. He often gives many significant digits for probabilities that are grossly uncalibrated.1 Yet this popular image is how many people conceive of the duty to be “rational”—small wonder that they do not embrace it wholeheartedly.
To make rationality into a moral duty is to give it all the dreadful degrees of freedom of an arbitrary tribal custom. People arrive at the wrong answer, and then indignantly protest that they acted with propriety, rather than learning from their mistake.
What other motives are there?
Well, you might want to accomplish some specific real-world goal, like building an airplane, and therefore you need to know some specific truth about aerodynamics. Or more mundanely, you want chocolate milk, and therefore you want to know whether the local grocery has chocolate milk, so you can choose whether to walk there or somewhere else.
If this is the reason you want truth, then the priority you assign to your questions will reflect the expected utility of their information—how much the possible answers influence your choices, how much your choices matter, and how much you expect to find an answer that changes your choice from its default.
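To make the expected-utility-of-information idea concrete, here is a minimal sketch of the chocolate-milk decision as a value-of-information calculation. All the numbers (the prior, the utilities, the walking costs) are hypothetical, invented purely for illustration; nothing here comes from the post itself.

```python
# Value-of-information sketch for the chocolate-milk example.
# All numbers are hypothetical illustrations.

p_stock = 0.7      # prior probability the local grocery has chocolate milk
u_milk = 10.0      # utility of getting chocolate milk
cost_local = 1.0   # cost of walking to the local grocery
cost_far = 3.0     # cost of walking to a farther store that surely has it

# Expected utility of each action if you must choose blind:
eu_local = p_stock * u_milk - cost_local   # 0.7 * 10 - 1 = 6.0
eu_far = u_milk - cost_far                 # 10 - 3 = 7.0
eu_blind = max(eu_local, eu_far)           # best blind choice: go far, 7.0

# If you could learn the answer first (say, by calling ahead), you would
# go local when it's in stock and far otherwise:
eu_informed = p_stock * (u_milk - cost_local) + (1 - p_stock) * (u_milk - cost_far)
# = 0.7 * 9 + 0.3 * 7 = 8.4

value_of_information = eu_informed - eu_blind  # 8.4 - 7.0 = 1.4
print(f"Knowing the answer is worth {value_of_information:.1f} utility")
```

If the answer could never change which store you walk to, the information would be worth exactly zero: a question earns its priority only through its chance of flipping your default choice, which is the point of the paragraph above.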
To seek truth merely for its instrumental value may seem impure—should we not desire the truth for its own sake?—but such investigations are extremely important because they create an outside criterion of verification: if your airplane drops out of the sky, or if you get to the store and find no chocolate milk, it’s a hint that you did something wrong. You get back feedback on which modes of thinking work, and which don’t.
Another possibility: you might care about what’s true because, damn it, you’re curious.
As a reason to seek truth, curiosity has a special and admirable purity. If your motive is curiosity, you will assign priority to questions according to how the questions, themselves, tickle your aesthetic sense. A trickier challenge, with a greater probability of failure, may be worth more effort than a simpler one, just because it’s more fun.
Curiosity and morality can both attach an intrinsic value to truth. Yet being curious about what’s behind the curtain is a very different state of mind from believing that you have a moral duty to look there. If you’re curious, your priorities will be determined by which truths you find most intriguing, not most important or most useful.
Although pure curiosity is a wonderful thing, it may not linger too long on verifying its answers, once the attractive mystery is gone. Curiosity, as a human emotion, has been around since long before the ancient Greeks. But what set humanity firmly on the path of Science was noticing that certain modes of thinking uncovered beliefs that let us manipulate the world—truth as an instrument. As far as sheer curiosity goes, spinning campfire tales of gods and heroes satisfied that desire just as well, and no one realized that anything was wrong with that.
At the same time, if we’re going to improve our skills of rationality, go beyond the standards of performance set by hunter-gatherers, we’ll need deliberate beliefs about how to think—things that look like norms of rationalist “propriety.” When we write new mental programs for ourselves, they start out as explicit injunctions, and are only slowly (if ever) trained into the neural circuitry that underlies our core motivations and habits.
Curiosity, pragmatism, and quasi-moral injunctions are all key to the rationalist project. Yet if you were to ask me which of these is most foundational, I would say: “curiosity.” I have my principles, and I have my plans, which may well tell me to look behind the curtain. But then, I also just really want to know. What will I see? The world has handed me a puzzle, and a solution feels tantalizingly close.
1 E.g., “Captain, if you steer the Enterprise directly into that black hole, our probability of surviving is only 2.234%.” Yet nine times out of ten the Enterprise is not destroyed. What kind of tragic fool gives four significant digits for a figure that is off by two orders of magnitude?
“Yet nine times out of ten the Enterprise is not destroyed. What kind of tragic fool gives four significant digits for a figure that is off by two orders of magnitude?”
One who doesn’t understand the Million To One Chance principle that operates in fictional universes. If the Star Trek universe didn’t follow the laws of fiction, the Enterprise would have been blown up long ago. ;)
See also: Straw Vulcan
Maybe in ninety-eight universes out of a hundred it does blow up and we just see the one that’s left; and he’s actually giving an accurate number. :P
The TV show version of the anthropic principle: all the episodes where the Enterprise does blow up aren’t made.
Except one.
In the “Star Trek: Judgment Rites” game there’s a spot where Spock gives ridiculously precise odds, and Kirk comments that they seem “better than usual.” Spock then clarifies that he has begun factoring Kirk’s history of prevailing when the odds are against him into his calculations.
And do keep in mind that the audience doesn’t necessarily see all the times that low-odds plans don’t work out.
Does this sentence contain a typo?
“If the iron approaches your face, and you believe it is hot, and it is cool, the Way opposes your fear. If the iron approaches your face, and you believe it is cool, the Way opposes your calm.”
I like this, but I have no idea what it means, since the determiner “this” has a split direction: it could point either to the sentence it is embedded in or to the following one...
I can’t spot the mistake in either :s
Fixed.
Thanks, Eliezer!
“Are there motives for seeking truth besides curiosity and pragmatism?”
I can think of several that have showed up in my life. I’m offering these for consideration, but not claiming these are good or bad, pure or impure etc. Some will doubtless overlap somewhat with each other and the ones stated.
As a weapon. Use it to win arguments (sometimes the point of an argument is to WIN, never mind learning the truth. I’ve got automatic competitiveness I need to keep on a short leash). Use it to win bar room bets. Acquire knowledge about the “buttons” people have, and use it to manipulate them. Use it to thwart opposition to my plans, however sleazy. (“What are we going to do tonight, Brain?” … )
As evidence that I deserve an A in school. Even if I never have a pragmatic use for the knowledge, there is (briefly) value in demonstrably having the knowledge.
As culture. I don’t think I have ever found a practical use for the facts of history ( of science, of politics, or of art ), but they participated in shaping my whole world view. Out of that, I came out of retirement and dedicated myself to saving humanity. Go figure.
As a contact, as in, “I know Nick Bostrom.” (OK, that’s a bit of a stretch, but it is partly informational.)
As pleasure & procreation, as in, “Cain knew his wife.” ;-)
“To make rationality into a moral duty is to give it all the dreadful degrees of freedom of an arbitrary tribal custom. People arrive at the wrong answer, and then indignantly protest that they acted with propriety, rather than learning from their mistake.” Yes. I say, “Morality is for agents that can’t figure out the probable consequences of their actions.” Which includes me, of course. However, whenever I can make a good estimate, I pretty much become a consequentialist.
Seeking knowledge has, for me, an indirect but huge value. I say: Humanity needs help to survive this century, needs a LOT of help. I think Friendly AI is our best shot at getting it. And we’re missing pieces of knowledge. There may be whole fields of knowledge that we’re missing and we don’t know what they are.
I would not recommend avoiding lines of research that might enable making terribly powerful weapons. We’ve already got that problem, there’s no avoiding it. But there’s no telling what investigations will produce bits of information that will trigger some human mind into a century-class breakthrough that we had no idea we needed.
The significant digit anecdote reminds me: why does the Dow Jones give its average with two decimal points?
I do have a few problems, though:
1) It is written: “The first virtue is curiosity.” - Written by whom?
2) “…curiosity is an emotion…” - Says who?
3) “To seek truth merely for its instrumental value may seem impure…” - Why? To whom?
4) “If we want the truth, we can most effectively obtain it by thinking in certain ways” - and if you think the way I tell you to think, you’ll wind up with my truth.
By Eliezer.
From TV Tropes:
Sigh.
I know! Is the world not more beautiful when one can understand how it works?
Let’s not forget arguably the most important reason.
Because it makes us feel good.
We can feel superior to others, because we can do something that few other people can. We can collect instances where our approach is beneficial and use that to validate our self worth. And we can form a community that validates our strengths and ignores our weaknesses. All perfectly reasonable motivations (provided our satisfaction is a reasonable goal).
In my own field (Computer Vision), there are those who pursue it rationally (with rigorous mathematical analysis) and those who pursue it heuristically (creating a variety of systems and testing them on small samples). These approaches seem to mirror the determined search for truth and the pragmatic “go with what feels like it works” attitude. Without rigorously analysing them (although this may be possible), neither approach is a clear winner in delivering techniques that are practically applied or used as the basis for further work. I think it is interesting to apply this meta-analysis to reason itself: can we scientifically determine whether approaching problems reasonably conveys an advantage? Is there an optimal balance?
“Rationality” is what I would call the meta-analysis which concludes that both approaches are equally valid in this field.
By “most important reason” do you mean “most compelling justification” or “predominant cause”?
I would suggest both, and I would add that I don’t think this inherently diminishes the value of pursuing truth. I am increasingly of the belief that in order to be content it is necessary to pick one’s community and embrace its values. What I love about this community is its willingness to question itself as much as the views of others. I think it’s useful to acknowledge what we really enjoy and to be wary of explanations that attribute objective value to enjoyable activities. Doing so risks erasing self-doubt and can lead to the adoption of strong moral values that distort our lives to such an extent that they ultimately make us miserable.
Morality doesn’t need to have anything to do with society or duty. Consider the case of a rational ethical egoist, to whom acting in one’s self-interest and for one’s own values is virtuous.
If that person is a human, and thinks that ethical egoism does not have anything to do with society or duty, then they are mistaken.
Why?
Maintaining interpersonal relationships is vital to the human condition. As Aristotle put it, “The solitary life is perhaps suitable for a god or a beast, but not for a man”. Friendships are a necessary part of flourishing for humans, and aside from that we are almost always in a context where our success depends upon our interactions with others.
I’d guess because humans often carry concepts of duty and the like, and have experiences heavily contingent on social and societal contexts.
There is more discussion of this post here as part of the Rerunning the Sequences series.
I’ll be honest, I have a serious problem with hypocrites, and so I warn everyone I know if they start heading down that path. In your article, you say that morality is perhaps the most suspect motivation for rationality. Yet you yourself, by putting up these articles and arguing that everyone should use rational thought, seem to have a moral motivation for rationality. I am not saying that this is your only motivation, but it seems to be the motivation behind these posts. However, I do appreciate that you respect morality by mentioning how important it is in pursuing paths that will not result in horrible consequences. I think that maybe you should allow yourself to admit that morality is a good motivator when used alongside other good motivations for seeking truth.
Here’s an interesting take on the “morality” side: It may be morally incumbent on some to look behind the curtain, and not for others. Since knowing about biases can hurt people, it may well be that those who are “fit” to look behind the curtain are in fact required to be the guardians of said curtain, forbidding anyone without the proper light and knowledge from looking behind it, but acting upon the knowledge gained for the benefit of society.
… Hence, the Conspiracy.
I am trying to win an argument, and I am having trouble defeating the following claim:
It can, under certain scenarios, be instrumental (in the sense of achieving values) to believe in something which is false—usually by virtue of a placebo effect. For example: believing you are more likely to get a job offer than you really are, so you are confident at the job interview.
The counterargument I want to make, in my head, is that if you have the ability to deceive yourself to that extent—to make yourself believe something that is false—then you have the ability to believe that you won’t get the job, but pretend that you think you will. I don’t feel like that’s a very solid or reassuring argument, though.
I think the best response to the argument for instrumentally useful false beliefs is to think a little about the causal mechanism. Surely it is not the case that Omega reads your minds, sees your false confidence, and orders you hired.
As you noted, a more plausible mechanism is that the false confidence causes changes in affect (i.e. appearing confident) that are beneficial for the task. Or perhaps false over-confidence cancels false under-confidence that would have caused anxiety that would be detrimental for the task.
Once the causal chain is examined, the next thing to ask is whether the beneficial intermediate effects can be caused by something other than false belief. If so, you have answered the claim you are responding to. If not, you need to examine why you believe that to be impossible.
You should also examine the costs of each method of achieving the intermediate effects. Even if there are other ways available, maybe self-deception is the easiest, and the costs of that particular incorrect belief are small.
This quote conflates “true beliefs” and what we may call “correct beliefs”. True beliefs are ones which assign high probability to the truth, i.e. the actual state of things. Correct beliefs are ones which follow from an agent’s priors and observations. The former are objective, the latter subjective but not irrational. If the iron has been cool the last 107 times it has approached your face, but hot this 108th time, your belief that it is cool is correct but false (perhaps better terms are needed).
Also, a belief is not binary. You may be 99.8% sure that the iron is hot and still rationally fear it. A hot iron on your face is far more costly than a needless avoidance.
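To spell that cost asymmetry out as a one-line decision rule (with hypothetical costs, chosen only for illustration): avoiding the iron is rational whenever the expected cost of not flinching exceeds the cost of flinching,

$$p_{\text{hot}} \cdot C_{\text{burn}} > C_{\text{avoid}},$$

so with, say, $C_{\text{burn}} = 1000$ and $C_{\text{avoid}} = 1$, even a 0.2% chance that the iron is hot makes flinching the better bet, since $0.002 \times 1000 = 2 > 1$.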
There’s an interesting duality between morality as “the belief that truthseeking is pragmatically important to society” and morality as the result of social truthseeking, which is closer to the usual sense, or rather what the usual sense would ideally be. I’d like to see this explored further if anyone has a link in mind.
The LessWrong FAQ indicated that there is value in replying to old content, so I’m posting anyway. Context might be in order, so here’s what we are talking about:
You and I had a similar take on this bit of Yudkowsky’s post. Maybe you would call my stance “truthseeking as the result of morality” instead of your “morality as the result of social truthseeking”.
The problem Yudkowsky is describing sounds like it comes from entangling the “logical” archetype with “morality”. This means any behavior which differs from this archetype becomes “immoral”, regardless of whether it is actually Bayesian reasoning or not. Personally, I would phrase this as “declaring rationality to be (a) moral value”. This specifically excludes cases where people place intrinsic value on some specific result, and then place instrumental moral value on rationality, as a tool to achieve the desired results. This is much like what effective altruism is doing, after all.
Hmm, couldn’t find a link directly on this site. Figured someone else might want it too (although a google search did kind of solve it instantly).
I’m not convinced that this post actually says anything. If seeking the truth is useful for any specific reason, then people who see some benefit from it will do so, and if it isn’t useful, then they won’t. Actually writing this out has made me think both this post and my comment haven’t really said much, but I think that’s because this discussion is too abstract to have any real use or meaning. Ideas which are true and work will work, ideas that aren’t won’t, and that’s all that needs to be said, never mind this business about rationality and truth and curiosity.
Would that this were true.
Indeed, if that were all there was to it, nothing would need to be said at all, as that’s a tautology. But people manage to fail at noticing when things do / don’t work anyway, and false ideas stick around a very long time.
I just find it very unlikely that the specifics of how this post is constructed have much of an effect on correcting this issue.
Ah, but the seeker needs to find out whether the answer—the truth—is beneficial. You can’t make a decision without knowing the answer; if you don’t know the truth, that’s just guessing.
My friend argues that believing in an afterlife (i.e. religion) is beneficial for some people because it gives them a (patently false!) sense of “security”. So why tell them it’s wrong to believe such a thing?
My answer is a) the fact that there’s no afterlife is the truth, as far as humans know (i.e. as far as the evidence—or lack of evidence—shows); and b) it’s wrong to believe in such a falsehood—in the sense that most people with such a belief tend to be either less ethical/moral (because they’ll fix up the imbalance ‘later’), or irrationally over-moral or hyper-ethical because they don’t want to risk their slot in eternity’s gravy train. Either way, they act irrationally and abnormally, and for the wrong reasons!
I can’t think of much in life that could be worse than that. What a horrible life!
It is instructive to review this essay after reading the sequence regarding metaethics, morality, and planning algorithms. It gives you a deeper insight into how “morality as a motivation” might have come about and what its flaws are.
Perhaps this is a minor nitpick or technicality, but that’s probably not the best example, because keeping the same probability estimate actually makes sense in this instance. To alter it would be a form of survivorship bias. This is because there is no way he could have observed the opposite during the previous 10 attempts, since he would no longer be alive to have those memories if he had.
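A quick way to see the selection effect described above is to simulate it. This is a minimal Monte Carlo sketch with hypothetical numbers: whatever the true per-maneuver survival probability is, every crew still around to report an outcome reports the same perfect record, so the record by itself cannot distinguish the values of p.

```python
import random

def count_survivors(p_survive: float, attempts: int = 10, trials: int = 100_000) -> int:
    """Count ships whose crews survive all `attempts` dangerous maneuvers."""
    return sum(
        all(random.random() < p_survive for _ in range(attempts))
        for _ in range(trials)
    )

# Three hypothetical true survival probabilities, including Spock's 2.234%.
for p in (0.9, 0.5, 0.02234):
    n = count_survivors(p)
    # Only the *number* of surviving crews varies with p; every surviving
    # crew's memory is identical: ten successes out of ten attempts.
    print(f"p_survive={p}: {n} of 100,000 crews survive, each with a spotless record")
```

The people inside the story only ever see a spotless record; the quantity that actually varies with p, the count of ships that didn’t make it, is exactly what they cannot observe.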
“For this reason, I would also label as “morality” the belief that truthseeking is pragmatically important to society...”
This seems like a naive understanding of what morality is. It seems like you are referring to a certain subset of ethics, in this case utilitarianism (do what promotes the greatest good among the greatest number). But this is just one part of a class of normative ethical theories. The class to which I’m referring is consequentialism, where essentially the end justifies the means. I’d rather not get off topic here, and will simply state that a morality-driven pursuit of truth does not necessarily mean that the person is motivated by the “greater good”.
Also, Spock’s calculation is off by one order of magnitude, not two. He predicts, roughly, a 98% chance of destruction, yet you say that in practice the Enterprise is destroyed 10% of the time. That’s just about one order of magnitude off.
I think you’re misinterpreting Yudkowsky. He’s not saying that all ethics is pragmatic. He’s saying that pragmatics is ethics. Previously in the paragraph, he listed other, non-pragmatic ethical reasons to seek truth.
As for the orders of magnitude, it’s log10(0.9) − log10(0.02234) ≈ 1.6 orders of magnitude. That’s closer to 2 than to 1.
Remember that that’s an 11-year-old post you’re replying to.
Hey, eleven-year-old posts are just posts that lack life experience.
“Curiosity, as a human emotion, has been around since long before the ancient Greeks.”
Is that a reference to Pandora’s Box or am I off base?
Yes it is.
I am guessing that the link “what truth is” is meant to point to http://yudkowsky.net/rational/the-simple-truth
Thanks, fixed as well!
Please restore apostrophes...
“Our probability of surviving”—probably extrapolated from other similar objects going through black holes. The Enterprise, because of the laws of fiction, defies the odds; but it may only mean that some other ships get destroyed even more frequently, and the Enterprise has “five points… for sheer dumb luck!”
I think it’s great that the apostrophes were left out. Apart from possessive apostrophes, which I think should be used, apostrophes are an extra effort (especially when texting) that add no extra meaning or clarification.
I mean, there are minimal pairs (mostly in cases where possessive apostrophes are for some reason not used, like its—it’s, who’s—whose). But overall using them just helps readability (speaking as a non-native).
I am not sure, but there seem to be a couple of apostrophes missing in the sentence
Truth is important because it is instrumental to all areas of life. By increasing our overall epistemic rationality, we will understand the world better, and so be able to act (or withhold action) in ways that increase our quality of life. Without epistemic rationality, instrumental rationality may be incoherent and misdirected, seeking goals that are counterproductive to the agent’s and/or common wellbeing. For example, a person might highly value outcome X, and practice instrumental rationality to achieve that outcome. However, if they had a better understanding of epistemic rationality, they might no longer value outcome X and instead more highly value different outcomes. Epistemic rationality allows us to “optimize” our values.
Optimizing our values and behaviour increases common wellbeing; therefore I think truthseeking and epistemic rationality are a moral imperative for everyone. I believe that the desire for increased wellbeing is actually the most important reason for truthseeking, and since it affects everyone, it is a moral and civic duty.
Regarding the Spock probability reference, I’ve always imagined that TV shows and movies either take place in the parallel universe where very specific events happen to take place (e.g. the universe where the ‘bad guys’ miss the ‘good guys’ with all of their bullets despite being trained soldiers), or, in the case of the Enterprise, the camera follows the adventures of the one ship that is super lucky. Perhaps the probability of survival really is 2.234%, and the Enterprise is just the 1-in-1,000 ship that keeps surviving (because who wants the camera to follow those other ships?).
Most apostrophe removals didn’t cause any problems, but the “were” in the paragraph before the last one had me confused for a split second.
For every time I am curious about “how things are,” I would also like to be curious about “what to do.” (Curious pragmatism.)
What about a moral duty to be curious?
Thanks, Eliezer.
I am surprised that your conclusion lists only three motivations for seeking out truth: moral duty, pragmatism, and curiosity, even though you talk about manipulating the world in your discussion of curiosity.
I would separate curiosity, where the benefit is the enjoyment of understanding, from power seeking, which allows shaping the world more efficiently.
Certainly in the scientists I know, those motivations are often mixed. The search for exotic particles in physics is closer to curiosity and the “pleasure of finding things out”.
The quest for truth in applied physics to build a nuclear bomb has more to do with power seeking.
Two very different motivations, no?
This might seem obvious (whether right or wrong) to some of you, but broadly I think curiosity must be a significant factor that came along with human evolution (likely going back to early hominids). Animals are curious, sure, but they don’t really have a drive for knowledge outside of what is immediately practical. Humans might be the only species with even the brainpower and self-awareness to seek facts for the pleasure of knowledge, just as we have progressed into other subjective enjoyments like food and music.
Yes. Morality changes every few years; what we were taught as children does not hold much value in this day and age. Instead of morality being the base of rationality, we should keep rationality as the base of morality. But I doubt that too.
So, the motive should be to gain the ability to manipulate the world?
I think “should” isn’t quite the right frame here. You can have whatever motivations you want. But it’s an if-then fact about the world that science was helpful for people seeking to affect the world.
(Also note that “manipulate” sometimes has negative connotations, which isn’t really what’s meant here.)
Science is our tool for manipulating the world; it is an instrument of truth. You cannot settle what reality is merely by defining words such as “rational,” deciding what it is and isn’t. Science stands in for truth, and rationality stands in for truth as well: in practice the definition of rationality gets substituted for science, and this becomes our working definition of rationality going forward. But take care not to define science in terms of rationality; science is a tool, as is rationality.
In rationality, as in science, “curiosity, pragmatism, and quasi-moral injunctions” are injected into our research questions and colour our understanding of both the world and the truth.
To use science as a tool is to seek the truth; to use rationality is to seek the truth as well. We must apply a scientific approach and method to our rationality.