Rationality Quotes May 2014
Another month has passed and here is a new rationality quotes thread. The usual rules are:
Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
Do not quote yourself.
Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you’d like to revive an old quote from one of those sources, please do so here.
No more than 5 quotes per person per monthly thread, please.
Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
When another asserted something that I thought an error, I deny’d myself the pleasure of contradicting him abruptly, and of showing immediately some absurdity in his proposition; and in answering I began by observing that in certain cases or circumstances his opinion would be right, but in the present case there appear’d or seem’d to me some difference, etc.
I soon found the advantage of this change in my manner; the conversations I engag’d in went on more pleasantly. The modest way in which I propos’d my opinions procur’d them a readier reception and less contradiction; I had less mortification when I was found to be in the wrong, and I more easily prevail’d with others to give up their mistakes and join with me when I happened to be in the right.
Benjamin Franklin
Unfortunately this self-debasing style of contradiction has become the norm, and the people I talk to can instantly notice when I am pouring sugar on top of a serving of their own ass. Perhaps they are simply noticing changes in my tone of voice or body language, but with sufficiently intellectual partners I’ve noticed that abruptly contradicting them often startles them into thinking, though I avoid this in everyday conversation with non-intellectuals for fear of increasing resentment.
I would love to hear what Richard Dawkins would say in reply to this quote.
Personally, I think it’s great advice—challenging people immediately and directly is often not a good long-term strategy.
Dawkins, in arguments with theists, homeopaths, etc., is not trying to convince his interlocutors; nor are most of the other well-known atheist public figures. The aim is to convince bystanders — the private atheist who is unsure whether to “come out”, the theist who’s all but lost his faith but isn’t sure whether atheism is a position one may take publicly, the person who’s lukewarm on religious arguments but has always had a rather benign and respectful view of religion, etc.
In private conversations with someone whose opinions are of concern to you, Franklin’s advice makes sense. The public arguments of Dawkins & Co. are more akin to performances than conversations. I think he achieves his aim admirably. I, for one, have little interest in watching people get on a public stage and have exchanges laden with “in certain cases or circumstances...” and other such mealy-mouthed nonsense.
I don’t know of nontrivial cases and circumstances where homeopaths are right about homeopathy (and where their statements are taken as normally understood).
We could imagine cases where people underwent homeopathic treatments and saw improvements in their symptoms for other reasons. For example, colds usually stick around for 3-4 days and dissipate without treatment, so you take a homeopathic medicine, two days later your cold vanishes, and you think “It worked.” The correlation-causation error might seem obvious to skeptics, but it isn’t to homeopathy believers.
As I interpret the Franklin quote, you provisionally accept (don’t immediately and explicitly challenge) the claim that the homeopathic medicine made the cold go away, so you can establish a further dialogue with some chance (let’s just say 10%) of causing doubt in the other person. If you immediately say “There is no way that the homeopathic medicine had any effect,” the person will get angry at you. You’ll probably have a smaller chance of changing their mind, and they won’t like you, which generally doesn’t help you accomplish goals.
With Franklin’s approach, I think it doesn’t even matter that there are no merits to a homeopath’s treatments (or insert whichever group); you need to cede some ground to keep negotiations open and to get people to like you, because it’s helpful later.
We could even imagine cases where people underwent homeopathic treatments and saw improvements in their symptoms for that reason. The placebo effect is often a real thing, and is most effective when you don’t believe what you’re taking is a placebo.
If it were possible to keep homeopathy from being inexplicably muddled up with non-evidence-based naturopathy (where your treatment may have negative side effects), unfortunately mixed up with anti-”allopathy” (where you forgo a more medically-effective treatment), or inescapably tied to anti-epistemology in general, it might even be a net good on its own.
If anyone has found that the placebo effect isn’t real, making scientific history by publishing your discovery might be of higher utility than downvoting my outdated information.
The placebo effect is complicated. See e.g. this.
True. But am I just being biased when I interpret that as support for my claim? “Sham acupuncture” and even placebo pills given to people who are told they’re taking placebos both show significant positive effects. I’d be very surprised if placebo pills given to people who are told they’re taking real “homeopathic” medicine didn’t show real effects too.
What is your claim, precisely?
Sure, giving homeopathic pills to people is likely to make them feel better via placebo. But by the same reasoning, this will also work for voodoo rituals, holy water, and mind rays from outer space.
I’m not sure I know what point you meant to make by this.
I read Franklin’s advice as applying, and intending to be applied, quite readily in those cases where one’s interlocutor is totally and clearly wrong. The idea is that you take a certain roundabout approach to telling them that they’re wrong, without quite coming out and saying it straight out. The fact that they are wrong need not be in question; it’s merely a matter of which tactics are effective in convincing them. (The assumption, of course, is that you’re interested in convincing them.)
In any case, I am unsure in what sense your comment is a response to what I said… could you clarify?
The way I read Franklin’s quote is that if someone says “well, (factual statement X) is true, and from it I draw (unwarranted conclusion Y)”, we should claim to agree with him (because we agree with X) and act as though drawing conclusion Y is a minor flaw in his theory that doesn’t negate the fact that he’s basically correct.
But he’s not basically correct. He did invoke X, and X is true, but to say that he’s right, or even partially right, means he’s right about a substantial part of his argument, not that he’s based it on at least one statement that is true. A homeopath doesn’t become partly right just because he says “well, vaccines work by using a tiny amount of something to protect against it, so perhaps homeopathy can also use a tiny amount of a substance to protect against it”, even if the statement about vaccines is literally correct.
What do you think of the following?
‘If the data is good, but the argument is not, argue the argument (e.g. by showing that it doesn’t hold water). Don’t argue about the conclusion and point to the bad argument as evidence.’ (not a rationality quote, just curious about your reaction)
I think that is not what Franklin was saying.
David Chess
I don’t think there’s such a thing as “unmediated experience of the world”.
(I like the quotation a lot for giving a plausible, lucid reason why Zen might spurn the usual sort of analytical discourse. But it’s so clear an explanation of an idea that I think it’s revealed a basic problem with the idea, namely that it points towards a non-existent goal.)
There is such a thing as a less mediated experience of the world.
Can you give some examples of more and less mediated experiences?
That’s an interesting question—“mediated” should probably be modified by “of what?” and “by what?”.
It’s definitely possible for perceptions to become less mediated by focusing on small details so that prototypes aren’t dominant. It’s possible to become a lot more perceptive about color, and Drawing on the Right Side of the Brain is about seeing angles, lengths, shading, curves, etc. rather than objects and thus being able to draw accurately.
If you get some distance on your emotions through meditation and/or CBT, is your experience of your emotions less mediated? More mediated? Wrong questions? I think meditators assume that the calm you achieve is already there—you just weren’t noticing it until you meditated enough, so your emotions are more mediated and your calm is less mediated, but now that I’ve put it into words, I’m not sure what you would use for evidence that the calm was always there rather than created by meditation.
Thank you for the evidence that it’s possible to get 12 karma points for something that doesn’t exactly make sense.
Reasoning inductively rather than deductively, over uncompressed data rather than summaries.
Mediated: “The numbers between 3 and 7”
Unmediated: “||| |||| ||||| |||||| |||||||”
It’s like neutrality on Wikipedia. You’ll never attain neutrality, but there is such a thing as less and more, and you want to head in the “more” direction.
I think I see what you mean; if I mentally substitute “is closer to an” for “involves the”, and “that state would have” for “that state has”, the practice the quotation describes makes more sense to me. (I’m leery of the idea that it’s better to head in the direction of less mediation — taking off my glasses doesn’t give me a clearer view of the world — but that’s a different objection.)
So while the original quotation talked about not thinking at all, your revised version urges that we think as little as possible. How does it qualify as a “rationality quote”?
It can be rationally beneficial to realise how much mediation is involved in perception, in the same way it is useful to replace naive realism with scientific realism.
Relatively unmediated perception is also aesthetically interesting, and therefore of terminal value to many.
You tell me; I have to squint pretty hard to make it read as telling me something useful about rationality.
Because? People who claim it are lying? You don’t have it, and your mind is typical?
Or maybe they and satt mean different things by “unmediated”.
Because causal mechanisms to relay information from the world to one’s brain are a necessary prerequisite for “experience of the world”, so one’s “experience of the world” is always mediated by those causal mechanisms.
And it’s not possible for just the cognitive mechanisms to shut down, and leave the perceptual ones?
If you shut down the cognitive mechanisms completely, would you even remember what you have perceived? Or even that you have perceived something?
Maybe not. That matches some reports of nonordinary experience.
I doubt it’s possible. I’m sceptical that one can cleanly sort every experience-related bodily mechanism into a “cognitive” category xor a “perceptual” category. Intuitively, for example, I might think of my eyes as perceptual, and the parts of my brain that process visual signals as cognitive, but if all of those bits of my brain were cut out, I’d expect to see nothing at all, not an “unmediated” view of the world — which implies my brain is perceptual as well as cognitive. So I expect the idea of just shutting down the cognitive mechanisms and leaving the perceptual mechanisms intact is incoherent.
(Often there’re also external physical mechanisms which are further mediators. You can’t see an object without light going from the object to your eye, and you can’t hear something without a medium between the source and your ear.)
So are people who claim unmediated experience lying?
Or using a different definition of “unmediated”, or confused about their experience, or...
My best guess is that the vast majority of them are sincere. Being correct vs. being a liar is a false dichotomy.
So are they sincerely mistaken about what they think unmediated experience is, or about what you think it is?
(Presumably your first “that” is meant to be a “what”?) That question implies a false dichotomy too. The mistaken people might not be mistaken about what anyone thinks unmediated experience is; perhaps everyone pretty much agrees on what it is, and the mistaken people are simply misremembering or misinterpreting their own experiences.
This conversation might be more productive if you switch from Socratic questioning to simply presenting a reasonable definition of “unmediated experience” according to which unmediated experience exists. After all, your true objection seems to be that I’m using a bad definition.
Anybody can be wrong about anything. That isn’t an interesting observation, because it is general. Earlier you gave a specific reason, which you think is empirical, and I think is partly conceptual.
There are also people who claim that they feel God’s presence in their heart, you know.
I believe them. I don’t believe in God, but I do believe that it’s possible to have the subjective experience of a divine presence—there’s too much agreement on the broad strokes of how one feels, across cultures and religions, for it to be otherwise. Though on the other hand, some of the more specific takes on it might be bullshit, and basic cynicism suggests that some of the people talking about feeling God’s presence are lying.
Seems reasonable to extend the same level of credulity to claims about enlightenment experiences. That’s not to say that Buddhism is necessarily right about how they hash out in terms of mental/spiritual benefits, or in terms of what they actually mean cognitively, of course.
I don’t disagree with any of that. Who knows, could be even one and the same experience which people raised in one culture interpret as God’s presence, and in another as enlightenment.
The research summarized in this book seems to suggest that this is indeed the case.
And people who claim to see cold fusion and canals on mars.
There is a happy medium between treating empirical evidence as infallible, and dismissing it as not conforming to your favourite theory.
Words are used to point to places. The thing that comes to your mind when you hear the words “unmediated experience of the world” might not exist. That doesn’t mean that there aren’t people who use that phrase to point to something real.
Couldn’t you say exactly that to anyone who doubts the existence of anything?
You could. And the way to resolve a dispute over the existence of, say, unicorns, would be to determine what is being meant by the word, in terms of what observations their existence implies that you will be more likely to see. Then you can go and make those observations.
The problem with talk of mental phenomena like “unmediated perception” is that it is difficult to do this, because the words are pointing into the mind of the person using them, which no-one else can see. Or worse, the person isn’t pointing anywhere, but repeating something someone else has said, without having had personal experience. How can you tell whether a disagreement is due to the words being used differently, the minds being actually different, or the words and the minds being much the same but the people having differing awareness of their respective minds?
This is a problem I have with pretty much everything I have read about meditation. I can follow the external instructions about sitting, but if I cannot match up the description of the results to be supposedly obtained with my experience, there isn’t anywhere to go with that.
The assumptions in that sentence are interesting. It presupposes that a debate is an interaction where you compete against the other person by proving them wrong. I’d rather offer a friendly way to improve understanding. Whether or not the other person accepts it is their choice.
In cases like this it’s very useful to think about what people mean with words and not go with your first impression of what they might mean.
I don’t think so. I just meant to point out that what you said was a triviality. If you intended it as a protreptic triviality, that’s fine, I have no objection and that’s justification enough for me.
Could you define what you mean with “triviality”?
I mean something which follows from anything. I don’t intend it as a term of disapprobation: trivialities are often good ways of expressing a thought, if not literally what was said. If you intended this: “In cases like this it’s very useful to think about what people mean with words and not go with your first impression of what they might mean” then I agree with you, and with the need to say it. I just missed your point the first time around (and if you were to ask me, you put the point much better when you explained it to me).
Yes, that’s roughly what I mean. However, there might be no way for you to know what they mean if you lack certain experiences.
If a New Agey person speaks about how the observer effect in quantum physics means X, his problem is that he doesn’t have any idea what “observer” means for a physicist. Actually getting the person to understand what “observer” means to a physicist isn’t something that you accomplish in an hour if the person has a total lack of physics background.
The same is true in reverse. It’s not straightforward for the physicist to understand what the New Agey person means. Understanding people with a very different mindset than you is hard.
You seem to be saying two things here:
This entails that it is possible to simply explain what you mean, even across very large inferential gaps.
Yet here you seem to entertain the idea that it’s sometimes impossible to explain what you mean, because a certain special experience is necessary.
I endorse the first of these two points, and I’m extremely skeptical about the second. It also seems to me that physicists tend to hold to the first, and new agers tend to hold to the second, and that this constitutes much of the difference in their epistemic virtue.
I said impossible in an hour, not impossible in general. It simply might take a few years. There’s a scene in Neuromancer where at the end one protagonist asks the AI why another acted the way they did. The first answer is: it’s unexplainable. Then the answer is: it’s not really unexplainable, but would take 37 years to explain. (My memory of the exact number might not be accurate.)
On the other hand, teaching new phenomenological primitives is extremely hard. It takes more than an hour to teach a child that objects don’t fall because they are heavy but because of gravity. Yes, you might get some token agreement, but when you ask questions the person still thinks that a heavy object ought to fall faster than a light one, because they haven’t really understood the concept on a deep level. In physics education these are called phenomenological primitives.
You can’t explain to a blind man what red looks like. There are discussions that are about qualia.
No, they think that a heavy object ought to fall faster than a light one because that’s how it actually works for most familiar objects falling through air.
If you’ve just been telling without demonstrating, this is pure reliance on authority.
(Or taking a hypothetical seriously.)
An important factor is just understanding the details of how everything supposedly fits together. Even if you don’t know from observation that it’s the way things work in our world, there is evidence in seeing a coherent theory, as opposed to contradictory lies and confusion. Inventing a robust description of a different world is hard, more likely it’s just truth about ours.
Empty water bottles don’t exactly fall faster than full water bottles.
But my point isn’t about whether you rely on authority or don’t, but about how people actually make decisions. There’s literature on phenomenological primitives in physics.
The one time we tested the theory of gravity experimentally in school, I did not get the numbers that the Newtonian formula predicted. At the same time, I don’t think those formulas are wrong. I believe them because smart people tell me that they are true and I don’t care enough about physics to investigate the matter further.
Through air full water bottles do fall faster than empty ones.
LOL. “Who are you going to believe, me or your lying eyes?”
A bit, maybe, but I think they should have roughly the same speed. How much faster do you think they would fall?
Sometimes you have to make hard choices...
There was a time when I thought it was about picking sides and being for empiricism or against it. I’m well past that point. There are times when believing the authority is simply the right choice.
If the fall is sufficiently long, they reach different terminal velocities, which are proportional to the square root of their masses.
According to Teh Interwebz, an average 0.5 litre empty plastic bottle weighs about 13 g. A full bottle weighs 513 g. Therefore, at terminal velocity it falls about 6.3 times faster.
What does sufficiently long mean in practice?
It depends on the drag coefficient and forward projected surface area of the bottle. My mildly informed guess is that it would take between 20 and 30 seconds.
EDIT:
Actually, I’ve just tried dropping 1.5 litre bottles from a height of about 1.8 m. Even though the fall lasts perhaps one second, the empty bottle starts to tumble much more than the full one, and hits the ground noticeably later.
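For anyone who wants to check the arithmetic, here’s a minimal Python sketch of the terminal-velocity estimate. Only the two masses come from the comments above; the drag coefficient and frontal area are my own rough assumptions:

```python
import math

g = 9.81        # gravitational acceleration, m/s^2
rho = 1.225     # sea-level air density, kg/m^3
c_d = 0.5       # assumed drag coefficient for a tumbling bottle
area = 0.0033   # assumed frontal area of a 0.5 litre bottle, m^2

def terminal_velocity(mass_kg):
    """v_t = sqrt(2 m g / (rho * c_d * A)) for quadratic drag."""
    return math.sqrt(2 * mass_kg * g / (rho * c_d * area))

v_empty = terminal_velocity(0.013)  # 13 g empty bottle
v_full = terminal_velocity(0.513)   # 513 g full bottle
print(v_full / v_empty)             # ~6.3, i.e. sqrt(513/13)
```

The ratio is independent of the assumed drag coefficient and area as long as both bottles share them; only the absolute velocities depend on those guesses.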
In epistemic matters? I don’t think so.
Information isn’t free, and there are many cases where gathering more information is too expensive and you have to go with the best authority that’s available.
On the other hand it’s worthwhile to be conscious of the decision that one makes in that regard. Most people follow authorities for all the wrong reasons.
“Because of gravity” isn’t any better an explanation than “because they are heavy”. Why does “gravity” accelerate all masses the same? Really thinking about that leads to general relativity, so it actually takes many years to explain why things fall, and it can’t be done without going through calculus, topology, and differential geometry.
Cf. Feynman on explanations (07:10–09:05).
Just being able to recite “because of gravity” is not enough for many purposes. I myself did well in physics at school and finished best in class, but I haven’t studied any physics since then and I’m well aware that I don’t understand advanced physics.
It’s not perfect but it is better. Airplanes fly well based on Newtonian physics.
You can construe the goal as non existent, but that is an uncharitable reading.
Whether the goal exists is an empirical question, no...? I don’t understand where (a lack of) charity enters into it.
The principle of charity relates to what people mean by what they say. Unmediated experience might be empirically nonexistent under one interpretation of “unmediated” but not under another. If someone claims to have had unmediated experience, that is evidence relating to what they mean by their words.
I see. What more charitable interpretation of “unmediated experience” would you prefer?
Maybe the PoC would be an easier sell if it were phrased in terms of the “typical semantics fallacy”.
- Herman Chernoff (pg 34 of Past, Present, and Future of Statistical Science, available here)
Actually, if you do this with something besides a test, this sounds like a really good way to teach a third-grader probabilities.
Experiments can fail if they are executed or planned improperly. If both the control and the experimental group are given sugar pills, for example, or the equipment fails in a shower of sparks, the experiment has provided no evidence by which one can update. It is a small quibble, and probably not what the quote meant to illustrate (I’m guessing that the experiment provided evidence which downgraded the probability of the hypothesis), but something to note nonetheless: experiments are not magic knowledge-providers.
I think Ferguson would call those “results,” and from those you would have learned about performing experiments, not about the original hypothesis you were interested in.
If anything, I think a really failed experiment is one that makes you think you’ve learned something that is in fact wrong, which is the result of flaws in the experiment that you never become aware of.
Ferguson’s proposed new language is a downgrade. Being unable to identify something as a failure when the outcome sucks is fatalism and not particularly useful.
— Robert Morris, quoted in Brian Snow’s “We Need Assurance!”
I tend to disagree. I have done some things which I thought of as experiments but came to no clear conclusion after the experiment and analysis. On rewriting the thesis it turned out there were a lot more implicit assumptions inside the hypothesis than I was aware of. I think it was a badly designed experiment and it was rather unproductive in retrospective analysis. I suppose one could argue that it brought the implicit assumptions to light and that was a useful result. Somehow (not sure how or why) I find that a low standard for considering something an experiment.
An experiment is supposed to teach you the truth. If you run the experiment badly and, say, get a false positive, then the experiment failed.
“Man is not going to wait passively for millions of years before evolution offers him a better brain.”
--Corneliu E. Giurgea, the chemist who synthesized Piracetam and coined the term ‘Nootropic’
-- Scott Aaronson
The same is true for a lot of intellectual concepts outside of math.
If only we could put together, say, a four-year college degree course intended to have this effect …
I think that’s a super idea. I’d like to design it and I’d like to take it. The ideas that underlie everything else. Like a whole university course devoted to A-level maths, but covering every simple underlying idea. We should start by trying to work out what the syllabus should be.
(one 16 lecture course on each topic, and we’ll have three courses per term so that’s 36 courses in total)
Off the top of my head we should have: groups, calculus, dimensional analysis, estimation, probability (inc bayes), relativity, quantum mechanics, electronics, programming, chemistry, evolution, evolutionary psychology, heuristics and biases, law, public speaking, creative writing, economics, logic, game theory, game-of-life, how-to-win-friends-and-influence people, history, cosmology, geography, atomic theory, molecular biology …
All taught with immediate direct applications to actual things in the immediate environment and if you can’t come up with simple examples that a child would find interesting and could understand then it doesn’t make the cut.
Any more suggestions? If we get loads let’s make a post on ‘The ideal 4-year university course’.
The joke was that this is precisely what a liberal arts degree was meant to be; the main problem is that liberal arts degrees haven’t kept up with the times.
Here’s a related post, though it doesn’t have that many suggestions: http://lesswrong.com/lw/l7/the_simple_math_of_everything/
What like?
For my part, I’ve found the economic notions of opportunity cost and marginal utility to be like this.
That’s maths too.
The specific application of the math does add value.
Most obviously, for opportunity costs, on the math side you only have to understand the “minus” symbol, which pretty much everyone already does. With marginal utility you have to understand the “derivative”, but you still have to apply it in a situation outside of math class.
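A tiny illustration of both ideas, with an assumed logarithmic utility function (the wealth numbers are arbitrary):

```python
import math

def utility(wealth):
    """Assumed log utility, purely for illustration."""
    return math.log(wealth)

# Opportunity cost: a subtraction -- the value of the best forgone
# alternative minus the value of the option you chose.
opportunity_cost = utility(1100) - utility(1050)

# Marginal utility: a derivative, approximated numerically here.
# For log utility it works out to 1/wealth.
h = 1e-6
marginal_utility = (utility(1000 + h) - utility(1000)) / h  # ~0.001

print(opportunity_cost, marginal_utility)
```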
It’s applied math, not the pure math that the OP was talking about. Furthermore, these can be useful ideas even when used purely qualitatively; then it’s not even applied math (except in a sense that everything is math, if we make the math sufficiently imprecise).
“Nothing in Biology Makes Sense Except in the Light of Evolution”
— Theodosius Dobzhansky
The fact that a theory that can be stated in ten words frames an entire discipline is quite incredible. Compared to group theory and probability, it sure seems like an easier uploading process as well.
What are the ten words or less in which evolution can be stated?
“Multiply, vary, let the strongest live and the weakest die.”
-Charles Darwin, The Origin of Species
I think that Darwin would himself acknowledge that “fittest” is a more accurate rendition than “strongest,” but whether the quote can be rendered in this way without breaking the ten words constraint comes down to a question of whether “unfittest” counts as a legit word.
I think “fit” has become a free-floating standard rather than meaning “fitting into a particular environment”.
Maladapted, as an adjective? Though I suppose that’s cheating a bit since it’s a sense of adaptation that draws on an evolutionary metaphor.
warped by random change
what replicates stays around
always evolving
(More constraints! More constraints!)
change without motion
the lament of the red queen
coevolution
Natural Selection: the differential survival of replicators with heritable variation.
“We have what replicated better; noise permanently affects replicative ability”?
“Mathematics is about proving theorems based on axioms and other theorems” also frames a whole discipline.
A frame tells you something about a discipline, but it doesn’t tell you everything.
A good deal of the sequences seem to fall in this category. Conservation of expected evidence, for instance.
When it comes to general concepts, cybernetics is something to which a lot of people on LW don’t have much exposure, and cybernetics is as central as probability theory for understanding how the world works.
Basically any subject in which I invested a decent amount of thought produces lessons that are applicable to other topics. I even learned a lot in an activity like Salsa dancing that’s useful in other contexts.
What introductory material about it would you recommend?
Unfortunately I don’t have a good recommendation. Formally I learned about it in a physiology lecture at university and the professor said that there isn’t a good textbook that he could use to teach us.
While searching around I found An Introduction to Cybernetics by Ross Ashby. It might not be perfect, but I think it’s probably a good enough introduction.
-- Joseph P. Simmons, The Reformation: Can Social Scientists Save Themselves
Voted up for the linked article more than for the quote.
Captain James Tiberius Kirk dodging an appeal to nature and the “what the hell” effect, to optimize for consequences instead of virtue.
That clip is a brilliant example of Shatner’s much-mocked characteristic acting-speak.
-- Carlos Bueno, Mature Optimization, pg. 14. Emphasis mine.
“I refuse to answer that question on the grounds that I don’t know the answer.”
― Douglas Adams
I like this quote, but it occurs to me that “I don’t know” is often a reasonable answer to a question.
How about this:
“I refuse to answer that question on the grounds that I can’t think of an answer which I am confident will not put me in a negative light.”
That just seems like overly honest politicking to me.
“All the world is a political campaign. And the men and women are merely politicians.”
-- me right now
P.S. “overly honest” is kinda the point of the joke.
— Errol Morris
-- Alan Lightman
Every 100 million years or so, an asteroid or comet the size of a mountain smashes into the earth, killing nearly everything that lives. If ever we needed proof of Nature’s indifference to the welfare of complex organisms such as ourselves, there it is. The history of life on this planet has been one of merciless destruction and blind, lurching renewal.
Sam Harris, Mother Nature is Not Our Friend, in response to the Edge Annual Question 2008
http://www.samharris.org/site/full_text/the-edge-annual-question-20081#sthash.IBMyMOQN.dpuf
Accident, n. An inevitable occurrence due to the action of immutable natural laws.
Ambrose Bierce, The Enlarged Devil’s Dictionary, compiled and edited by Ernest J. Hopkins
“The power of accurate observation is commonly called cynicism by those who have not got it.” -George Bernard Shaw
Or naivety, depending on how cynical the critic is.
And of course, inaccurate observations are commonly called cynical and/or naive as well...
Real probabilities about the structure and properties of the cosmos, and its relation to living organisms on this planet, can be reach’d only by correlating the findings of all who have competently investigated both the subject itself, and our mental equipment for approaching and interpreting it — astronomers, physicists, mathematicians, biologists, psychologists, anthropologists, and so on. The only sensible method is that of assembling all the objective scientifick data of 1931, and forming a fresh chain of partial indications bas’d exclusively on that data and on no conceptions derived from earlier and less ample arrays of data; meanwhile testing, by the psychological knowledge of 1931, the workings and inclinations of our minds in accepting, connecting, and making deductions from data, and most particularly weeding out all tendencies to give more than equal consideration to conceptions which would never have occurred to us had we not formerly harboured provisional and capricious ideas of the universe now conclusively known to be false. It goes without saying that this realistic principle fully allows for the examination of those irrational feelings and wishes about the universe, upon which idealists so amusingly base their various dogmatick speculations.
-- H.P. Lovecraft, Selected Letters, 1932-1934.
What’s with bas’d and dogmatick? Is Lovecraft aiming at some antique effect, or did he write in a non-standard dialect?
Yes and yes. Lovecraft was writing in early 20th century New England, but he typically affected the forms of late 1700s British English, or at least tried to. Partly this was for stylistic effect, but I get the sense that he also thought of his native idiom as intellectually debased.
The aesthetics of tradition were kind of a thing with Lovecraft, although in other ways he was thoroughly modern. Not that these affectations were exclusive to Lovecraft by any means; William Hope Hodgson for example wrote The Night Land (a seminal 1912 horror/SF story and notable Lovecraft influence) in an excruciating pseudo-17th-century dialect.
Good god, he did write everything like that!
Consider my priors for knowledge of Bayes-fu by wise predecessors to be significantly raised.
This is from Greg Egan’s 1999 novel Teranesia; since there are no hits for ‘Teranesia’ in the Google custom search, I’m inferring that it hasn’t been posted before.
Here’s a little background. This is a spoiler for some events early in the novel, but it is early; it’s not a spoiler for the really big stuff (not even in this chapter). So Prabir lives alone with his father (‘Baba’) and mother (and baby sister Madhusree who is not in this scene), and their garden has been sown with mines for some very interesting reasons that needn’t concern us, and Baba has discovered this by being blown up by one. But he’s still alive, so mother and Prabir have laid a ladder atop some boxes across the garden, and she’s crawled along the ladder to rescue Baba without setting off more mines. But this is harder than anticipated.
(taken from the American hardback edition, pages 50&51)
[Edit: grammar in the text written by me]
It is a good quote, and it works in context, but often it pays to (temporarily) believe that “what you’d like to be true” actually is, and try your hardest (or even the impossible) to figure out how you got there. “Yes, we can do it.” could be the first step toward figuring out the “how” part.
Blindsight by Peter Watts
Wikipedia:Don’t stuff beans up your nose
There is a shorter version :-)
“Kids, while we’re away, don’t lock the cat in the fridge”, said the parents.
“Ooooh, that’s a great idea”, said the kids...
That’s not necessarily a bad result. If he’s busy stuffing beans up his nose, then this might keep him out of greater trouble; everything else that’s listed before (and which apparently he did before) seems worse. That might be just what his mother planned.
I once had to go to the doctor so he could fish a Lego out of my nose. So, that was worse than eating all the cabbage or spilling all the milk, I think. More scary, and probably more expensive, depending on how the insurance worked out.
I think that shape, hardness, and solubility would all make a Lego brick worse than a bean.
Really, the only way to tell is probably to try it out. Who wants to volunteer for an experiment?
Camus, The Myth of Sisyphus
The worker is paid for his work, and with this money he obtains a roof over his head, food on the table, and the wherewithal to raise a family and to pursue other activities when he is not working. Sisyphus works for nothing and does nothing but work. That Camus sees, or affects to see, no difference between their situations says something about Camus, but nothing about work.
Is it truly different to work because the Gods have forced you, compared to working because the threat of starvation and homelessness has forced you?
I thought the quote was suggesting both tasks are equally arbitrary and pointless, though, rather than discussing compensation. It seems more interesting.
Yes, it is.
Some people have it harder than others, but we all work because the threat of starvation and homelessness forces us; except for those relying on the charity of friends and family (including deceased ones), or of institutions. The meat machines we live in require sustenance and shelter, without which we die, and these resources are provided either by our own work or by that of others. Death is free. Life has to be worked for.
Some are fortunate enough to have the abilities, health, energy, and social environment to be confident of always finding people to pay for whatever it is we want to direct our efforts towards. The wolves are so very far from our door that we can forget, or never realise, that they are out there, inching closer when we rest and retreating when we work.
So you can apply the story of Sisyphus to all of us, but only in the larger sense that we are forced to run all the while just to stay alive, and that only for 70 years or so. It applies just as much to Camus (whose Wiki page is rather uninformative about how he actually earned a living) as to the lowest factory worker.
We may, of course, daydream of a future in which we need care no more to clothe and eat. We may work to bring such a future about. But that is not the world we live in today, nor has it ever been, nor will it be for a very long time.
It is suggesting that, and, I say, it is wrong.
(One might argue that “the workman of today” is less likely to accomplish something meaningful, in the course of earning their living.)
Even if everything was meaningless—which it isn’t, in my opinion, but Camus does seem to have thought so—and everyone must work or starve—which, as you note, is not true because people are compassionate—surely that merely makes the comparison to Sisyphus that much more relevant? How does it undermine the quote?
Indeed, if it’s that hard to escape, surely comparing starvation to the inescapable will of the gods is that much more accurate?
That depends on the strength of one’s transhumanist faith. :)
One can repurpose Camus as much as Camus repurposes Sisyphus, but the original passage does go on to say, “Sisyphus, proletarian of the gods, powerless and rebellious...” So Camus is not talking about us all, certainly not intellectuals like himself, but about the proles.
I think there’s a non-negligible difference between “I push the same rock around every day, and there it is back in the exact place it started again” and e.g. “I push the same kinds of rock around every day, but last year’s are now embedded in the building we just finished.”
Camus may answer along the lines of “since [any ascribing of meaning] is absurd in the first place, if you think there’s objectively more meaning in the building you built than in the rock you pushed up, you’re not taking the premise seriously”. In a way we’re whistling in a dark forest.
Steven Pinker
This lacks a ring of truth for me.
A lot of folks seem to expect the science of human beings to reinforce their bitterness and condemnation of human nature (roughly, “people are mostly crap”). I kinda suspect that if you asked “sophisticated people” (whoever those are) to name some important psychology experiments, those who named any would come up with Zimbardo’s Stanford prison experiment and Milgram’s obedience experiments pretty early on. Not a lot of emotional uplift there.
As for the arts — horror films where everyone dies screaming seem to be regarded as every bit as lowbrow as feel-good comedies.
It’s not obvious that one is better off with the truth. Assume that for some desirable thing X:
P(X|I believe X will happen) = 49%
P(X|I believe X won’t happen) = 1%
It seems I can’t rationally believe that X will happen. Perhaps I would be better off being deluded about it.
Sorry, I don’t understand—why does the sum of probabilities not equal 100% in your example? I assume you missed a “5” in “P(X|I believe X won’t happen) = 1%” (i.e., it should be 51%).
But for what reason?
These probabilities are not required to sum to 1, because they are not incompatible and exhaustive possible outcomes of an experiment. More obvious example to illustrate:
P(6-sided die coming up as 6 | today is Monday) = 1/6
P(6-sided die coming up as 6 | today is not Monday) = 1/6
1/6 + 1/6 != 1
I think your example is not suitable for the situation above—there I can see only two possible outcomes: X happens or X doesn’t happen. We don’t know anything more about X. And P(X|A) + P(X|~A) = 1, isn’t it so?
No. You may have confused it with P(X|A) + P(~X|A) = 1 (note the tilde). In my case, either the 6-sided die comes up as 6, or it doesn’t.
Yes, either X happens or X doesn’t happen. P(X) + P(~X) = 1, so therefore P(X | A) + P(~X | A) = 1. Both formulations are stating the probability of X. But one is adjusting for the probability of X given A; so either X given A happens or X given A doesn’t happen (which is P(~X | A) not P(X | ~A)).
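A toy joint distribution makes the distinction concrete (a minimal sketch; the four numbers are made up purely for illustration):

```python
# Joint distribution over (X, A); the values sum to 1.
p = {
    ("X", "A"): 0.30, ("X", "~A"): 0.10,
    ("~X", "A"): 0.20, ("~X", "~A"): 0.40,
}

def cond(x, a):
    """P(x | a) = P(x, a) / P(a)."""
    p_a = sum(v for (xi, ai), v in p.items() if ai == a)
    return p[(x, a)] / p_a

print(cond("X", "A") + cond("~X", "A"))  # 1.0: P(X|A) + P(~X|A) is always 1
print(cond("X", "A") + cond("X", "~A"))  # 0.8: P(X|A) + P(X|~A) need not be 1
```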
When Pinker said “better off”, I assumed he included goal achievement. It’s plausible that people are more motivated to do something if they’re more certain than they should be based on the evidence. They might not try as hard otherwise, which will influence the probability that the goal is attained. I don’t really know if that’s true, though.
The thing may be worth doing even if the probability isn’t high that it will succeed, because the expected value could be high. But if one isn’t delusionally certain that one will be successful, it may no longer be worth doing because the probability that the attempt succeeds is lower. (That was the point of my first comment.)
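To make that concrete, here’s a toy expected-value calculation using the 49% and 1% figures from the comment upthread; the payoff and cost are assumed purely for illustration:

```python
payoff, cost = 100.0, 10.0  # assumed value of success and cost of trying

ev_if_believe = 0.49 * payoff - cost  # +39.0: worth attempting
ev_if_doubt = 0.01 * payoff - cost    #  -9.0: not worth attempting

print(ev_if_believe, ev_if_doubt)
```

Under these made-up numbers, the attempt is only worth making if you hold the epistemically unjustified belief that it will succeed, which is the uncomfortable point of the original comment.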
There could be other psychological effects of knowing certain things. For example, maybe it would be difficult to handle being completely objective about one’s own flaws and so on. Being objective about people you know may (conceivably) harm your relationships. Having to lie is uncomfortable. Knowing a completely useless but embarrassing fact about someone but pretending you don’t is uncomfortable, not simply a harmless, unimportant update of your map of the territory. Etc.
I’m not saying I know of any general way to avoid harmful knowledge, but that doesn’t mean it doesn’t exist.
― Terry Pratchett, The Wee Free Men (Discworld, #30)
If you want to use your selfishness to help others, then you’re not selfish.
Do we really need to go into the question what “selfishness” actually means? In ordinary situations I’d say that “the actual altruist [is] whichever one actually holds open doors for little old ladies”; maybe in certain situations we need different words to specify whether they do so because it’s in their own utility function or because of religious/game-theoretical/superrational/acausal/whatever-they-call-it-these-days reasons, but...
I don’t think this is just a problem with definitions. This is fake morality.
She’s giving a fake justification for helping others as her own self interest. Someone who finds a way to justify buying a million dollar laptop is clearly just being selfish and doesn’t really care about their claimed morality of altruism. Similarly, someone who tries to justify helping others is clearly just being altruistic and doesn’t really care about their claimed morality of selfishness.
Of course you’re not. But human nature is supposedly selfish, and if your true goals are altruistic, you will have to find a way to turn it around.
Emphasis on “supposedly”, since the popular hypotheses about “selfish human nature” are far too simplistic to reflect any actual results of psychological research.
Of course they are. Unlike those about Pratchett’s witches, though. They reflect the ‘locally-selfish-globally-altruistic’ concept surprisingly well.
Selfishness seems to be referred to primarily as a mindset or attitude, helping others as an outcome. I think they can co-exist at the same time, for example Adam Smith’s invisible hand in capitalism.
I’m not saying that your selfishness can’t result in others being helped. I’m saying that if you’re trying to figure out how to use your selfishness to help others, then helping others is clearly your goal, which proves you’re not selfish. If you’re willing to game the system to help others, then you’d be willing to help others without gaming the system.
If you are selfish (this usually will cash out as “you alieve that selfishness is good”) but believe it is virtuous or beneficial to act unselfishly, then you would rightly seek ways to act in ways that feel locally selfish but have unselfish consequences.
You have a left parenthesis but no matching right parenthesis.
I have now fixed this serious issue. (Is this sarcasm? You Decide!)
Shouldn’t that be
?
I considered that but decided it was needlessly cruel. And now you did it for me, so I get the best of both worlds.
Now that I can understand your sentence:
If you are selfish, but believe it is virtuous to act unselfishly, then you’ll seek ways to act in ways that look unselfish, but have selfish consequences.
Tiffany seems to be an altruist who thinks she’s supposed to be selfish, and is trying to justify acting altruistically as somehow being selfish.
You’re describing someone who believes it is beneficial to look unselfish but not be unselfish.
If you are selfish, but have reasoned out that helping others is the correct goal to have, you would believe not that it is beneficial to look unselfish, but that it is beneficial to act unselfishly. And if you believe that but do not alieve it, System 2 would look for ways to do unselfish things that System 1 would perceive as selfish, so as to better motivate yourself toward those goals.
Making your identity small is wisdom...
Making your identity large is....?
...witchcraft?
No. Try a monosyllable.
″...on”?
Damn, that’s tricky. Only boring monosyllables come to mind, like “good”. ”...power” is almost a monosyllable if you say it fast enough, though.
Oh! I know.
″...life”.
Getting warmer …
Oh, so I am to seek the one true answer, not optimise for the most badass one? Belch… I’ve got nothing. Why is this conversation getting downvoted anyway?
CEV, in this case.
Scott Aaronson in reply to the statements like “A stone is conscious to the “inputs” of gravity and electrostatic repulsion”
I’m not sure Scott isn’t just falling victim to the sorites paradox here. There are lots of macroscale definitions which seem to break down at their smallest application, and it’s not immediately obvious that consciousness couldn’t be one of them.
The question is whether to interpret such a falling apart of a definition (which I take to mean that related decision problems cannot be clearly answered anymore) as an inherent or even necessary attribute of concepts which ‘live’ at a macroscale, or as a weakness of said definition, as a sign that we’re mistaking a fuzzy word cloud for a precisely defined set.
Hmm. I see his point, I think, but I … think it does mean that, actually. Without fully understanding the definition, you should be less sure that a better understanding wouldn’t classify them differently.
Picture a slave-owner saying something similar about a slave, for instance. Slave-owners were even more confused than we are about personhood, and I think it’s clear that they weren’t “crystal clear on what [isn’t] a person”, in retrospect.
Sure, there are debatable cases. But there are also clear-cut ones, like “a bacterium, while alive, has no personhood”, and if your model predicts that it has more personhood than a human (as IIT does for consciousness for a certain 2D configuration), then you should not call the thing your model says the bacterium has more of “personhood”.
It reminds me of Justice Potter Stewart: “I know it when I see it!”
Well, it’s the converse, which seems a lot more useful a criterion to me.
Nassim Taleb
--Kevan Lee
Patrick McKenzie on why having a publication date on your blog entry devalues it.
(Link to the, er, “content”.)
And yet books always have a publication date.
ETA: as do scientific articles, of course, and the date really matters, not because of being “up to date” but because the date gives some context to whatever it is.
As far as books go:
I’m curious if showing a date is as bad as he thinks; he doesn’t mention ever A/B testing the claim himself. (I’d test it on my site, except the date is already buried in the sidebar to the point where many people miss it, so I wouldn’t expect much of a difference.)
I predict yes, but if I’m reading his position right showing the date is just a symptom of not having a Long Content focus, which is what he’s really arguing for in that article (and which your site already has in spades).
If the problem is focusing on short-term writing which becomes worthless quickly, then simply hiding or showing dates shouldn’t much affect how long readers stay on the page: most short-term stuff shows its colors very quickly. (How many sentences does it take to figure out you’re not interested in a rant about John Kerry from 2004?)
I think McKenzie’s argument is that using a date can turn long content into short content, which many people do by accident, and while he doesn’t quantify it (which would be the value of A/B testing) I think he has enough evidence to establish the direction of the effect. Not using a date is obviously not sufficient to turn short content into long content, but I do think it may be helpful at getting one into the right state of mind, as it focuses the attention on sorting things by content rather than time. (Imagine trying to find all of Robin Hanson’s writing on construal level theory: yes, you can use the nearfar tag on Overcoming Bias, but that’s sorted by date, and there’s no solid introduction.)
That’s a good example of how weak date markers are: if the dates were deleted completely from every OB post, people would still find them incomprehensible because there’s only one post which could be considered an overview of the concept, and is a needle in the haystack until and unless Hanson in some way synthesizes all his scattershot posts and allusions into a single Near-Far page.
The posts need some sort of organization imposed; the lack of that organization is what kills them, not some date markers. If my essays were broken up into 500-word chunks, and sorted either randomly or by date, they wouldn’t look much better.
To expand on this a bit: he gives the following supporting example:
I wanted to give this a fair shake, but it reads like McKenzie has never heard of journalism.
He’s writing for an audience that sells software as a service (SaaS). Why would he give journalism more than a disclaimer (which he does include)?
He might be writing for an SaaS audience, but he’s writing about the blog format, which is built to facilitate crowdsourced magazine journalism or editorial-style content. Now, he’s quite right that the format’s poorly suited to long-form or reference-style content, but starting a post with “let’s talk about blogging” and proceeding to talk about all the ways it sucks for those content types, without much more than a word for its intended purpose, strikes me as a pretty serious omission.
If instead he’d framed it as “blogs are often misused”, then we wouldn’t be having this conversation. But that’s not where we’re standing.
What makes it serious? What purpose does including journalism in the article serve?
-- Max H. Bazerman
Context? I can randomly replace elements of this by their opposites and get something that sounds just as truthy.
Try it!
“[Because/although] [positive/negative] [illusions/perceptions] provide a [short/long]-term [benefit/cost] with [larger/smaller] [long/short]-term [costs/benefits], they can [become/avoid] a form of [emotional/intellectual] [procrastination/spur to action].”
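If you want to play the flipping game mechanically, here’s a short sketch; each pair below is lifted straight from the bracketed template above:

```python
import random

choices = [
    ("Because", "Although"), ("positive", "negative"),
    ("illusions", "perceptions"), ("short", "long"),
    ("benefit", "cost"), ("larger", "smaller"), ("long", "short"),
    ("costs", "benefits"), ("become", "avoid"),
    ("emotional", "intellectual"),
    ("procrastination", "spur to action"),
]
template = ("{} {} {} provide a {}-term {} with {} {}-term {}, "
            "they can {} a form of {} {}.")
print(template.format(*(random.choice(pair) for pair in choices)))
```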
It’s from a book on decision-making, in a section on motivational biases. Bazerman discusses the evidence that positive illusions help (“[research] suggest[s] that positive illusions enhance and protect self-esteem, increase personal contentment, help individuals to persist at difficult tasks, and facilitate coping with aversive and uncontrollable events” is a short sample), talks about clusters (unrealistically positive views of the self, unrealistic optimism, illusion of control, self-serving attributions, and positive illusions in groups and society), and then the quote is from a section labeled “Are Positive Illusions Good for You?”. Here’s the full paragraph it is from:
It looks to me like doing an odd number of flips is often silly. (“Because positive illusions typically provide a long-term cost with larger long-term costs, they can avoid a form of emotional procrastination.” What?)
“Because positive illusions provide a short-term benefit with smaller short-term benefits, they can become a form of intellectual procrastination.”
-- Lucien Zell (can’t find an authoritative attribution)
I’m really not clear on what this is actually supposed to be a metaphor for.
It’s clearly not something you would literally want to do, since the night is temporary and the light provided by the map is dim and brief. But maybe this is a metaphorical long-lasting night and bright burning map?
Destroying something that would be useful or even necessary in the future so that you can better get through, or perhaps survive, the present.
Going to the same college as your high school sweetheart for example. Perhaps it will work out and you won’t need the map.
I’m sure this has been discussed before, but my attempts at searches for those discussions failed, so...
Why is this thread in Main and not Discussion?
Tradition, I guess.
In the Age of Sequences, Eliezer sometimes posted rationality quotes, in the article text (1, 2, 3, etc.). Things written by Eliezer in that era are probably automatically considered Main-level. And the new Rationality Quotes threads don’t seem worse than the traditional ones—if we look at the highly voted quotes.
Well, Discussion didn’t exist back then.
Last month I posted the rationality quotes in discussion. Someone complained and said it belonged in main so I moved it there. This month I just started it in Main.
-- Marcus Aurelius, Meditations, pg. 76
-- Scott Adams
Context: In a short video, a woman throws out an old desk lamp. The music and cinematography are contrived such that the viewer feels tempted to feel sorrow on behalf of the lamp. Then a man walks up and addresses the camera with:
Ikea commercial. Video here
A good example of the difference between fuzzies and utilons.
In an opinion piece in the Boston Globe called “At MIT, the humanities are just as important as STEM” by Deborah K. Fitzgerald, Apr 30, 2014
The slashdot poster AthanasiusKircher goes on to ask
See slashdot post
Some of these things are not like the others...
Which are the odd ones out?
To a first approximation:
{ critical thinking skills; an ability to work with and interpret numbers and statistics; a willingness to experiment, to open up to change }
vs.
{ knowledge of the past and other cultures; access to the insights of great writers and artists }
Then you’ve got this one by itself because what the heck does it even mean:
{ the ability to navigate ambiguity }
{ the ability to navigate ambiguity }
I think this is one of the most important skills you get from the humanities. I have a friend who’s a history professor. He’s very used to hearing 20 different accounts of the same event told by different people, most of whom are self-serving if not outright lying, and working out what must actually have gone on, which looks like a strength to me.
He has a skill I’d like to have but don’t, and he got it from studying history (and playing academic politics).
How did he know that his judgment of what actually had gone on was correct? How did he verify his conclusion?
Statistics is precisely that, but with numbers.
That only works if you have numbers.
Luckily, you can make numbers.
“Making numbers” is unlikely to produce useful numbers.
Not necessarily.
Relevant Slate Star Codex post: “If It’s Worth Doing, It’s Worth Doing With Made-Up Statistics”
“Making” is not “making up”.
When you flip a coin a bunch of times and decide that it’s fair, you’ve made numbers. There are no numbers in the coin itself, but you reasonably can state the probability of the coin coming up heads and even state your certainty in this estimate. These are numbers you made.
As a more general observation, in the Bayesian approach the prior represents information available to you before data arrives. The prior rarely starts as a number, but you must make it a number before you can proceed further.
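To make the coin example concrete, here is a minimal Python sketch of the Bayesian version (my own illustration; the bias of 0.55 and the flip count are invented), using only the standard library. It starts from a uniform Beta(1, 1) prior, the “number” you make before any data arrives, and updates it with each flip:

```python
import random

# The prior: a uniform Beta(1, 1) over the coin's heads-probability.
a, b = 1.0, 1.0

true_p = 0.55        # the coin's actual bias -- hidden from the observer
random.seed(0)
for _ in range(200): # flip the coin and update the prior with each outcome
    if random.random() < true_p:
        a += 1
    else:
        b += 1

# Posterior Beta(a, b): mean and standard deviation in closed form.
mean = a / (a + b)
sd = ((a * b) / ((a + b) ** 2 * (a + b + 1))) ** 0.5
print(f"estimated P(heads) = {mean:.3f} +/- {sd:.3f}")
```

Both the point estimate and the stated uncertainty are numbers produced by the procedure, not numbers sitting inside the coin, which I take to be exactly the point under dispute.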
No, those are numbers you found. The inherent tendency to produce numbers when tested in that way (“fairness/unfairness”) was already a property of the coin; you found what numbers it produced, and used that information to derive useful information.
Making numbers, on the other hand, is almost always making numbers up. Sometimes processes where you make numbers up have useful side-effects, but that doesn’t mean that making numbers is at all useful.
Basically, I think it’s important to distinguish between finding numbers which encode information about the world, and making numbers from information you already have. Making numbers may be a necessary prerequisite for other useful processes, but it is not in itself useful, since it requires you to already have the information.
I don’t think this is a useful distinction, but if you insist...
You said: “That only works if you have numbers.” Then the answer is: “Luckily, you can find numbers.”
Finding relevant numbers is significantly difficult in most circumstances.
That phrase is so general as to be pretty meaningless.
I do not subscribe to the notion that anything not expressible in math is worthless, but “in most circumstances” the inability to find any numbers is a strong indication that you don’t understand the issue well.
Yes, that’s the whole point. There aren’t always numbers you can find; even when there are, finding them is nontrivial, and you often have to deal with the ambiguous situation or problem regardless.
What you said here is a vast oversimplification; if you have gotten to the point where you can find relevant numbers, you have already successfully navigated most of the ambiguity.
Is there still an inferential gap here? I thought I made my point clear about three comments ago, but this is clearly not as obvious a distinction as I expected it to be.
And that’s where you are being misled by your insistence on “finding” numbers instead of “making” them.
It’s pretty easy to construct estimates. The problem is that without good data these estimates will be too wide to the point of uselessness. So:
1. Construct an estimate, however wide.
2. Think, find some data, clean some existing data, and maybe narrow the estimate down a bit.
3. Go back to 1 and repeat until you run out of data or the estimate is narrow enough to fit its purpose.
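A toy version of that loop, with invented data (nothing here is a prescribed method, just a sketch of the narrowing process):

```python
import math
import random

def narrow_estimate(data_batches, width_target):
    """Fold in one batch of data at a time and stop once the ~95% interval
    (mean +/- 2 standard errors) is narrow enough, or the data runs out."""
    values = []
    mean, half_width = float("nan"), float("inf")
    for batch in data_batches:
        values.extend(batch)  # step 2: find some data, clean some data
        n = len(values)
        mean = sum(values) / n
        var = sum((x - mean) ** 2 for x in values) / (n - 1)
        half_width = 2 * math.sqrt(var / n)
        print(f"n={n:4d}  estimate = {mean:.2f} +/- {half_width:.2f}")
        if 2 * half_width <= width_target:  # narrow enough for its purpose
            break
    return mean, half_width

# Hypothetical data source: batches of noisy measurements of some quantity.
random.seed(1)
batches = [[random.gauss(42.0, 10.0) for _ in range(25)] for _ in range(20)]
narrow_estimate(batches, width_target=3.0)
```

Each pass prints a successively narrower interval, which is the loop above in miniature.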
Ambiguity isn’t some magical concept limited to the humanities. The whole of statistics is dedicated to dealing with ambiguity. In fact, my standard definition of statistics is “a toolbox of methods to deal with uncertainty”.
I understand your point, I just think it’s mistaken.
I consider all the things you’ve said to be my best arguments why you’re wrong, so there’s clearly something wrong here. But I’ve run out of novel arguments and can’t figure out where the disconnect is.
What is that statement of mine to which you are assigning the not-true value?
You seem to think that it is generally easy to turn arbitrary ambiguities into numbers in a way amenable to using statistics to resolve them. I find that to be obviously, blatantly false.
Where you see things like this:
I see something more like
Where the difficult part is gathering data. If you can gather data that is relevant, then statistics are useful. But often you can’t, and so they aren’t. I outlined the exact same process as you; I’m just significantly more pessimistic about how often and how well it works.
Yes, I do.
No, I do not. I said nothing about “resolving” things.
When I say “numbers” in the context of statistics, I really mean probability distributions, often uncertain probability distributions. For example, the probability of anything lies somewhere between zero and one—see, we don’t have any information, but we already have numbers.
You’re likely thinking that when I am turning ambiguities into numbers, I turn them into nice hard scalars, like “the probability of X is 0.7”. No, I don’t. I turn them into wide probability distributions, often without any claims about the shape of these distributions. That is still firmly within the purview of statistics.
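One way to make “wide distributions without shape claims” precise is with distribution-free bounds. Here is a small sketch using Hoeffding’s inequality (my example, not something from the thread): for an unknown probability p estimated from n yes/no observations, |p̂ − p| ≤ ε with the stated confidence, where ε = sqrt(ln(2/(1 − confidence)) / (2n)), with no assumptions about the shape of anything.

```python
import math

def hoeffding_interval(successes, n, confidence=0.95):
    """Distribution-free interval for an unknown probability p, based only
    on n binary observations. No shape assumptions: the width comes from
    Hoeffding's inequality, P(|p_hat - p| >= eps) <= 2 * exp(-2 * n * eps**2)."""
    p_hat = successes / n
    eps = math.sqrt(math.log(2 / (1 - confidence)) / (2 * n))
    return max(0.0, p_hat - eps), min(1.0, p_hat + eps)

print(hoeffding_interval(7, 10))      # few observations: very wide
print(hoeffding_interval(700, 1000))  # more observations: much narrower
```

With no data at all the interval is simply [0, 1], which matches the parent comment’s point that “we already have numbers” before any information arrives.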
If you have no data, nothing is useful. Remember, the original context was how the humanities teach us to deal with ambiguity. But if you have no data, the humanities won’t help, and if you do, you can use numbers.
I’m not saying that everything should be converted to numbers. My point is that there are disciplines—specifically statistics—that are designed to deal with uncertainty and, arguably, do it better than handwaving common in the humanities.
Your confidence in your ability to do statistics to everything is clearly unassailable, and I have no desire to be strawmanned further.
This is part of critical thinking. Taking a vaguely defined or ambiguous problem, parsing out what it means and figuring out an approach.
I’m rather curious:
If you take people across a big swath of humanities, and ask them about subjects where there is a substantial amount of debate and not a lot of decisive evidence—say, theories of a historical Jesus—how many of those people are going to describe one of those theories as more likely than not?
Like, if you have dozens of theories that you’ve studied and examined closely, are we going to see people assigning >50% to their favored theory? Or will people be a lot more conservative with their confidence?
BTW, the probability that the Jesus character in the four Gospels was based on a real person would be a great question to ask in the next LW census/survey.
Was Bram Stoker’s Dracula “based on” a real person? Possibly, given an extremely weak interpretation of “based on”.
What does it take for a fictional character to be based on a real person? Does it suffice to have a similar name, live in a similar place at a similar time? Do they have to perform similar actions as well? This has to be made clear before the question can be meaningfully answered.
That’s an extraordinarily weak “based on”. The Dracula/Tepes connection in Bram Stoker’s work doesn’t go much beyond Stoker borrowing what he thought was a cool name with exotic, ominous associations (and that “exotic” is important; Eastern Europe in Stoker’s time was seen as capital-F Foreign to Brits, which comes through quite clearly in the book). Later authors played on it a bit more.
The equivalent here would be saying that there was probably someone named Yeshua in the Galilee area around 30 AD.
Was Yeshua that uncommon of a name? You’re setting the bar pretty low here. (That being said, my understanding is that there’s a strong scholarly consensus that there was a Jew named Yeshua who lived in Galilee, founded a cult which later became Christianity, and was crucified by the Romans controlling the area. So these picky ambiguities about “based on” aren’t really relevant anyway.)
Not that uncommon, no. I’m exaggerating for effect, but the point should still have carried if I’d used “Yeshua ben Yosef” or something even more specific: if you can’t predict anything about the character from the name, the character isn’t meaningfully based on the name’s original bearer.
There is also a strong scholarly consensus that anthropogenic global warming is occurring, and yet plenty of LW census respondents put in numbers not very close to 100%.
That is true, and intentional. It is far from obvious that the connection between the fictional Jesus and the (hypothetical?) historical one is any less tenuous than that (1). The comparison also underscores the pointlessness of the debate: just as evidence for Vlad Dracul’s existence is at best extremely weak evidence for the existence of vampires, so too is evidence for a historical Jesus at best extremely weak evidence for the truth of Christianity.
(1) Keep in mind that there are no contemporary sources that refer to him, let alone to anything he did.
I predict you’d get a minority of people using it as a proxy for atheism, another minority favoring it simply because it’s an intensely contrarian position, and the majority choosing whatever the closest match to “I don’t know” on the survey is.
I seem to remember reading that virtually all serious scholars agree that there was a historical Jesus, and that the opposite claim is considered a fringe idea along the lines of homeopathy, so soundly has it been debunked. My memory might be exaggerating, but I think the gist is correct.
Could you have picked an example where one side isn’t composed entirely of crackpots?
Which side are you claiming to be crackpots?
Seriously, I can’t see how anyone could claim that Jesus was ahistorical who isn’t doing some combination of reverse-stupidity on Christianity and taking an absurd contrarian position for the sake of taking an absurd contrarian position.
Edit: fixed typo.
Am I correct in reading “a historical” as “ahistorical” and not as “a historical figure”?
I would think that believing Jesus didn’t exist would be just as absurd as thinking that all or almost all of the events in the Gospels literally happened. Yet the latter group makes up a significant number of practicing Biblical scholars. And the majority of Biblical scholars who don’t think the Gospels are almost literally true still have a form of Jesus-worship going on, as they are practicing Christians. It would be hard to think that Jesus both came back from the dead and also didn’t exist; meaning that it would be very hard to remain a Christian while also claiming that Jesus didn’t exist, and most Biblical scholars were Christians before they were scholars.
The field is both biased in a non-academic way against one extreme position and gives cover and legitimacy to the opposite extreme position.
Modern day people who believe there was no real historical preacher, probably named Yeshua or something like that, wandering around Palestine in the first century, and on whom the Gospels are based, are crackpots. Their position is strongly refuted by the available evidence. You don’t have to be a theist or a Christian to accept this. See, for example, pretty much any of the works of Bart Ehrman, particularly “Did Jesus Exist?”
There are legitimate disputes about this historical figure. How educated was he? Was he more Jewish or Greek in terms of philosophy and theology? (That he was racially Jewish is undisputed.) Was he a Zealot? Etc. However, that he existed has been very well established.
Depends on your definition of crackpots. I don’t think most Jesus scholars are crackpots, just most likely overly credulous of their favored theories.
What I’m curious about is if people in these fields that are starved for really decisive evidence still feel compelled to name a >50% confidence theory, or if they are comfortable with the notion that their most-favored hypothesis indicated by the evidence is still probably wrong, and just comparatively much better than the other hypotheses that they have considered.
I think he meant “jesus myth” proponents, who IIRC are … dubious.
Well, hence “historical Jesus”. If I were talking about Jesus mythicists, I would have said that. I ignorantly assume there aren’t that many Jesus-mythicist camps fighting it out over specific theories of mythicism...
I’m actually looking forward to Richard Carrier’s book on that, but I do not expect it to decide mythicism.
Perhaps the ability to work with poorly-defined objectives? Including how to get some idea of what someone wants and use that to ask useful questions to refine it?
Now if only the humanities departments of most universities taught any of those things, rather than the latest PC/SJ fashionable nonsense.
According to the “Academically Adrift” study, humanities and social science majors show the second highest gains in critical thinking skills, behind only science/math, above engineering and computer science.
To the extent that students are showing limited and declining learning, it largely reflects a switch to business and education majors (business shows the least learning, with education right behind), not a weakening of humanities majors.
Is this a reflection of the influence of course participation or of reasoning capability prior to entry?
For some reason, I very much want this to be true. And I take that as a warning sign. Does anyone know if it is true? And what sort of test could possibly measure ‘maths creativity’ and ‘english creativity’ on the same scale anyway?
They aren’t measuring field-specific skills, which is the whole point. They are measuring gains in critical thinking using the CLA test (i.e., how much better you get at general critical thinking as a result of studying your major). The study itself was quite famous and made the blog rounds a few years ago; I’m sure some light googling will answer any other questions.
There’s a joke somewhere in education majors being near the bottom when it comes to learning, but at the moment I don’t know how best to make it.
… Those that can’t teach, teach teaching.
James Pollock
-- The Black Opera by Mary Gentle
“The best way to sort out confusion is to expose it”—Richard Dawkins. (In The Greatest Show on Earth, p. 157.)
There is no such thing as absolute truth. … People are less deceived by failing to see the truth than by failing to see its limits.
Senac de Meilhan
Nick Szabo
The very narrow choice of values and their seemingly libertarian phrasing implies some hidden criteria for what constitutes “a good answer”—which enables whoever follows this advice to immediately dismiss a proposal based on some unspecified “good”-ness of the answer without further thought or discussion, and dramatically downgrade their opinion of the proposer in the bargain. This seems detrimental to the rational acquisition of ideas and options.
EDIT: Criticism has since been withdrawn in response to context provided below.
The quote doesn’t give that impression in context, including the comments—it’s actually a statement about the importance of the rule of law. From the comments, Nick notes:
Acknowledged, and criticism withdrawn.
Trivially true, as one who cannot point out the difference is ignorant in the field of legal systems. I guess that is not what is meant?
I’m currently reading Rapture of the Nerds by Cory Doctorow and Charles Stross, and came across this passage:
It’s funny in context, as both of them originate from a forced upload and the one speaking here just gave legal testimony in favor of forcibly uploading the entire Earth and converting the planet to computronium.
Also, the Committee (who were receiving the testimony) had to run a very large number of parallel Huw-instances and boil them down to representatively divergent samples, because they have no concept of CEV. Oh, the irony.
-- Raymond Chandler
I’ve always been skeptical of anything which uses “truth” to mean something other than “is factually correct”. It almost invariably is used as an excuse to say “we can’t show this is factually correct, but we want you to treat it as such anyway”.
Examples?
The first part could be read as: art (morality, aesthetics, appreciation of humanity) can keep us from certain scientific methods (http://en.wikipedia.org/wiki/Nazi_human_experimentation#Freezing_experiments) or conclusions (human biodiversity). Regarding the freezing experiments, I wouldn’t be surprised if that knowledge has saved more people than were killed in the experiments. While “shut up and calculate” is popular around here, I think a lot of people would have a problem with such experiments, no matter how large the net positive is.
The second part could be read as being against post-modernism/relativism/new-age b.s. Sadly the pointed, acknowledged absurdity of dada and surrealism has gone mainstream, and “What I say is art is art” is interpreted non-ironically.
Looking at modern art, I’d say it’s not doing a good job...
My science, unrestrained by mere art, will reveal inhuman laws of physics! I will prove inhuman mathematical theorems and research an inhuman cure for cancer!
...Seriously, is that saying anything beyond “both artists and scientists should have high status”?
Context: The quotes here are taken from the C.S. Lewis sci-fi novel Perelandra, in which the protagonist, Ransom, goes to an extremely idyllic Venus to make philosophical discoveries and box with a man possessed by a demon.
These quotes come from the beginning of the novel when Ransom is attempting to describe the experience of having been transported through space by extraterrestrial means which had augmented his body to protect it from cold and hunger and atrophy for the duration of the journey.
This discussion (taking place in a debate over the Christian afterlife) touches upon certain sentiments about how the augmentation (or, for Lewis, glorification) of modern human bodies does not lessen us as humans but instead only improves that which is there.
C.S. Lewis, Perelandra, p. 29.
This feels like a combination of words that is supposed to sound Wise but doesn’t actually make sense. (I guess Lewis uses this technique frequently.)
How specifically could being “definite” be a problem for language? Take any specific thing, apply an arbitrary label, and you are done.
There could be a problem when a person X experienced some “qualia” that other people have never experienced, so they can’t match the verbal description with anything in their experience. Or worse, they have something similar, which they match instead, even when told not to. And this seems like a situation described in the text. -- But then the problem is not having the shared experience. If they did, they would just need to apply an arbitrary label, and somehow make sure they refer to the same thing when using the label. The language would have absolutely no problem with that.
Since any attempt to defend the quote itself will only come off as a desire to shoehorn my chosen author into the rationality camp, I’ll just give the simple reason why I chose to include that quote instead of stopping with the two previous:
I felt it touched on the subject of inferential distance and discussing reality using labels in a manner that was worthy of attention.
This remark seems to flow from an oversimplified view of how language works. In the context of, for example, a person or a chair, this paradigm seems pretty solid… at least, it gets you a lot. You can ostend the thing (‘take’ it, as it were) and then apply the label. But in the case of lots of “objects” there is nothing analogous to such ‘taking’ as a prior step, discrete from talking. For example, “objects” like happiness, or vagueness or definiteness themselves.
I think you may benefit from reading Wittgenstein, but maybe you’d just hate it. I think you need it though!
I’m not sure I follow your comment. I think I get the basic gist of it and I agree with it, but I gotta ask: did you really mean ostend (or was it a typo)? I can’t really find it as a word on m-w.com or on Google.
Yep, what The Ancient Geek said. Sorry I didn’t reply in a timely way—I’m not a regular user. I’m glad you basically agree, and pardon me for using such a recherché word (did I just do it again?) needlessly. Philosophical training can do that to you; you get a bit blind to how certain words that could be part of the general intellectual culture are actually only used in very specific circles. (I think ‘precisification’ is another example of this. I used it with an intelligent nerd friend recently and, while of course he understood it—it’s self-explanatory—he thought it was terrible, and probably thought I just made it up.)
Hope you look at Wittgenstein!
As in ostension: basically pointing, or a verbal substitute.
Yes. If they had the shared experience, they would just need to apply an arbitrary label. However, given how we learn language (by association, based on how words are used by people around us for what we see as objective events/experiences), I am not too confident the labels will match even after having the shared experience. My previous comment assumes this but did not make it explicit, and I derive the quote from that assumption. I may be wrong about the assumption (since it seems to be more of a thought experiment than a practical one at the moment), but nevertheless I assign it fairly high probability/confidence.
I tend to think of language as a symbolic system to denote/share/communicate these experiences with other brains. Of course, there’s the inherent challenge that seldom are two experiences the same (even if it is an experiment on electrons). It’s one reason a favourite sci-fi scenario of mine is brain-to-brain interfaces that figure out some way to interpret and transfer the empirical heuristic rules about a probability distribution (of any given event) from one person to another. Or maybe I’m just being too idealistic about people always having such heuristics in their heads (even if they are not aware of them). :-)
While we are quoting Perelandra...
A parallel passage from 1984:
‘You will understand that I must start by asking you certain questions. In general terms, what are you prepared to do?’
‘Anything that we are capable of,’ said Winston.
O’Brien had turned himself a little in his chair so that he was facing Winston. He almost ignored Julia, seeming to take it for granted that Winston could speak for her. For a moment the lids flitted down over his eyes. He began asking his questions in a low, expressionless voice, as though this were a routine, a sort of catechism, most of whose answers were known to him already.
‘You are prepared to give your lives?’
‘Yes.’
‘You are prepared to commit murder?’
‘Yes.’
‘To commit acts of sabotage which may cause the death of hundreds of innocent people?’
‘Yes.’
‘To betray your country to foreign powers?’
‘Yes.’
‘You are prepared to cheat, to forge, to blackmail, to corrupt the minds of children, to distribute habit-forming drugs, to encourage prostitution, to disseminate venereal diseases—to do anything which is likely to cause demoralization and weaken the power of the Party?’
‘Yes.’
‘If, for example, it would somehow serve our interests to throw sulphuric acid in a child’s face—are you prepared to do that?’
‘Yes.’
‘You are prepared to lose your identity and live out the rest of your life as a waiter or a dock-worker?’
‘Yes.’
‘You are prepared to commit suicide, if and when we order you to do so?’
‘Yes.’
‘You are prepared, the two of you, to separate and never see one another again?’
‘No!’ broke in Julia.
It appeared to Winston that a long time passed before he answered. For a moment he seemed even to have been deprived of the power of speech. His tongue worked soundlessly, forming the opening syllables first of one word, then of the other, over and over again. Until he had said it, he did not know which word he was going to say. ‘No,’ he said finally.
There’s a passage by Lewis, and probably from Perelandra, which is to the effect that people’s actual choices are from a deeper part of themselves than the conscious mind. Might you happen to know it?
Off hand, I don’t recall. There is a moment at the end of the book where Ransom has a revelatory experience of all life in existence and understands it as an interlocking dance, something that fits neither his theory of predestination nor free will.
Actually, looked up some quotes and found this:
Show me a good loser, and I’ll show you a loser.
-- Vince Lombardi
Not true; one of the key skills needed to improve at most games where there are chance factors is the ability to distinguish cases where you did the right thing and lost anyway from those where you made mistakes and lost because of them. You have to take loss gracefully and focus on mistakes and expected outcomes.
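To put a toy number on that, here is a small simulation under made-up assumptions (a game where even the correct move wins only 60% of the time):

```python
import random

GOOD_MOVE_WIN_PROB = 0.60  # invented: the right play is no sure thing

random.seed(7)
trials = 10_000
losses = sum(random.random() >= GOOD_MOVE_WIN_PROB for _ in range(trials))
print(f"right move, still lost: {losses / trials:.1%} of games")
```

Roughly 40% of the time you play correctly and lose anyway, so grading yourself purely on outcomes would punish the correct decision in all of those games; hence the focus on mistakes and expected outcomes rather than results.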
Trivially true, and the meaning is ambiguous depending on the meaning of “good” (skilled/frequent at losing? able to handle losing without psychological distress? capable of pulling some benefit out of a “loss”?). Is there context which might illuminate the connotation?
There’s probably cultural context you’re missing (I’m guessing you’re not a native English speaker, or at least not American), because it’s pretty straightforward from here without any textual context.
A “good loser” is idiomatically someone who can accept defeat graciously (i.e. not get bitter or angry at the opponent). The quote says that anyone who doesn’t get offended by their own losses won’t improve and will remain a loser.
If you get offended by losing, that’s not an incentive to improve beyond a pretty low threshold. It’s an incentive to avoid tough competition and remain a medium-sized fish in a tiny pond.
I actually am a native, American English speaker, and while I am aware that the common usage refers to somebody who is able to handle loss without taking offense, I did not rest on the assumption that the common usage was the relevant usage here. I would consider the meaning of the quote given the common usage inaccurate, as I find the implication that a gracious loser is necessarily an unmotivated loser incorrect. Therefore, I left open the possibility that the quote might use a less common meaning of the term “good loser”.
The speaker is a football guy, if that helps. But yes, I also find it a distasteful remark. You can improve without being in poor form in front of others (or even in private, really). And it’s pretty rare to literally NEVER lose.
I think being a good loser is more than that. Not investing more resources into a losing project because of the sunk-cost bias is one of the skills that make someone a good loser.
It’s true if you think that winning arbitrary competitions is important, and false if you can place things in a wider context. Consider losing to your boss.