Rationality Quotes February 2012
Here’s the new thread for posting quotes, with the usual rules:
Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
Do not quote yourself.
Do not quote comments/posts on LW/OB.
No more than 5 quotes per person per monthly thread, please.
Kurt Vonnegut, Breakfast of Champions
The most beautiful explanation of Hansonian signalling I’ve seen.
With all due respect to Robin, this very thread supplies prior art for this idea :).
Having an inkling about the existence of gravity is different from figuring out the motions of all the planets. Hanson actually built the idea into useful models. He gets the name. :D
Joel Stickley, How To Write Badly Well
-- Steven Kaas
I’m surprised that this got 32 upvotes in a community whose members in general believe that you are your brain. Do all 32 of you believe in some sort of dualism?
Steven and most of the people here (including me) do indeed believe that “you are your brain” in the sense that the mind is something that the brain does. But Steven’s epigram is using “you” in a narrower sense, referring to just the conscious, internal-monologue part of the mind.
In the fable of the fox and the grapes, it’s the fox’s brain that is the proximate cause of him giving up the attempt to get the grapes, but it’s the “creepy vizier” part of his mind that makes up the “I didn’t want them anyway” story.
(Edit: I should have said “most of the other people here” in my first sentence. In case you didn’t know it, Steven Kaas is an LWer. He is kind enough to let me and others earn tons of karma by quoting his Twitter bons mots.)
I don’t see how this implies dualism, nor why materialism implies some sort of ultra-strong unified mind with no divisible components such that it makes no sense to analogize to a king and vizier.
FWIW I find the quote kind of weird, as well. I don’t think it’s referring to dualism, but I can’t figure out what it does mean.
It’s illustrating the thing from psychology where your conscious self (the “you” in “you are” here) often seems to be more about making up narratives about why you do things you somewhat unconsciously decide to do, rather than fully consciously deciding to do what you do.
It’s not terribly obvious normally, but scary stuff happens when you get a suitable type of brain damage. Instead of necessarily going “hm, my introspective faculties seem to be damaged and I’m doing weird stuff for no reason I can ascertain”, people often start happily explaining why it is an excellent idea for the king of the brain who has been replaced with a zombie robot during the brain damage to start lumbering around moaning loudly and smashing things at random.
Yvain’s post The Apologist and the Revolutionary from a couple of years ago had some fascinating and mind-boggling discussion of other bizarre things that result from particular brain damage.
Where’s the dualism?
--Alain de Botton
But...how can a mass of chemicals in a saline solution be said to have moods in the first place?
How can entries in a ledger or words in a book be said to have moods either? Answering that question for anything is verging on the Hard Problem of Consciousness.
I admit, all of Alain’s considered options seem hopeless to me, those two included.
Because those moods are defined as varying states of the chemicals in the saline solution.
Defining the moods that way seems wrong. If some alien species evolved with a distinct biochemistry but had some similar moods, we wouldn’t assert that they weren’t the same. Rather, the moods correspond to varying chemical states in these bags of saline solution, states that predict some macroscopic behaviors of those containers, such as their tendency to damage other containers or to engage in activities that produce new containers.
So Alain’s claim is then “Our varying states of chemicals in a saline solution are unstable because we are only chemicals in a saline solution”?
That’s trivial as can be.
It is trivial, but it’s because the great-grandparent of your comment is essentially a statement of the parent. When you combine the two, you get a pretty trivial statement.
The parent is not, as the grandparent takes pains to be, ‘respectful to the complexity of the human condition’. The grandparent is obviously true. The parent is fairly unintelligible: in order to make sense of it, we have to define moods in such a way as to make it an empty tautology.
Is this true? Naive Googling yields this, which suggests (non-authoritatively) that blood sugar and moods are indeed linked (in diabetics, but it’s presumably true in the general population). However, despair is not noted and the effects generally seem milder than that (true despair is a rather powerful emotion!)
Blood sugar is very closely linked to self-control, including suppression of emotion. While this may appear to be a different thing, it isn’t: when you include feedback loops and association spirals, a transient, weak emotional distraction can become deep and overwhelming if normal modes of suppression fail.
See here, here and here.
Anecdotally: I’m not diabetic that I know of, but my mood is highly dependent on how well and how recently I’ve eaten. I get very irritable and can break down into tears easily if I’m more than four hours past due.
--Heretics, G. K. Chesterton
I was interested in the context here. Chesterton was referencing Wells’ original belief that the classes would differentiate until the upper class ate the lower class. Wells changed his mind to believe the classes would merge.
The entire book is free on Google Books.
At the point where those are the two hypothesises being considered there may be larger problems.
I think you’ve got problems at the point where you’re using that language to write your hypotheses.
In the Time Machine, it’s the other way round.
– Bertrand Russell
George Bernard Shaw
In my experience I’ve noticed the reverse, but I could be persuaded otherwise with statistics.
Dara O’Briain
Dara O’Briain
He also paraphrases what I’ve seen described as “the Minchin Principle” a few sentences later.
-Greg Egan, Distress
-- Paul Graham
Being well-calibrated is great, but it sounds like rtm isn’t even wrong in retrospect. I much prefer to say wrong things very loudly so that I will discover when I am in error.
Calibration is awesome. However, note that without an audience like the NSA or Paul Graham, this is probably sub-optimal signaling.
I have to say, I haven’t found calibration hugely useful. It’s certainly nice, but for the most part people ignore you.
Does it give you better answers, though?
Sure, but I find that most of what I do is not dependent on small probability increments.
-Voltaire (usually presented as, “It is dangerous to be right when the government is wrong.”)
“Heads I Win, Tails You Lose” by Venkat Rao
It’s also a good introduction to Nietzsche. (I find that most introductions to Nietzsche are good as long as they are humorous and informal enough that they wouldn’t be used in philosophy class.)
Space-time is like this set of equations, for which any analogy must be an approximation.
That’s certainly a mistake that many people make, but we shouldn’t consciously correct for it unless it’s a bias with predictable direction. Does excessive belief in common-sense analogies really cause more problems than excessive belief in new shiny ideas? How do you tell?
Geoffrey Warnock
Charles S. Peirce
That this quote has almost the same number of upvotes as this comment is a good sign, I guess. Curious that the other one collected all the replies that might’ve gone here, though.
Saint Augustine of Hippo, Confessions
--W. V. O. Quine
--Madonna
Scott Aaronson
--Jane Espenson
That is brilliant, I’m taking that one. It’s refreshing to see an alternative to the typical belligerently optimistic ‘motivational’ quotes that deny the rather significant influence of chance.
Well, but it can also be interpreted as a recursive definition expanding to:
Daniel Abraham, The Dragon’s Path
Jonathan Bernstein
Working in market research, I have to resist the impulse to point this out practically every day.
-E.H. Gombrich
Is that true or is Gombrich just handling a needle convincingly?
Either both are true, or neither.
Yeah, I spent a few minutes as I was falling asleep trying to rationalize that but don’t remember if I came up with anything sensible.
ETA: Something to do with metaphors and level-crossing.
How’s that then? Suppose Gombrich is Hing a NC; it doesn’t follow that anyone who can HaNC can make us see a nonexistent thread; perhaps it’s necessary but not sufficient. On the other hand, maybe it is true that anyone who can HaNC can make us see things, but Gombrich is fumbling his needle; it’s just not noticeable because the thread actually exists in that case.
Not necessarily true. Could be that it only works in some subset of cases, of which Gombrich’s happens to be one.
-- Richard Carrier
Richard Feynman, in Surely You’re Joking, Mr. Feynman, chapter entitled “Mixing Paints”.
I may say that this is the greatest factor—the way in which the expedition is equipped—the way in which every difficulty is foreseen, and precautions taken for meeting or avoiding it. Victory awaits him who has everything in order — luck, people call it. Defeat is certain for him who has neglected to take the necessary precautions in time; this is called bad luck.
Yeah, but that’s not very useful for telling when you’re taking sensible precautions and when you’re just packing cans of shark repellent.
Not necessarily. Note that you take precautions because you foresee difficulties. If you intend to go diving in shark-infested waters… or, indeed, any body of water that might conceivably host sharks… then considering that fact in advance, purchasing shark repellent, and having it on hand during the dive is totally sensible. If you’re going to the South Pole instead, then shark repellent is worse than useless; its presence will serve merely as additional weight to hinder your progress. The difference is, as the quote suggests, a question of whether you’re preparing because you’ve carefully considered the situation in advance and determined that the preparation in question is necessary to your task… or whether you don’t really have a solid idea of why you’d need to do a given thing, but it seems like something that might be useful for a reason you haven’t considered carefully enough to describe in words.
-Vi Hart, Doodling in Math: Spirals, Fibonacci, and Being a Plant- Part 3 of 3
Judea Pearl (Causality)
Gary Drescher (Good and Real)
On countering, see also one man’s modus ponens is another man’s modus tollens.
--Razib Khan, here
H. Jackson Brown
(The second-last paragraph of The Power of Agency by Lukeprog reminded me of it.)
I wonder to what extent people who become famous find a way, fairly early in their careers, to have other people do the routine work for them.
Neal Stephenson, Quicksilver.
I was pondering whether to cut the quote at this point, or to include the rest of the dialogue between natural philosopher Daniel Waterhouse and alchemist Enoch Root. I decided to cut the quote here firstly because otherwise it would be too long, and secondly because the rest of the dialogue does not have the same stirring, “yay science!”, “yay modernity!” feeling of Daniel’s tirade. But it is thought-provoking, so I include it below, with some reflections after it:
How do you interpret this? The best interpretation I can make for what Root is saying is that when you describe Nature in abstract, mathematical/geometrical ways, you will end up confusing your abstraction for reality—and then anything which does not fit with your abstraction (like the pole does not fit in the Cartesian grid) will seem inherently mysterious, even though its mystery is an illusion of your abstract description and it is not more inherently mysterious than the pole is inherently different from other points on Earth.
This resonates with the view some philosophers have on the hard problem of consciousness and how to dissolve it: the idea goes that modern science describes nature in quantitative terms and pushes everything qualitative to the subjective realm (e.g., light is “in reality” electromagnetic waves defined as such and such equations, and color is the subjective perception of it and exists only “in the mind”) and then qualia seem inherently mysterious and not-fitting with the rest of nature, but this is only because we have confused our abstractions for reality. The more recent Putnam has said things along these lines, as well as several “neo-Aristotelian” philosophers. But I wouldn’t have associated Stephenson with such views, and yet Root seems to be speaking for him here, so I am a bit confused.
Hunh? It’s just an allusion to non-Euclidean geometry and the Gauss-Bonnet theorem, which prevents any Cartesian grid system from working on the sphere.
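For readers who want the theorem being invoked, here is the standard statement (textbook background, not something spelled out in the thread):

```latex
% Gauss-Bonnet for a closed surface M with Gaussian curvature K and Euler characteristic \chi(M):
\int_M K \, dA = 2\pi \chi(M)
% For the sphere, \chi(S^2) = 2, so the total curvature is 4\pi. A global Cartesian grid would
% force K = 0 everywhere, giving total curvature 0, so no such grid can cover the whole sphere.
```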
Yes, that is the surface meaning, but it seems to me there must be a secondary one. Daniel’s tirade in the previous comment is not just saying “we will be able to draw accurate maps using a Cartesian grid” (otherwise, why say “that will be the end of Alchemy”? what does that literal meaning have to do with alchemy?). Notice that he is responding there to Root’s assertion that there is little contrast between alchemy and “the younger and more vigorous order of knowledge that is associated with your club”, i.e. modern science (the club is the Royal Society). So I take him to mean that the new scientific method, which relies on precise, mathematical thinking as opposed to the qualitative, semi-mystical thinking in alchemy (this is what “Cartesian grid vs dragons” stands for), will carry the day and eliminate alchemy. So I think that Root’s reply that “you will leave out the poles” must have a hidden interpretation that fits in this broader argument, besides the surface one you point out.
That there must be a second meaning is also supported by Daniel saying with a sigh “Very well, perhaps we’ll get back to Alchemy in the end”—you wouldn’t need alchemy to draw a map with a different projection that includes the pole!
Well, it’s been pointed out on occasion that modern physics did get back to alchemy—in the sense of transmuting elements (radioactivity). Personally, I took Root as referring to what the alchemists did achieve: apparent immortality, given his presence in Cryptonomicon. The younger order achieved a great deal, but just as map projections always have difficulties caused by mapping 3D to 2D, the younger order has difficulties with a few singular parts of the territory, if you will.
Ah, nothing like a good old-fashioned book-burning.
On the Outside View:
--Steven Kaas
What lessons? The WP link was interesting, but I didn’t catch anything other than “defunct empire”.
To explain: the Outside View is a powerful tool, but one should sometimes reject it based on even more powerful factors from the Inside View, when one can be sure that one is in a new (or at least different) reference class from the one being used in the Outside View. Of course, one may also want to reject it based on nothing more than one’s own views...
This sometimes leads to a back-and-forth series of arguments over burdens of proof dubbed ‘reference class tennis’ where the two sides argue over what is the correct reference class which will either support or undermine a particular claim (is AGI in the reference class of “additional incremental innovation”, which would undermine claims of significant danger/reward, or entire “regime changes”, which would support the same claims? This is the game of reference class tennis which Eliezer and Hanson are arguing their way through in the link and related links).
Kaas is humorously parodying a side using an Outside View involving the Neo-Sumerian Empire, replying to the other side making the commonsense position—yours too (‘what lessons?’) - that the quasi-literate agricultural Neo-Sumerian Empire from 3000 years ago is not in any reference class that matters to us, and implying that the speaker is writing the other side off as rationalizing and excuse-seeking. The parody works because we agree that in this case, the Outside View is not applicable or its weak evidence is overwhelmed by Inside View evidence about how different the Neo-Sumerian Empire is from any contemporary societies or organizations or processes, and this reminds us that often Outside View arguments simply may not work (eg. arguments from evolutionary psychology, which draw from time periods and societies even more distant from and less like our own than the Neo-Sumerian Empire).
And now that I’ve explained it entirely, I can no longer find it funny. I hope you’re happy.
Thanks. The statement you quoted was meant as a continuation of this, in case that makes it less confusing. I should probably have made that explicit.
At least that explanation was fun to read :) Thanks.
--Thomas Sowell
(On a related theme:) Intelligent folk may be better at processing evidence and drawing correct conclusions, but this is to some extent counteracted by the massive selection effects on what evidence they actually encounter.
Other than various social effects (“everyone knows about the Pythagorean Theorem”), in what areas do you think intelligent people generally have worse information than their “normal” peers?
Neal Stephenson, The Confusion
Funny, I guess, but how is it rationality related?
“Imaginary horses are much slower than the other kind.” Pretending to have horses doesn’t allow you any of the benefits of having a horse, such as going faster.
Ah, I guess I was reading it with the wrong inflection. Thanks.
That is quite rational. However, some studies have shown that imagining (pretending) that one is doing physical exercise can help heal the body, much as actual physical exercise does. I find that children who imagine themselves as animals while playing often develop some amazing skills of both mind and body.
A long way of saying that the power of our minds can sometimes stretch the known limits of rationality.
I wouldn’t say that “stretches the known limits of rationality.” If imagination can help physical development, I desire to believe that imagination can help physical development. If imagination does not help physical development, I desire to believe that imagination does not help physical development.
That’s not because the horses are slower, it’s because you can’t sit on them.
--Ben O’Neill, here
Considering that the above quote can be used to criticize nearly any popular political position, I don’t think it is inherently mind-killing. Also, since we all agree democracy is a good thing, this isn’t even very political. The original article and context obviously do make it somewhat political.
I don’t think everyone here would agree that democracy is a good thing.
Obviously you are right on that. I should have said:
What I really meant by this is that democracy is something very well entrenched and accepted in Western society, and even on LessWrong. Dissent from democracy isn’t threatening heresy; it is the mark of an eccentric.
Paul Graham has written quite extensively on why some things are considered “threatening heresy”, and other things mere eccentricity. Ultimately, he concludes that in order for something to be tabooed, it must be threatening to some group that is powerful enough to enforce the taboo, but not powerful enough that they can safely ignore what their critics say about them. Democracy is currently so entrenched in western civilization that it doesn’t have to give a fuck if a few people here and there criticize it occasionally.
The same is true of people who call for a dictatorship or any non-democratic form of government. They also always imagine it will be governed by “the right people”, and imagine all the things “the right people” could accomplish if freed from the need to listen to the “ignorant mob”.
Yes I fully agree. But it shouldn’t be underestimated that when it comes to non-democratic forms of government what kind of people are in power genuinely does have a big impact on how the country is run.
Wanting a philosopher king isn’t a bad idea if you aren’t mistaken about the philosopher king in question.
What kind of people are in power has a big impact under all forms of government, democracy included.
Or about your definition of “Philosopher king” in the first place. The character of Marcus Aurelius fit the preferences of those in Rome who dreamt of such a philosopher king; yet he was a poor ruler who displayed apathy—including going against his moral intuitions so as not to actually do anything, like finding gladiatorial games distasteful but making no attempt to limit them—and mediocre crisis management.
Do we agree on that? I think there are quite a few on LessWrong who are no more in favour of democracy than Ben O’Neill. Or by linking “democracy” to the Sequences post on applause lights, do you mean to imply you mean the opposite of that sentence? Yet it is embedded between two others apparently intended straightforwardly.
That democracy can reliably be used as an applause light is a sign that we as a society agree it is indeed a good thing.
But not a sign that it is indeed a good thing.
Or, if I model human behavior correctly, it could also have been a sign that we as a society at one point agreed that it is a good thing but now agree that we agree that it is a reliable applause light. (But I don’t think democracy-approval has devolved to that level yet. We actually do seem to think it is a good idea.)
From the mission statement of the school at which I studied political science:
Even if society-at-large agrees something is good, the LW community may disagree in whole or in part.
Other things society-at-large treats as good and applause lights include:
Belief in belief
Deathism
Tabooing tradeoffs of sacred values like human life
Probably a duplicate, but I can’t find a previous version:
H. L. Mencken
It’s in the wiki:
(but it’s good enough that it can be repeated now and then...)
--Bertrand Russell, Last Philosophical Testament: 1943-68, p. 178
David Deutsch, The Beginning of Infinity
tries
Yes, but it’s also logically impossible.
-William M. Briggs
Voted up for the link, but the meaning of the quote isn’t very clear out of context.
– Bertrand Russell, History of Western Philosophy, p. 98
Bertie is a goldmine of rationality quotes.
Also don’t confuse “logically coherent” with “true”.
You keep saying things I was gonna say. Dost thou haveth a blog perchance?
Downvoted for incorrect subject-verb agreement.
It was purposeful. It’s like “can i haz cheezburger?” but olde schoole.
You can’t get ye flask.
Un-downvoted. Sorry.
But it’s “i can haz cheesburger?” btw. ;)
I don’t believe you.
Really? There’s precedent in my other comments. Massacring grammar is a compulsion I indulge in when I don’t want to be seen as unreservedly endorsing something, in this case Eugine_Nier’s comments.
E.g. I sent this to Vladimir_M in a private message:
That’s a little much even for me, and I know what you’re talking about.
Edit: Ok, so apparently people think it actually is important to phrase it “hast thou a blog”. Shows what I know.
I would think it should be “Dost thou havest a blog?”
I’m voting for “Hast thou a blog?” if one wants to use period English, but I’m going by feel. Does anyone actually know?
May I suggest looking in period literature? If I Google Books “Hast thou a ”, I see in the first page of results hits from John Bunyan, 1678-1684 and William Shakespeare, c. 1591, among lesser lights.
Good point. Googling “Dost thou havest a ” turns up two results, one of which is Eliezer’s comment.
On the other hand, my instincts aren’t perfect. I’d have bet that “havest” wasn’t a word, but it is. “Hast” is a contraction of “havest”.
I was wondering whether the problem was that “dost havest” is redundant, but “havest thou a” doesn’t turn up anything period.
Yeah, “dost thou havest” would be much like “does he has”...
Thanks. Sorry, I don’t have a blog.
The human understanding is no dry light, but receives an infusion from the will and affections; whence proceed sciences which may be called “sciences as one would.” For what a man had rather were true he more readily believes. Therefore he rejects difficult things from impatience of research; sober things, because they narrow hope; the deeper things of nature, from superstition; the light of experience, from arrogance and pride, lest his mind should seem to be occupied with things mean and transitory; things not commonly believed, out of deference to the opinion of the vulgar. Numberless, in short, are the ways, and sometimes imperceptible, in which the affections color and infect the understanding.
-- Francis Bacon, Novum Organum (Aphorism XLIX), 1620. (1863 translation by Spedding, Ellis and Heath. You should read the whole thing, it’s all this good.)
Unsounded
Humanity becomes more and more of an accessory every day; with increasing power comes increasing responsibility.
I tried reading that story, but got stuck on the brat. Please tell me she gets better?
Not really, but there’s more focus on other characters as the comic goes on, and events get to show more sides to her (still basically bratty) personality.
--Burning Man organizers
Latest news: Burning Man blames game theory for their failure to understand basic supply and demand, hugely underprices tickets, 2⁄3 of buyers left in the cold, Market Economics Fairy cries.
That’s not a fair assessment of the organizers’ skill level.
They seem to have a nice firm grip on the effect of fixed supply, fixed price, and increasing demand:
What they didn’t predict was that the expectation of scarcity would further increase demand, creating a positive feedback loop. In their words:
So, they understand supply and demand (they just made a bad factual estimate of demand), and they didn’t really understand game theory—but after they made their mistake they publicly admitted it, asked around to see what they did wrong, and proposed strategies for mitigating the mistake.
Why are we mocking them again?
I gather they didn’t know how huge the demand would be this year.
Burning Man’s problem might be a good topic for LW to kick around. Suppose you have pretty good abundance: how do you ration access to excellent social venues without having barriers that do damage to the venues? Is this even possible?
In this particular case, not all attendees appear to be equally valuable to the event/other attendees. Giving priority to people who’ve organized cool things in the last few years may make sense.
Yes, this was my reaction - ‘let the price float, and give transferrable vouchers to the people who do the most awesome stuff; if they object, well, that’s why the vouchers are transferrable’. It’s not much different from what they’re already suggesting, telling the lucky ones to distribute excess tickets among people they like.
I don’t understand, won’t pricing the tickets higher just cause people to be disappointed that the tickets were too expensive for them, instead of there not being enough?
It’d probably lead to a roughly equal amount of personal disappointment once the dust settles, but less disruption to the community. Major projects, the kind that the newsletter’s alluding to when it talks about collaborations, aren’t cheap; members of the camps that put them on usually spend at least their ticket price on supplies, to say nothing of labor. That implies that there’s enough loose money floating around those projects that an increase in ticket prices wouldn’t be an insurmountable hurdle.
Of course, it may very well be such a hurdle for those burners who’ve joined the event as spectators; principle of inclusion aside, though, those participants aren’t as valuable to the organization or to each other as more committed folks. If there’s concern over raising the bar too high for marginal theme camps to participate, the organizers could divert some of the excess funds into grants or reduced-price tickets for that demographic.
I get the impression that this line of thinking looks too cold-blooded for the Burning Man organizers to take to heart, though. Hence the rather strained attempt at casting the problem in terms of “Civic Responsibility” and “Communal Effort”.
It will allow people who were willing to pay the market price to actually buy the tickets. If there is sufficient demand, then maybe a Burning Man 2 festival makes economic sense, or an increase in the supply of tickets for Burning Man itself.
We live in a world of limited resources, not of good wishes. Good wishes lead to deadweight losses. I don’t see a possible scenario where price control is a good idea - LAW of supply and demand.
Is there some societal interest that the market fails to protect here (is Burning Man a fundamental right applicable to a certain type of person)? If so, then we should have a BMPA (like the EPA) formed to regulate the event.
Intellectual freedom cannot exist without political freedom; political freedom cannot exist without economic freedom; a free mind and a free market are corollaries. - Ayn Rand
Welcome to Less Wrong! If you have time, feel free to introduce yourself to the community here.
F*cking Markets, How Do They Work?
From the blog post:
It seems pretty easy to solve: auction off all the tickets.
The Market Economics Fairy is pleased with you! She blesses you with sparkles from her wand!
What profit does she get from dispensing sparkles?
It improves the chance that further Market Economics will happen by rewarding people who produce it. It goes without saying that Market Economics is a terminal value to the Market Economics Fairy. If she was just interested in profit, she’d be starting a hedge fund instead of going around telling people about Market Economics.
Market Economics fairy should consider starting a hedge fund anyway and investing that money into a lobby group or other means of promoting Market Economics. I sincerely doubt emitting sparkles from her wand is where her comparative advantage lies.
What do you mean? The Market Economics Fairy is way better at emitting sparkles from her wand than anyone else, and has no special talent for managing hedge funds.
Maybe, but I’m pretty sure there are substitutes: both for the role of sparkles, and manual production of them using a wand.
Well now you’ve proved that the Market Economics Fairy should quit her job and found a startup aimed at roboticizing sparkle production. I hope you’re happy.
Very. :D
Just how much better than everyone else is she? Perhaps her comparative advantage is in creating a power company. Spend early revenue on recursively improving (ie. research that is money limited) sparkle → electricity conversion then spend later revenue on hiring people to do FAI research so she can maintain and consolidate her overall advantage as technology makes sparkle power obsolete.
Unfortunately for the rest of us the FAI creates an environment that degenerates into a Hansonian Hell (then further into mere cosmic commons burning). If it behaved like a FAI and did the smart thing and became a singleton the market economics fairy would disintegrate into a puff of vapor—presumably not part of her extrapolated volition. Once someone has won (secured control via overwhelming intelligence advantage) ‘Market Economics’ becomes nothing more than a charade. Yet maintaining an environment where market economics hold sway ensures a steady evolution towards more efficient competition which will tend toward one of two obvious local minima (burn the earth or, more likely, burn the light cone, depending on whether the leap to interstellar is viable for anyone at any point in the economic competition.)
The Market Economics Fairy must (eventually) die or we will!
(Pardon the Newsomlike tangential stream. It seems relevant/interesting/important to me at least!)
Who do you think is behind Ayn Rand?
You’re missing the unstated corollary to this, or any other discussion of scalpers: ‘and prices have to be “reasonable” for whatever demographic we claim to serve or would prefer to serve’.
Hence, you get discussions of young girl singers unhappy that all these icky old men are paying hundreds of dollars for the tickets to her concert, even though the market doesn’t clear at the $40 or $60 her preteen fans can spare. (And if an organization does let the price float to its natural level of hundreds of dollars, then you get shocked articles in the newspaper on ‘ticket inflation’ and angry letters to the editor about how in their day you could get in for a nickel...)
I agree that ticketing is a difficult problem, but getting rid of scalping is easy if that’s your primary objective. Pricing the externalities of event-goers is tough, especially when anti-discrimination legislation means you generally can’t be upfront about it.
So there is the problem: The ideal of non-discrimination is not compatible with cases where the demographics of event-goers is itself a strong influence on the quality of the event for everyone involved.
I don’t get the impression that getting rid of scalping is easy at all. What do you have in mind?
In the ancestral post, I recommend auctioning off the tickets. This ensures that the people who are willing to pay the most get the tickets, dramatically reducing the demand and increasing the risk for scalpers (if I buy a $20 ticket to a show I expect to sell out, a price decline is unlikely, and even if it happens it’s probably only a few bucks per ticket. If I buy a $500 ticket to a show I expect to sell out, a price decline could wipe me out).
Now, you could still have people buying tickets at auction to sell at the door to people who weren’t prepared, but that won’t be a moral issue since you’ve already established that the tickets go to the highest bidder.
gwern rightly points out that this doesn’t always deliver the best experience. Good first approaches to diversity are quotas and subsidies. They might offer Burning Man attendance at historical prices to people who have come previously, and then auction off a batch of tickets to new attendees, or give previous attendees vouchers which increase their bids by a set amount or a multiplier. (Content providers could even be paid for their trouble.) Whatever you decide you want to encourage, though, you’re better off working with the price system than against it.
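For concreteness, here is a minimal sketch of the allocation rule being proposed: auction the fixed supply to the highest bidders at a uniform clearing price, with an optional voucher multiplier for returning contributors. This is my own illustration, not anything the organizers or the commenter specified; the function name, bidder names, and numbers are invented.

```python
# A minimal, illustrative sketch (not the organizers' actual mechanism) of the rule
# described above: a fixed number of tickets go to the highest bidders, every winner
# pays the same clearing price, and past contributors can get a voucher multiplier.

def allocate_tickets(bids, num_tickets, voucher_multiplier=1.0, voucher_holders=frozenset()):
    """bids: dict mapping bidder -> offered price.
    Returns (winners, clearing_price) under a uniform-price rule:
    every winner pays the best rejected effective bid."""
    # Effective bid = raw bid, boosted for voucher holders (e.g. past theme-camp organizers).
    effective = {
        bidder: bid * (voucher_multiplier if bidder in voucher_holders else 1.0)
        for bidder, bid in bids.items()
    }
    ranked = sorted(effective, key=effective.get, reverse=True)
    winners, losers = ranked[:num_tickets], ranked[num_tickets:]
    clearing_price = effective[losers[0]] if losers else 0.0
    return winners, clearing_price

if __name__ == "__main__":
    bids = {"alice": 300, "bob": 450, "carol": 280, "dave": 500, "erin": 320}
    winners, price = allocate_tickets(
        bids, num_tickets=3, voucher_multiplier=1.5, voucher_holders={"carol"}
    )
    print(winners, price)  # ['dave', 'bob', 'carol'] 320
```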
Would public hostility really result in lower profits than just selling at the market equilibrium price? If I did not know about the actual amount of scalping that happens, I would be very surprised to learn that tickets are priced so far below equilibrium.
The public hostility is clearly a negative of some kind; whether it actually reduces net lifetime discounted income or some metric like that, you’d have to ask an economist.
But the artists clearly do want to avoid the true prices being in any way ascribable to them. An example: I read in an article somewhere of the lawsuits against Ticketmaster where apparently one of the revelations was that high powered acts were able to quietly demand shares of Ticketmaster’s ‘fees’ - this price increase was not perceived as a price increase by the act, but as Ticketmaster’s fault. They took the blame in exchange for the act using their services, basically. I would guess that Ticketmaster gets a bigger percentage of the ‘fees’ than they would get in a straight ticket price increase; this difference would represent Ticketmaster’s compensation for taking the heat. (And there was another bit, about acts demanding larger fractions of the tickets, which they would quietly sell at premium prices—but without the public opprobrium accompanying official prices that high.)
This post seems relevant.
There are a lot of people in the entertainment industry and they tend to want to make money. Shouldn’t they know the answer and act upon it by now?
Hostility might not be the only risk. If you want to have fans for an extended period, you’d do well to attract young people—and they’re likely to not have as much money.
~ Collected Sayings of Muad’Dib, Irulan, Herbert elder
I’ve never been able to make sense out of that. It sounds very tough and definite, but what does it mean?
This is sort of what I say to remind myself that having read some of something isn’t a sufficient reason to finish it.
I pasted it into Google just now and found this article quoting it in a similar context.
I agree. It’s not… quite.… complete.
Let’s chop it off. (Let’s keep it at 0 points).
There, now it’s complete.
I guess it’s re-stating Antoine de Saint Exupéry’s “It seems that perfection is attained not when there is nothing more to add, but when there is nothing more to remove”.
The quote needn’t be taken as approving. Muad’Dib wanted to avoid the jihad he unleashed, even though he eventually came to see it as necessary. If you take it as neutral reporting of how the Fremen think, it could be taken as a comment on how circumstances shape your thinking, or as a caution against allowing no-longer-extant circumstances to constrain you.
Is this a recommendation or a warning?
Can’t it be both?
In this case that roughly translates to self-contradictory advice: do, and do not do. There are plenty of quotes that make just as much sense when reversed, and in such cases the quotes themselves contain very little information; any actual wisdom must be entirely embedded in the algorithm that selects which quoted meaning to apply in which case.
You can’t simultaneously say “aim higher on the margin” and “aim lower on the margin”, but you can say “don’t aim too high” and “don’t aim too low”—or more simply “mind your aim point”. It is entirely possible that people miss on both sides and they are simply not being careful enough to avoid either extreme.
Consider it a recommendation to be aware of the trade off, not a recommendation to bias your decisions in any particular direction.
Upvoted because I actually think this phrase as my reminder-keyword on appropriate occasions. E.g. publishing an MOR chapter.
Reddit user sciencecomic, in response to a headline reading “‘Why Religion Is Natural and Science Is Not’. Emory philosopher Robert McCauley suggests that science is more fragile than we think while religion more resilient – all for reasons coming back to humans’ cognitive processes.”
You should include a link.
Done.
-a kid named Noah. (Hat-tip to Yvain.)
Original post.
It was found stuck underneath a metal bench at an elementary school bus stop.
Where did you read that he was five?
I definitely wasn’t that literate as a five year old.
Fixed.
Leonardo da Vinci
A poem about decision trees:
Michael Rothkopf
This song has been instrumentally useful to me in more ways than one...
-Andrew W. Mathis
Or potentially good luck if the combination of your instincts and the (irrationally justified) memes you inherited from tradition are better than your abstract decision making.
Or maybe some non-negligible subset of superstitions give good luck because they’re in fact rationally justifiable.
Or because their signalling (or countersignalling) value outweighs their instrumental disadvantages.
Or, while we are at it, superstitions held by those with a generally optimistic outlook will tend towards ‘good luck’ superstitions and so result in greater exploitation of potential opportunities.
Is this not approximately the same thing that I said, just changed from: “It is good luck to ” to “There is a class X of strategies that represent good luck” and a truncation of some causal details regarding the selection process? (That is, is my meaning not clear?)
Mere difference in connotation. I attribute my good luck to the gods, and would be annoyed at the implication that such an attribution is irrational justification obscuring my good luck’s allegedly-actual origin in my optimistic outlook or whatever. By my lights some non-negligible subset of superstitions give good luck due to the combination of one’s instincts, cultural inheritance, and also, quite crucially, the help of the gods. This is compatible with what you said, but I wanted to emphasize the importance of the gods, without which I suspect many superstitions would be pointless. It’s true that, as you imply, maybe even in the absence of gods superstitions would still be adaptive, but I’m less sure of such a counterfactual than of this world where there are in fact gods.
I’m afraid I must disagree with your connotation now that it is explicit and for the following reason:
No, the problem isn’t with the whole “gods exist” idea. Rather, given that gods (counterfactually) exist, rational and justified belief in them, and behaving in a way that reflects that belief, is not superstition. It’s the same as acting as though quarks exist. When those crackpots who don’t believe in gods (despite appearing to be on average far more epistemically rational in all other areas, and appearing to have overwhelming evidence with respect to this one) call you superstitious for behaving as an agent who exists in the actual world, they are mistaken.
This is a dispute over definitions then? On your terms then what should I call the various cognitive habits I have about not jinxing things and so on? (I don’t think the analogy to quarks holds, because quarks aren’t mysterious agenty things in my environment, they’re just some weird detail of some weird model of physics, whereas gods are very phenomenologically present.) It seems there is a distinct set of behaviors that people call “superstition” and that should be called “superstition” even if they are the result of epistemically rational beliefs. The set of behaviors is largely characterized by its presumption of mysterious supernatural agency. I see no reason not to call various of my cognitive habits superstitions, as it’d be harder to characterize them if I couldn’t use that word. This despite thinking my superstitions have strong epistemic justification.
That, and how the abstract concepts represented by them interact with the insight underlying the quote. Oh, and underneath that and causing the disagreement is a fundamental incompatibility of view of the nature of the universe itself which is in turn caused by, from what you have said in the past, a dispute over how the very act of epistemological thinking should be done.
What’s the nature of the difference? I figure we both have some sort of UDT-inspired framework for epistemology, bolstered in certain special cases by intuitions about algorithmic probability, and so any theoretical disagreements we have could presumably be resolved by recourse to such higher level principles. On the practical end of course we’re likely to have somewhat varying views simply due to differing cognitive styles and personal histories, and we’ve likely reached very different conclusions on various particular subjects for various reasons. Is our dispute more on the theoretical or pragmatic side?
I can only make inferences based on what you have described of yourself (for example ‘post-rationalist’ type descriptions) as well, obviously, as updates based on conclusions that have been reached. Given that the subject is personal I should say explicitly that nothing in this comment is intended to be insulting—I speak only as a matter of interest.
I think UDT dominates your epistemology more than it does mine. Roughly speaking UDT considerations don’t form the framework of my epistemology but instead determine what part of the epistemology to use when decision making. This (probably among other things that I am not aware of) leads me to make less drastic conclusions about fundamental moralities and gods. Yet UDT considerations remain significant when deciding which things to bother even considering as probabilities in such a way that the diff of will/wedrifid’s epistemology kernel almost certainly remains far smaller than wedrifid/average_philosopher.
Yes, most of our thinking is just a bunch of messy human crap that could be ironed out by such recourse.
A little of both I think? At least when I interpret that at the level of “theories about theorizing” and “pragmatic theorizing”. Not much at all (from what I can see) with respect to actually being pragmatic.
But who knows? Modelling other humans internal models is hard enough even when you are modelling cookie cutter ‘normal’ ones.
(I don’t know if this at all interests you, but I feel like putting it on the record:) It’s true my intuitions about decision theory are largely what drive my belief in objective morality a.k.a. the Thomistic/Platonic God a.k.a. objectively-optimal-decision-theory a.k.a. Chaitin’s omega, but my belief in little-g gods is rather removed from my intuitions about decision theory and is more the result of straightforward updating on observed evidence. In my mind my belief in gods and my belief in God are two very distinct nodes and I can totally imagine believing in one but not the other, with the caveat that if that were the case then God would have to be as the Cathars or the Neoplatonists conceptualized Him, rather than my current view where He has a discernible “physical” effect on our experiences. I’m still really confused about what I should make of gods/demons that claim to be the One True God; there’s a lot of literature on that subject but I’ve yet to read it. In the meantime I’d rather not negotiate with potential counterfactual terrorists. (Or have I already consented to negotiation without explicitly admitting it to myself? Bleh bleh bleh bleh...)
I’m very confused* about the alleged relationship between objective morality and Chaitin’s omega. Could you please clarify?
*Or rather, if I’m to be honest, I suspect that you may be confused.
A rather condensed “clarification”: “Objective morality” is equivalent to the objectively optimal decision policy/theory, which my intuition says might warrant the label “objectively optimal” due to reasons hinted at in this thread, though it’s possible that “optimal” is the wrong word to use here and “justified” is a more natural choice. An oracle can be constructed from Chaitin’s omega, which allows for hypercomputation. A decision policy that didn’t make use of knowledge of ALL the bits of Chaitin’s omega is less optimal/justified than a decision policy that did make use of that knowledge. Such an omniscient (at least within the standard models of computation) decision policy can serve as an objective standard against which we can compare approximations in the form of imperfect human-like computational processes with highly ambiguous “belief”-”preference” mixtures. By hypothesis the implications of the “existence” of such an objective standard would seem to be subtle and far-reaching.
The decisions produced by any decision theory are not objectively optimal; at best they might be objectively optimal for a specific utility function. A different utility function will produce different “optimal” behavior, such as tiling the universe with paperclips. (Why do you think Eliezer et al are spending so much effort trying to figure out how to design a utility function for an AI?)
I see the connection between omega and decision theories related to Solomonoff induction, but as the choice of utility function is more-or-less arbitrary, it doesn’t give you an objective morality.
His point is that if I fix your goals (say, narrow self-interest) the defensible policies still don’t look much like short-sighted goal pursuit (in some environments, for some defensible notions of “defensible”). It may be that all sufficiently wise agents pursue the same goals because of decision theoretic considerations, by implicitly bargaining with each other and together pursuing some mixture of all of their values. Perhaps if you were wiser, you too would pursue this “overgoal,” and in return your self-interest would be served by other agents in the mixture.
While plausible, this doesn’t look super likely right now. Will would get a few Bayes points if it pans out, though the idea isn’t due to him. (A continuum of degrees of altruism have been conjectured to be justified from a self-interested perspective, if you are sufficiently wise. This is the most extreme, Drescher has proposed a narrower view which still captures many intuitions about morality, and weaker forms that still capture at least a few important moral intuitions, like cooperation on PD, seem well supported.)
The connection to omega isn’t so clear. It looks like it could just be concealing some basic intuitions about computability and approximation. It seems like a way of smuggling in mysticism, which is misleading by being superfluous rather than incoherent.
But how does an agent introduce its values in the mixture? The agent is the way it decides, so at least in one interpretation its values must be reflected in its decisions (reasons for its decisions), seen in them, even if in a different interpretation its decisions reflect the mixed values of all things (for that is one thing the agent might want to take into account, as it becomes more capable of doing so).
Why do I write this comment? I decided to do so, which tells something about the way I decide. Why do I write this comment? According to the laws of physics. There seems to be no interesting connection between such explanations, even though both of them hold, and there is peril in confusing them (for example, nihilist ethical ideas following from physical determinism).
Presumably by what its action would have been, if not for the relationship between its actions and the actions of the other agents in the mixture.
I agree that the situation is confused at best, but it seems like this is a coherent picture of behavior, if the mechanics remain muddy.
Your comment did clarify for me what Will was talking about. This is an important confusion (to untangle).
Agent’s counterfactual actions feel like a wrong joint to me. I expect agent’s assertion of its own values has more to do with the interval between what’s known about the reasons for its decisions (including to itself, where introspection and mutual introspection is deep) and the decisions themselves, the same principle that doesn’t let it know its decisions in advance of whenever the decisions “actually” happen (as opposed to being enacted on precommitments). In particular, counterfactual behavior can also be taken as decided upon at some point visible to those taking that property (expression of values) into account.
I don’t accept Will’s overall position or reasoning but this particular part is relatively straightforward. It’s just the same as how anyone negotiates. In this case the negotiation is just a little… indirect. (Expanded below.)
An agent’s decisions are determined by its values, but this relationship is many to one. For any given circumstances that an agent could be in, all sorts of preferences will end up resolving to the same decision. If you decide to throw away all that information by considering only what can be inferred from the resultant decision, then you will end up wrong. More importantly, this isn’t what the other agents will be doing, so you will be wrong about them too.
Consider the coordination game described by paulfchristiano, adopted by Will as a metaphysics of morality, which I’ll treat as a counterfactual:
There are a bunch of agents located beyond the range at which they can physically or causally interact. (This premise sometimes includes altogether esoteric degrees of non-interaction.)
The values of each agent includes things that can be influenced by the other agents.
The agents have full awareness of both the values and decision procedures of all the other agents.
It is trivial* to see that this game is equivalent to a simple two-party prisoner’s dilemma with full mutual information. Each agent calculates the most efficient self-interested bargains that could be made between them all and chooses either to act as if those bargains have been made or not, depending on whether it (reliably) predicts that the other agents do likewise.
For all the agents, when we look at their behavior we see them all acting equivalently to whatever the negotiated outcome comes out to. That tells us little about their individual values—we’ve thrown that information away and just elected to keep “Cooperate with negotiated preferences”. But the individual preferences have been ‘thrown into the mix’ already, back at the point where each of the agents considers the expected behavior of the others. (And there is no way that one of the agents will cooperate without its values in the mix, and all the agents like to win, etc., etc., and a lot more ‘trivial’.)
I don’t accept the premises here and so definitely don’t accept any ‘universal morality’, but the “But how does an agent introduce its values in the mixture?” question just isn’t the weak point of the reasoning. It’s a tangent, and to the extent that it is presented as an objection it is a red herring.
* In the come back 20 minutes later and say “Oh, it’s trivial” sense.
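A toy sketch (my own, purely illustrative) of the reduction described above: each agent enacts the negotiated bargain if and only if it predicts every other agent will do likewise. To keep it runnable, “prediction” here is just inspection of the other agents’ declared policies, standing in for the full mutual information assumed in the premises.

```python
# Toy model of "cooperate iff you (reliably) predict the others do likewise".
# Prediction is modeled as direct inspection of the other agents' policies,
# which stands in for the premise of full mutual information.

from dataclasses import dataclass

@dataclass(frozen=True)
class Agent:
    name: str
    policy: str  # "negotiated" = enact the efficient bargain; anything else = act unilaterally

def decide(agent, others):
    """Enact the bargain iff this agent and every other agent run the negotiated policy."""
    if agent.policy == "negotiated" and all(o.policy == "negotiated" for o in others):
        return "enact bargain"
    return "act unilaterally"

agents = [Agent("A", "negotiated"), Agent("B", "negotiated"), Agent("C", "negotiated")]
for a in agents:
    print(a.name, decide(a, [o for o in agents if o is not a]))
# Change any one policy to "defect" and every agent reverts to unilateral action.
```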
It only reduces to/is equivalent to a prisoner’s dilemma for certain utility functions (what you’re calling “values”). The prisoners’ dilemma is characterized by the fact that there is a dominant strategy equilibrium which is not Pareto optimal. But if the utility functions of the agents are such that the game is zero-sum, then this can’t be the case, as every outcome is Pareto optimal in a zero-sum game.
Furthermore, in a zero-sum game, no cooperation between all of the agents is possible. So it’s crazy to believe that an arbitrary set of sufficiently intelligent agents will cooperate to achieve a single “overgoal”. Collaboration is only possible if the agents’ preferences are such that collaboration can be mutually beneficial.
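A small illustrative check of the claim that every outcome of a zero-sum game is Pareto optimal (my own example, not from the thread): any change that raises one player’s payoff lowers the other’s by the same amount, so no outcome Pareto-dominates another.

```python
# Verify, for one arbitrary 2x2 zero-sum game, that no outcome Pareto-dominates another.
import itertools

row_payoffs = [[3, -1], [0, 2]]  # row player's payoffs; column player's are the negation

outcomes = [(row_payoffs[i][j], -row_payoffs[i][j])
            for i, j in itertools.product(range(2), range(2))]

def pareto_dominates(a, b):
    """a dominates b if a is at least as good for both players and strictly better for one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

dominated = [b for a, b in itertools.permutations(outcomes, 2) if pareto_dominates(a, b)]
print(dominated)  # [] -- every outcome is Pareto optimal
```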
Yes, this entire setup is based around scenarios where there is benefit to cooperation. In the edge case where such benefit is ‘0 expected utilons’, the behavior of the agents will, unsurprisingly, not be changed at all by the considerations we are talking about.
So I should interpret Will’s “Omega = objective morality” comment as meaning “sufficiently wise agents sometimes cooperate, when cooperation is the best way to achieve their ends”? I don’t think so.
No. Will thinks thought along these lines then goes ahead and bites imaginary bullets.
I don’t think that’s a very good model. Also, I’m curious: what’s your impression of this quote?
Worse than useless.
I didn’t intend to suggest throwing out information: a “public” decision, the action, is not the only decision that happens, and there is also the “a priori” of the agent’s whole initial construction. Rather, my point was that there is more to an agent’s values than just the agent as it’s initially presented, with its future decisions marking the points where additional information (for the purposes of other decisions that coordinate) is being revealed, even if those decisions follow deterministically from the agent’s initial construction.
I’m not sure if I’d get many Bayes points for my beliefs, rather than just my intuitions; after taking into account others’ intuitions I don’t think I think it’s that much more plausible than others think it is.
I wish I could respond to the rest of your comment but am too flustered; hopefully I’ll be able to later. What stands out as a possible misconstrual-with-a-different-idea is that I’m not sure this idea of selfness, as in narrow self-interest, even makes sense. If it does make sense then my intuition is probably wrong, for the same reason various universal instrumental value hypotheses are probably wrong.
Well, one can certainly talk about agents who have what we might describe as “narrow self-interest,” though I don’t really care about the distinction between self-interest and paperclipping and so on, which do seem to be well-defined.
E.g., whenever I experience something I add it to a list of experiences. I get a distribution over infinite lists of experiences by applying Solomonoff induction. At each moment I define my values in terms of that, and then try and maximize them (this is reflectively inconsistent—I’ll quickly modify to have copy-altruistic values, but still to something that looks pretty self-interested).
Are you claiming that this sort of definition is incoherent, or just that such agents appear to act in service of universal values once they are wise enough?
If “wise enough” is taken to mean “not instantaneously implosive/self-defeating” and “universal values” is taken to mean “decision problem representations robust against instantaneous implosion/self-defeat”, then the latter option, but in practice that amounts to a claim of incoherence; in other words the described agent is incoherent/inconsistent and thus its description is implicitly incoherent upon naive interpretation. Or to put it differently, I’m still not convinced it’s possible to build an AGI with a naive “prove the Goldbach conjecture”-style utility function, and so I’m hesitant to accept the validity of admittedly common sense reasoning that takes Goedel machine or AIXI-style architectures at face value as premises.
This carving up of the problem in such a way that “universal values” stands out as a thing seems wrong to me; the most obvious way of interpreting “universal values” is in some object-level way that connotes deluding one’s “self” into seeing Pareto improvements that don’t exist or deluding one’s “self” into locating/defining one’s “self” via some normatively unjustified process.
I can write down TDT agents who have preferences about abstract worlds, e.g. in which the agent is instantiated on an ideal Turing machine and utility is just defined in terms of mathematical properties of the output (say, whether it is a proof of the Goldbach conjecture) and the running time.
Is the objection before or after this point?
I can write down TDT agents who care about the number of 1s in universally distributed sequences agreeing with their observations so far (as I remarked above). Do you think this agent definition implodes, or that the resulting agents just don’t act as self-interested as they look like they would? (Particularly I’m talking about the ones who actually are in simple universes, so who can quickly rule out concerns about simulators, and who don’t rely on others’ generosity).
(I’m trying to repeat things in many different ways so as to increase the chance that I’m understood; apologies if the repetition is needless.)
Before, but again my objection is sort of orthogonal to the way you’ve set up the scenario. When you say you can write down TDT “agents” I don’t believe you. I believe you can write down specifications of syntax-manipulating algorithms that will solve tic tac toe or other narrow problems just fine, and I of course believe that it’s physically possible to call such algorithms “agents” if such a fancy appeals to you, but I don’t confidently believe that they are or could ever be real agents in the way that word is commonly interpreted. (“Intelligence, to be useful, must be used for something other than defeating itself.”) You can interpret such a syntax manipulator as an agent to the extent that you can interpret the planet Saturn as an agent, but this is qualitatively different from talking about real agentic things like humans or gods, and I’m worried about pivoting on this word “agent” as if conclusions drawn in one domain can be routinely expected to work for the other. There is some math about an abstract thing called expected utility, and we can use roughly that conceptual scheme to conveniently label certain syntax-manipulating algorithms or to roughly carve up the world as we see it, but this doesn’t mean that things like “beliefs” or “preferences” actually exist out there in the world in any reliable metaphysical sense such that we can be confident of our application of them beyond their intended purview. So when you say:
I don’t know how to interpret this question in a way that I’m confident makes sense. I certainly want to know how to interpret it but would have to think about it a lot longer. Perhaps if I was more familiar with both the relevant arguments from the formal epistemology literature and the philosophy of mind literature then I would be able to confidently interpret it.
This does help with clarity.
So I can write down these formal symbol-manipulating algorithms, that look to a naive onlooker like they will do things like keep to themselves and prove the Goldbach conjecture. We can talk about the question of fact: if we run such an algorithm on a Turing machine (made of math), would it in fact output a proof of the Goldbach conjecture? And then we can talk about the other question of fact, which seems to be equivalent unless you dispute some very fundamental claims: if we simulate that computation on a real computer, will it in fact output a proof of the Goldbach conjecture?
It seems like one could try and cut this sort of reasoning at three points, if you accept it so far: either it breaks down when the goals get complicated, it breaks down when the reasoning gets hard, or it breaks down when the algorithm’s embedding in the environment is too complicated.
If you accept that these algorithms systematically do things that lead to their apparent “goals” being satisfied (so that we can predict outcomes using this sort of reasoning), then I don’t know what exactly you are arguing.
I thought about it some more and remembered one connection. I’ll post it to the discussion section if it makes sense upon reflection. The basic idea is that Agent X can manipulate the prior of Agent Y but not its preferences, so Agent X gives Agent Y a perverse prior that forces it to optimize for the preferences of Agent X. Running this in reverse gives us a notion of an objectively false preference.
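A toy numerical illustration of the mechanism described above, with all states, actions, payoffs, and priors being made-up assumptions rather than anything from the thread: an expected-utility maximizer’s choice can be steered entirely through the prior it is handed, without touching its utility function.

    # Toy sketch (made-up payoffs): Agent Y maximizes expected utility under
    # whatever prior it is handed, so an agent controlling Y's prior can steer
    # Y's choice without modifying Y's preferences.

    states = ["s1", "s2"]
    actions = ["a1", "a2"]

    # Y's utility over (action, state) -- fixed, not under X's control.
    U_Y = {("a1", "s1"): 1.0, ("a1", "s2"): 0.0,
           ("a2", "s1"): 0.0, ("a2", "s2"): 1.0}

    def best_action(prior):
        """Y's choice: maximize expected utility under the given prior."""
        return max(actions, key=lambda a: sum(prior[s] * U_Y[(a, s)] for s in states))

    honest_prior   = {"s1": 0.9, "s2": 0.1}   # under this prior Y picks a1
    perverse_prior = {"s1": 0.1, "s2": 0.9}   # if X prefers a2, X hands Y this prior

    print(best_action(honest_prior))    # a1
    print(best_action(perverse_prior))  # a2 -- Y now serves X's preference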
Unfortunately I think you’ll have to familiarize yourself more with the existent decision theory literature here on LW or on the decision theory mailing list in order to understand what I’m getting at. I’m already rather familiar with the standard arguments for FAI. If you’re already a member of the decision theory list then the most relevant thing to read would be Nesov’s talking about decision processes splitting off into coordinated subagents upon making observations. That at least hints in the right direction.
(I have no idea what Will is talking about; I don’t even see which things I wrote on the list he is referring to.)
Edit: Both issues now resolved, with Paul clarifying Will’s point and Will explicitly linking to the decision theory list post.
(“A note on observation and logical uncertainty”, January 20, 2011.)
(I am mildly surprised that you have no idea what I’m talking about even after having read the thread I linked to that hints at the intuitions behind a creatorless decision theory. It’s not a very complicated idea, even if it might look uncomfortably like some hidden agenda promoting values deathism.)
(I still don’t see how that note could be recognized from the information you provided. Thank you for some clarity, I only wish you’d respect it more. I also remain ignorant about how the note relates to what you were discussing, but here’s an excuse to revisit that construction.)
The note in question mostly talks about a way in which an observation can shift an agent’s focus of attention without changing its decision problem or potential state of knowledge. The agent’s preference stays the same.
The decision in question is where an agent focuses on seeing the implications of a particular observation (that is, gets to infer more in a particular direction, using the original premises), while mostly ignoring the implications of alternative observations (that is, inferring less from the same premises in other directions), thus mostly losing track of the counterfactual worlds where the observation turns out differently, leaving those worlds to its alternative versions. In doing so, the agent loses coordination with its versions in those alternative worlds, so its decisions will now be more about its own individual actions and not (or less) about the strategy coordinating it with the counterfactual versions of itself. In return, it gains more computational resources to devote to its particular subproblem.
This is one sense in which observations can act like knowledge (something to update on, to focus on the implications of) without getting more directly involved in the agent’s reasoning algorithm, so that we can keep an agent updateless, in principle able to take counterfactuals into account. In this case, the agent is rather more computationally restricted than what UDT plays with, and it’s this restriction that motivates using observations in an updating-like manner, which is possible to do in this way without actually updating away the counterfactuals.
It is a compulsion of mine that given a choice between giving zero information and giving a small amount of information I must give a small amount or feel guilty for not even having tried to do the right thing. Likely leads to Goodhartian problems. I don’t have introspective access to the utility calculus that resulted in this compulsion.
E.g. in this case: Bla bla additive utility versus multiplicativeish “belief” self-coordination versus coordination with others computational complexity bla. Philosophy PSR and CFAI causal validity blah, Markovian causality includes formal/final causes. Extracting bits of Chaitin’s constant from “environment” bla. Bla don’t know if at equilibrium with respect to optimization after infinite time, unclear whether to act as if stars are twenty dollar bill on busy street or not.
Re friendliness, Loebian problems might cause collapse of recursive Bayes AI architectures via wireheading and so on, Goedel machine limits with strength of axioms, stronger axiom sets have self-reference problems. If true this would change singularity strategy, don’t have to worry as much about scary AIs unless they can solve Loebian problems indirectly.
ETA: Accidentally hit comment before editing/finishing but I’ll accept that as a sign from God.
False dichotomy. In the same number of words you could be communicating much more clearly.
I was curious actually. I had a fair idea of the general background for the objective morality belief but the basis for the belief in gods was somewhat less clear. I did assume that you had a more esoteric/idiosyncratic basis for the belief in gods than straightforward updating on observed evidence so in that respect I’m a little surprised.
By my way of thinking, you (and I) have already engaged in the counterfactual negotiation by the act of considering the possibility of such a negotiation and deciding what to do; but by implementing the underlying principle behind “I don’t negotiate with terrorists”, our deciding not to negotiate is equivalent to a non-counterfactual negotiation in which we unequivocally stonewall, which is functionally equivalent to not having considered the possibility in the first place.
(One of the several fangs of Roko’s Basilisk represents an inability in some people to casually stonewall like this in the negotiation that is implicit in becoming aware of the simple thought that is the basilisk.)
Rationality promotion:
-- Nate Silver, today’s 538 blog
http://fivethirtyeight.blogs.nytimes.com/2012/02/08/g-o-p-race-has-hallmarks-of-prolonged-battle/
The original even linked to the wikipedia entry on “Bayesian”.
David Deutsch, The Beginning of Infinity
-- Ronald E. Merrill
(The brackets around “vertebrates” are just for a spelling correction.)
This sounds radical but is if anything far too conservative.
Intelligence and tool using has for millennia allowed us to apply selection pressures which are much more focused than natural selection, and now also allows us to directly edit genetic material in ways which would be slower or even impossible via random mutation alone. Intelligence also allows for the generation, mutation, and replication of ideas, which end up having a much greater, much more rapidly changing impact on ourselves and our environment than the variation in our genes alone.
Those aren’t changes comparable to the difference between breathing water and breathing air; they’re changes comparable to the difference between non-life and life. The very idea of biological clades becomes more and more fuzzy when we make horizontal gene transfer a regular fact of life for even complex organisms, intermixing DNA from species that haven’t had a common ancestor in a billion years.
Theodore Dalrymple
What a cliffhanger.
Cracked, 4 Reasons Humans Will Never Understand Each Other
I had already read about the ideas of that Cracked article in the Sequences (http://lesswrong.com/lw/i7/belief_as_attire/, http://lesswrong.com/lw/9v/beware_of_otheroptimizing/, http://lesswrong.com/lw/i0/are_your_enemies_innately_evil/), but I still found it awesome.
A bit long for a quote. Might have been a good summary for a discussion post link.
Daniel Dennett, Elbow Room, (Control and Self-Control)
--Thomas Hobbes, Leviathan
--William James, The Will to Believe II
I like this William James quote and some others, but I guess LW doesn’t, considering this comment’s score. I could speculate on it as much as I want, but I don’t know why.
Edited for wedifrid’s uncharitable objection.
It is conceivable that people vote based on quotes and not just the author the quote is attributed to!
-Sun Tzu, The Art of War
quoted from here in that particular form
— Arkady and Boris Strugatsky
Mencius Moldbug
Everything after “If so—definitely, keep it. If not...” is (a) context-dependent and (b) debatable.
John Leslie, The End of the World, p. 242 (paperback)
(He is not talking about trials in the “randomized controlled trial” sense but rather in the sampling sense.)
-Bengali proverb
I’ve heard a theory that half truths told with intent to deceive are more damaging than outright lies because if someone is deceived, they’re more likely to blame themselves.
Also, you’re more likely to notice that an outright lie is false.
Douglas Murray describing advice from a Holocaust survivor.
Perhaps this should be checked by comparing the number of people who say they want to annihilate a group to the number of attempts at annihilation.
True, but you should first assign appropriate weights to the two categories you mention based on the expected cost of having an incorrect belief.
This seems obviously correct, but at the same time it seems at odds with the virtue of evenness.
No, the weight factors into an expected utility calculation, it’s separate from the probability calculation. Miller didn’t say otherwise.
BTW, the opening three comments of this thread would make a great introduction to what the LW website is all about.
Aha. The original claim was that one should believe them, so I thought the weights were supposed to bear upon that question.
In that case, which expected utility calculation are you referring to? Or are you claiming that believing a proposition is more than a matter of the probability calculation?
At a minimum, you could include estimates of the ability to carry out the threat in your calculations.
I don’t attempt everything I want to do, either. But the number who try to do so given the opportunity...
Just for fun: similar advice based on British folk ballads.
Daniel Kahneman, Thinking, Fast and Slow
— Will Durant, Life, Oct. 18, 1963
George Orwell
He’s mistaken about math and physics, possibly because he didn’t expect his ideas on the subjects to be tested against solid reality....
I write only when inspiration strikes. Fortunately it strikes every morning at nine o’clock sharp.
-- W. Somerset Maugham
Ohh man, that would be convenient… Actually, given my current schedule, it’d be pretty irritating. I’d spend my mornings sitting in class, fuming that I couldn’t just leave and go write all day.
I think what he meant is: sit down and get to work on a regular schedule, “inspired” or not. Cf. this.
-- Nicholas Gurewitch (creator of Perry Bible Fellowship)
-Retsupurae
That’s exactly how the character “The Sphinx” in the film “Mystery Men” delivered all his wise-sounding lines. Eventually it becomes a bit predictable to the D-list “superhero” characters that he’s trying to serve as a mentor to.
Edit: See DSimon’s reply for the dialogue.
[...]
[...]
[...]
Thanks. :)
-- Attributed to Gregory Cochran
On intellectual hipsters.
I would be very interested if anyone has good examples of this phenomenon.
There are a few “triads” mentioned in the intellectual hipster article, but the only one that really seems to me like a good example of this phenomenon is the “don’t care about Africa / give aid to Africa / don’t give aid to Africa” triad.
Well, the “dumb” (and uneducated) explanation of airfoil lift is that wings push air downwards.
The slightly less dumb people get exposed to bits and pieces of the products of thought of very, very smart people, which they completely don’t understand and absolutely can’t use for reasoning. But they want to be smart. So they come up with the explanation that the air on top of the wing must match up with the air on the bottom, but the path is longer, so it must go faster, and so, by the Bernoulli effect, there’s lift. They are reduced from dumbly talking in dumbspeech to incoherently babbling in smartspeech.
The actually smart people’s explanation is that wings push air downwards (and also pull it downwards).
The reasoning tools made by really smart people for really smart people are a memetic hazard to semi-smart, slightly educated people, but not so much to uneducated people, in much the same way that power tools made for adults are a huge hazard to children who can open the cabinet, but not to infants. If we meet super-smart aliens and they just dump knowledge, the results for the really smart people might well be exactly the same.
-- Scott Aaronson, in this blog post, reaching out to the pointy-haired bosses of the quantum computing world.
--Thomas Hobbes, Leviathan
I think there’s more to it than that. To label an opinion heresy is to claim that it deviates from the majority opinion, whether or not that is actually the case.
Sort of related: The Bolsheviks were clever to call themselves Bolsheviks; the Mensheviks probably outnumbered them at the time of the split, but failed to contest the nomenclature.
The Bolsheviks had a majority at the party congress where the split occurred. The Mensheviks were a loosely organized group of study circles. They included all sorts of “members” who weren’t actually active. They might have had more members, but they defined “members” differently, and that definition was in fact the main basis for the original split with the Mensheviks.
I think that’s part of the meaning of “private opinion” in the quote. If someone agrees with the majority, they don’t have their own private opinion.
Andrew Tanenbaum
Tony Dye
From your link:
“Bit meters per second” or “megabyte kilometers per hour” would be a better measure than just “bits per second”.
Are there useful generalizations which can be derived from this?
“Shut up and multiply” works for practical purposes too.
(One of my favorite shut-up-and-multiply results: automatic dishwashers cost less than 2 euro per hour saved, so everyone should have one.)
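A minimal back-of-the-envelope sketch of how such an estimate might go; every input number below is my own illustrative assumption, not a figure from the comment.

    # Rough "cost per hour saved" sketch for a dishwasher.
    # All numbers are illustrative assumptions.

    purchase_price = 400.0        # euro, assumed machine price
    lifetime_years = 10           # assumed service life
    loads_per_week = 5            # assumed usage
    running_cost_per_load = 0.30  # euro: electricity + water + detergent (assumed)
    minutes_saved_per_load = 25   # hand-washing time minus loading/unloading time (assumed)

    total_loads = lifetime_years * 52 * loads_per_week
    total_cost = purchase_price + total_loads * running_cost_per_load
    hours_saved = total_loads * minutes_saved_per_load / 60

    print(f"Cost per hour saved: {total_cost / hours_saved:.2f} euro")
    # With these assumptions: 2600 loads, ~1180 euro total cost, ~1083 hours saved,
    # i.e. roughly 1.1 euro per hour saved -- in the ballpark of the quoted figure.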
I live on a fixed income, so hourly wage isn’t a very relevant metric. It wouldn’t even fit in my place. I couldn’t take it with me when I move, and I move a lot.
Would even this [source] be too large? It’s only ~50lbs (~22 kg), so moving it should be possible. (This is not an endorsement of the specific machine or this class of machines, I didn’t look very closely.)
I can’t sell an extra hour either, but reverse the situation: would you be willing to wash dishes for an hour for $2? (If so, I have a few jobs for you that are harder to automate than dishwashing… ;-))
I’ve lived in apartments where this would not fit. And I don’t think I know anyone who, after finishing dinner, would actually go and earn money during the time they used to spend washing up.
Everyone in the western world, you mean? Because 2 euros per hour is much more than the minimum wage in many countries. Sorry for nit-picking, but forgetting that more than half of the world doesn’t live in as much comfort as we do is a frequent bias (probably a consequence of availability bias: we don’t see them as often).
True, but “everyone on LW” seems to be fairly defensible.
Dishwasher efficacy is variable. Where I live, the water is actually hard enough that I have to hand scrub most of the dishes I use because the dishwasher alone won’t clean them properly. It only barely takes me less time to get many of my dishes dishwasher-ready than to clean them entirely by hand.
You’re assuming away a lot of individual variation in time spent manually washing dishes.
If you download a LOT of old movies onto your PC, a truck full of old tapes heading towards you could be a great internet speed-up from your perspective.
Or a pizza delivery man: he could bring you some files in less time than email.
At least in principle, some “station wagons full of tapes”, cargo planes full of USB flash drives, and pedestrians running through the streets with massive data storage devices in their bags could all together supply the extra network bandwidth we need.
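A quick back-of-the-envelope illustration of why physically shipping storage can beat a network link on raw throughput; the tape count, capacity, and travel time below are assumptions for the example, and the trade-off, of course, is latency.

    # Effective throughput of shipping storage by truck, vs. a network link.
    # All figures are illustrative assumptions.

    tapes = 1000              # assumed number of tapes in the truck
    tape_capacity_tb = 1.0    # assumed capacity per tape, in terabytes
    travel_hours = 10         # assumed door-to-door travel time

    total_bits = tapes * tape_capacity_tb * 1e12 * 8
    seconds = travel_hours * 3600
    truck_bps = total_bits / seconds

    print(f"Truck throughput: {truck_bps / 1e9:.0f} Gbit/s")  # ~222 Gbit/s with these numbers
    print("For comparison, a fast home link: ~1 Gbit/s")
    # The catch is latency: the first bit only arrives after 10 hours.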
A few from M:TG flavour text.
When nothing remains, everything is equally possible. ~One with Nothing
“Believe in the ideal, not the idol.” -Serra ~Worship
“War glides on the simplest updrafts while peace struggles against hurricane winds. It is the way of the world. It must change.” ~Commander Eesha
I must admit that one of my favorite quotes from M:tG is one of the less rational ones:
-- Sizzle
-- Fodder Cannon
The card art of Browse gives this gem, which I think I may have posted before:
But the best flavor text ever is still Martyrs’ Tomb.
I don’t know, I find the Wall of Vapor quote inspirational, as well:
From Shattered Perception (Discard all the cards in your hand, then draw that many cards.):
I think this one takes the cake, in terms of rationality.
To a large extent it already has. Humans are much more peaceful now than they have been in the past. This is part of a large set of broad trends. See Pinker’s excellent “The Better Angels of Our Nature”. At this point, I’m not sure this quote is really accurate.
True in the sense that 0=0.
I understood it as advocating a maximum ignorance prior. In hindsight, it’s an M:tG card, so probably not.
Also I don’t recommend throwing out what you know to have a maximum ignorance prior.
Incidentally, the card itself is notorious for being among the most useless cards ever printed and routinely shows up on “worst card ever” lists.
~ Pat Wagner
Which occasions? If this were a rationality kata I would immediately ask, “What trigger condition does the person need to recognize that chains into using this technique?”
We will have to make the web better, then.
Who cares about “sometimes” when making a decision? What counts is the expectation, what happens on average.
Yes, sometimes investing all your savings in a single high-risk stock picked at random while drunk works better than listening to various experts, researching the relevant literature and diversifying your investments. That doesn’t mean it’s a good idea.
This quote seems to be losing its relevance, since even when I was a college senior you could get help from research librarians via web chat.
“Seek truth from facts”
--Chinese saying
http://en.wikipedia.org/wiki/Seek_truth_from_facts
Dindo Capello, as quoted in Truth in 24 (2009 film).
-- Helmuth von Moltke the Elder (1800-1891) (paraphrased)
When learning, you must know how to make the clear distinction between what is ideology and what is genuine knowledge.
There is no such thing as good and evil. There is what is right and what is bad, what is consistent and what is wrong.
-- “Behaviour Guide (in order to avoid mere survival)”, Jean Touitou
I like the first line.
The second line, though… what on Earth is the difference between “good” and “right” or between “evil” and “bad”? They mean the same thing; “good” and “evil” have just migrated to slightly higher-brow-sounding language.
I’m not trying to defend the quote, but there are no evil microscopes. There are useful microscopes and not useful microscopes.
I’m confused why the original quote contrasts right with bad, rather than with evil, but I think that’s what Touitou is trying to say.
Are these two different quotes, or were they juxtaposed like this in the original? (i.e. “You must distinguish between ideology and knowledge. → There is no such thing as good and evil.”)
BEHAVIOR GUIDE (in order to avoid mere survival) Intended for younger generations by JEAN TOUITOU
Although appearance shows quite the reverse the natural trend of the system is to turn you into a slave. Your mission is to remain erect and never crawl.
when learning, you must know how to make the clear distinction between what is ideology and what is genuine knowledge.
Be fully aware of the difference between making a compromise and compromising yourself.
Whatever happens, heart break hotel is sure to be your dwelling place, for one or several stays. This is no reason to overindulge in the pangs of love for too long.
Learn how to make simple and excellent meals.
Fear no gods, whatever appearance they may have.
For girls: all boys are more or less the same. For boys: all girls are different.
Keep well away from competitive sport that will only cause wounds that will make you suffer when you are over forty.
There is no such thing as good and evil. There is what is right and what is bad, what is consistent and what is wrong.
That is the entire original quote, but not all of it felt like it belonged here. It’s all part of the same piece, I think.
The first part seems rather applause-lighty; I think almost everyone agrees that we need to distinguish between ideology and fact. Actually doing so is the hard part, and the quote doesn’t provide any interesting insights into how best to go about doing that.
This is probably me projecting, but I took it to be about distinguishing between those which make claims about reality and those which don’t.
For example: If somebody says “You should be democratic, because the people have the right to rule themselves”—that’s not even claiming to be a fact, just an ethical position. If they say “You should be democratic, because democratic countries do better economically,” then that’s a claim about the real world, which I could even test if I wanted to.
In my admittedly limited experience, it seems that a lot of confusion in the greatest mind-killing subjects (politics and spirituality) comes from people not properly distinguishing between those two kinds of statements.
And that issue often becomes circular. People often have both ethical and factual reasons to take a political position, and they don’t clearly split them apart in their mind, each reason propagating to reinforce the other.
I’ll take a personal example: I oppose the death penalty for many reasons, but among them one is ethical (I don’t approve of voluntarily terminating a human life) and one is more factual (I believe as a fact, from various statistics, that the death penalty does not deter crime). But it requires a conscious effort from me (and I didn’t always make it, and I suspect many don’t) not to have each of the two reasons reinforcing the other in a feedback loop.
The interesting question is how you evaluate proposed big changes. Democracy has turned out to be a moderately good idea, but trying it out for the first few times was something of a leap in the dark.
There are reasons for thinking that democracy might work better than monarchy—generally speaking, a bad ruler can do more damage than not having a great ruler can do good, but is the theoretical reason good enough?
From what I heard, the person who established Athenian democracy did so after first overthrowing the previous ruler in a civil war, having concluded that becoming powerful was the best way to become a Great Man. He then reasoned that, since everyone should strive to be a Great Man, then everyone else would also be obliged to do the same thing he just did—which would mean endless civil wars. Which would be bad. So he came up with the clever solution of making everyone a ruler, so they could all be Great Men without having to kill each other first. Hence, democracy.
Or something like that, anyway. Wikipedia doesn’t say all that much, so I suspect that the story I remember is more story than actual history.
True; however, if I recall correctly, one of the lessons of The Teacher’s Password is that not everything is about the answer. A lot of the time I gain more from the question than from being served the answer directly. We need more insights anyway, so how DO we distinguish fact from ideology? People claim that the earth was created by God in 6 days, and others claim the Big Bang caused the creation of what we know as the universe, but since I haven’t discovered either of these on my own, how can I be sure that either is true?
By looking at the views of those who have been right about this sort of thing in the past, i.e., physicists.
Given more time, by asking/searching for the evidence that convinced that group.
Bertrand Russell
That advice seems to be predicated on poor reasoning. Not only are most eccentric opinions that have been held not accepted, those that gain the benefit of the eccentric opinions on their way to being accepted are not necessarily those that first hold them.
It’s bad advice if the advice is supposed to help a particular person get ahead. If you want a new good opinion to be generated, give that advice to ten thousand people.
I gave up trying to parse that sentence after the third attempt. Punctuation exists for a reason! :-)
No no, it’s not that bad if you try to figure out where the commas go:
So to rewrite with fewer negations:
Good point.
Not necessarily, but it’s often an effective way to gain status by being seen as visionary.
I’d recommend the alternative of gaining enough status and power that you can easily take credit for other people’s opinions when you have reason to believe they will be adopted.
I don’t see how a method of gaining status that begins with an unelucidated “First, gain status” is very helpful.
It’s rather a lot more useful than “Be weird because it doesn’t always backfire”.
There are some social moves that do only work once you have sufficient status to pull them off. Gaining more status through differentiation is one of those.
I expect the other half of the advice is to fear being wrong. Lowering one’s fear of being eccentric could be quite useful if you suspect that the usual opinions are wrong and you can do better.
Marcus Aurelius, Meditations.
Saeid Fard
Indeed, even this quote is way below 140 characters :-)
By the way, you’re off by a year: the February 2013 thread is here.
oops
Nassim Taleb
Isn’t it a commonplace of forecasting (and chaos theory in particular) that short-term projections can be terribly inaccurate, even while long-term forecasting can be extremely accurate?
Chaos theory often points in the opposite direction. For example, consider weather simulations, which become worse than careful ignorance after 5 days: slight variations in initial conditions (and multiplication roundoff errors in computation, and so on) grow out of control, and soon the system is less accurate than just saying “it rains 20% of the time in general; 6 days from now, there is a 20% chance of rain.”
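As a toy illustration of that sensitivity (the standard logistic map in its chaotic regime, not a weather model), two starting points differing by one part in a million become effectively unrelated within a few dozen iterations:

    # Sensitive dependence on initial conditions: the logistic map x -> r*x*(1-x)
    # in its chaotic regime (r = 4). A tiny perturbation grows until the two
    # trajectories are essentially unrelated.

    r = 4.0
    x, y = 0.400000, 0.400001   # initial conditions differing by 1e-6

    for step in range(1, 51):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        if step % 10 == 0:
            print(f"step {step:2d}: x = {x:.6f}, y = {y:.6f}, |x - y| = {abs(x - y):.6f}")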
It is often the case that long-run means are easier to predict than short-run means, in large part because the variability in long-run means is lower. This is especially the case for systems with negative feedback loops, where the system corrects deviations from normality, making normality especially likely.
It’s not clear to me that that does much for oil prices or social security deficits, since I don’t see either as being systems where the negative feedback is obviously stronger than the positive feedback.
Typically, short-term forecasting is stymied by noise rather than fundamental underlying uncertainty. For example, consider the wager between Simon and Ehrlich. They used a basket of commodities because they didn’t want short-term noise to upset the wager, but the main difference in the long-term predictions was the different underlying models.
In both the oil and Social Security examples, there are powerful long-term trends which mean we should have as much or more confidence in long-term projections than short-term ones: in oil, as a nonrenewable resource, the more efficient the market the closer it will conform to Hotelling’s rule, and in SS, it’s almost entirely driven by locked-in demographics or actuarial factors, and the uncertainty is in how and whether payouts will be modified or revenue increased.
(The latter might be what Taleb is getting at, but since he’s an arrogant blowhard who loves to oversimplify and believes he is right about everything, I am not inclined to be charitable and think he’s making a subtle claim about the different sources of variability and their foreseeability over the short and long run.)
Regardless, Taleb is making the argument “if we cannot predict something in the short term, we cannot predict it in the long term,” which is not true of many things and may not even be true of his chosen examples.
--John Cutter, The Prestige
The context in the movie is a bit different, but it’s a nice illustration of how people can let themselves be seduced by mysterious answers to mysterious questions, even when they purport to be “looking for the answer.”
Ludwig Wittgenstein, Philosophical Occasions
Arthur Schopenhauer, Counsels and Maxims
--Albert Einstein
Mandatory for science, generally advisable for anything else.
This advice is worse than useless. But coming from someone who was instrumental in the “Physicists have figured a way to efficiently eradicate humanity; let’s tell the politicians so they may facilitate!” movement, it’s not surprising.
Protip: the maxim “That which can be destroyed by the truth, should be” does not mean we should publish secrets that have a chance of ending global civilization.
I tend to think of science as the public common knowledge of mankind. It is obviously not the only kind of knowledge. Also, I would say that humans tend to err more often in the direction of needlessly keeping important information secret rather than in the direction of sharing it too easily.
Especially since it is easier to fool yourself than others.
--Joyce Cary
.
Mark Wilson, Wandering Significance
Curious to know why this was downvoted. Many philosophers use ‘scientism’ as a term of abuse, and Luke has written about reclaiming the term here. I found this a rather pithy rallying call that antedates Rosenberg’s.
Apologies if this is gratuitous but it was my first post!
The quote doesn’t seem to actually say anything.
I suppose it’s one of those statements that says a good deal in context and rather less outside it. “Scientism” usually refers to a belief in the universal applicability of the tools of science in understanding the world. It is so understood by two camps: one views it as an intellectual failing, the other a virtue. Wilson’s point is that the latter camp should not cede any ground to the former—not even terminological ground.
Edit: by context here I don’t mean the book in particular. More like, reading too much contemporary philosophy.
Unfortunately, the word “scientism” does describe a real set of related failure modes that people trying to be “scientific” frequently fall into, as I discussed in more detail in this thread.
Unscientific does that job already, while the ‘-ism’ suffix denotes, in this case, belief in science. Why let them have a perfectly good word?
I think “scientism,” “unscientific,” and “pseudoscientific” all have different and necessary meanings: respectively, “attempting to use scientific epistemology but misunderstanding it”, “using bad epistemology,” and “using bad epistemology but making a deliberate effort to look like one is being scientific”. The word closest to meaning what you want “scientism” to mean is probably “Bayesianism”.
No. It also covers people who don’t even try to be scientific.
Agree with that. There is a finer-grained distinction worth drawing—with some other word!
-- Doron Zeilberger - (see also)
Possibly useful career advice, but not a rationality quote.
Nassim Taleb
Taleb runs an interesting Facebook, but if you don’t want to get a Facebook account, I expect that a lot of this material will be in his upcoming book about anti-fragility (systems which get stronger when stressed).
I just realized that his domain dependence is equivalent to Rand’s “concrete-bound mentality”—in both cases, it’s getting stuck on a single example rather than seeing general principles.
Time and Robbery by Rebecca Ore
This quote hasn’t gotten any karma yet—it isn’t funny, and it seems so obvious as to almost not be worth saying.
Still, I suspect that a lot of trouble is caused by ignoring that advice.
-Bertrand Russell
--God’s Debris, Scott Adams.
-- Babylon 5, “Soul Hunter”
There is one art, no more, no less: to do all things with artlessness.
-Piet Hein
Why is “artlessness” desirable? AIUI the word means “without skill”.
“Artlessness” has a connotation of doing something naturally/smoothly/without guile.
Wiktionary gives both senses for artless. These words change sense over time, too. For instance, it’s my impression that once upon a time, saying that a person’s work was “artificial” was a compliment, meaning that it showed great skill (artifice). Today it would imply that it was inauthentic, contrived, or a surface imitation.
I suspect that in this context it’s meant to connote “attending to the task, rather than attending to your own technique for performing the task.”
Less is more.
Ockham’s razor (the law of parsimony, economy, or succinctness) is a principle that generally recommends selecting, from among competing hypotheses, the one that makes the fewest new assumptions, on the grounds that it usually proves correct and that the simplest explanation will be the most plausible until evidence is presented to prove it false.
Julius Evola, Occult War, on how to avoid “magic therefore Seventh Day Adventism” kinda errors when interpreting the paranormal.
Marcus Aurelius, Meditations.
The intended listener must be doing an awful lot of stuff they already know is wrong. Ten days is a pretty short period of time to impress people as a god, and it usually requires more training and practice to get there. Heck, I still only impress people as a god around 10% of the time, and it took me 17 years to get here from when I first dedicated myself to rationality.
My understanding is that the intended listener was himself. He wrote the Meditations for his own benefit, to be read by himself to strengthen his own resolve. And indeed he does characterize himself as doing a lot of stuff that he knows is wrong.
Your likelihood of being seen as a god is also dependent on what sort of people are looking at you.
Should “seen a god” there be “seem a god”?
More substantially, isn’t this basically saying “Believe X because then you’ll get status”?
I fixed it, thank you very much.
I interpreted it to mean that if you adhere to logical principles based on a rational view of reality, you will be better because you do so.
So, yes you’re right. I would just change it to this: Do X because then you’ll get status.
While status isn’t the focus, it could (and likely will be) a product of what you’re doing.
Alison Sudol (singer/composer) The Minnow and the Trout
So? We’re also ‘starstuff’.
-- Babylon 5, “Soul Hunter”
Somewhat weakened by the fact that the show leaves it open whether or not Delenn was right.
In show, she more-or-less was.
Hmm? She didn’t have any real evidence other than a perceived degradation of Minbari society.
I was thinking of whatever test they did to determine that Sinclair has a Minbari soul.
I GUESS WE’RE MARKING SPOILERS FOR FIFTEEN YEAR OLD TELEVISION SHOWS
All the test showed is that Sinclair had Valen’s DNA. Except Valen is Sinclair after some Minbari DNA splicing; the reverse of what Delenn did.
Stable time loops for the win.
Edit: never mind, answering my question would probably have involved spoilers.
A specific Minbari soul, picking him out with stunning accuracy.
The fantasy doesn’t sound quaint; it sounds like a depressing story of inevitable decay, without even the possibility of creating new (ensouled) individuals, even in the case where those alive remove their vulnerability to death. The Soul Hunter presents a reality where souls are evidently generated each generation in the same way that they were before.
--George Box
Dupe: http://lesswrong.com/lw/2ev/rationality_quotes_july_2010/285f