Rationality Quotes May 2012
Here’s the new thread for posting quotes, with the usual rules:
Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
Do not quote yourself
Do not quote comments/posts on LW/OB
No more than 5 quotes per person per monthly thread, please.
-celandine13 (Hat-tip to Frank Adamek. In addition, the linked article is so good that I had trouble picking something to put in rationality quotes; in other words, I recommend it.)
Another quote from the same piece, just before that para:
I really, really like this. Thanks for posting it!
To elucidate the “bug model” a bit, consider “bugs” not in a single piece of software, but in a system. The following is drawn from my professional experience as a sysadmin for large-scale web applications, but I’ve tried to make it clear:
Suppose that you have a web server; or better yet, a cluster of servers. It’s providing some application to users — maybe a wiki, a forum, or a game. Most of the time when a query comes in from a user’s browser, the server gives a good response. However, sometimes it gives a bad response — maybe it’s unusually slow, or it times out, or it gives an error or an incomplete page instead of what the user was looking for.
It turns out that if you want to fix these sorts of problems, considering them merely to be “flakiness” and stopping there is not enough. You have to actually find out where the errors are coming from. “Flaky web server” is an aggregate property, not a simple one; specifically, it is the sum of all the different sources of error, slowness, and other badness — the disk contention; the database queries against un-indexed tables; the slowly failing NIC; the excess load from the web spider that’s copying the main page ten times a second looking for updates; the design choice of retrying failed transactions repeatedly, thus causing overload to make itself worse.
There is some fact of the matter about which error sources are causing more failures than others, too. If 1% of failed queries are caused by a failing NIC, but 90% are caused by transactions timing out due to slow database queries to an overloaded MySQL instance, then swapping the NIC out is not going to help much. And two flaky websites may be flaky for completely unrelated reasons.
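To make that concrete, here is a minimal sketch of that kind of tallying; the log format, category names, and percentages are invented for illustration:

```python
# Toy diagnosis: "flakiness" is an aggregate, so tally the actual error
# sources. Log format and category names are invented for illustration.
from collections import Counter

log_lines = [
    "2012-05-01T12:00:01 ERROR db_timeout /wiki/Main",
    "2012-05-01T12:00:03 ERROR db_timeout /forum/thread/42",
    "2012-05-01T12:00:07 ERROR nic_failure /wiki/Main",
    "2012-05-01T12:00:09 ERROR db_timeout /game/lobby",
    "2012-05-01T12:00:11 ERROR retry_storm /forum/thread/42",
]

causes = Counter(line.split()[2] for line in log_lines)
total = sum(causes.values())
for cause, count in causes.most_common():
    print(f"{cause:12s} {count / total:6.1%}")
# Fix the biggest slice first: swapping the NIC is pointless while
# db_timeout accounts for most of the failures.
```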
Talking about how flaky or reliable a web server is lets you compare two web servers side-by-side and decide which one is preferable. But by itself it doesn’t let you fix anything. You can’t just point at the better web server and tell the worse one, “Why can’t you be more like your sister?” — or rather, you can, but it doesn’t work. The differences between the two do matter, but you have to know which differences matter in order to actually change things.
To bring the analogy back to human cognitive behavior: yes, you can probably measure which of two people is “more rational” than the other, or even “more intelligent”. But if someone wants to become more rational, they can’t do it by just trying to imitate an exemplary rational person — they have to actually diagnose what kinds of not-rational they are being, and find ways to correct them. There is no royal road to rationality; you have to actually struggle with (or work around) the specific bugs you have.
I agree with the general thrust of the essay (that broad, fuzzy labels like “bad at” are more useful if reduced to specific bug descriptions), but I’ll note that being aware of the specific bugs that cause people to make the mistakes they’re making does not stop me from thinking of people as stupid. If a person’s bugs are numerous, obtrusive, and difficult to correct, I’m going to end up thinking of them as stupid even if I can describe every bug.
I read the article because of your post; thank you.
(obviously the grandparent deserves credit too).
Author used to post here as __, but I think her account’s been deleted.
ETA: removed username as I realized this comment kind of frustrates the presumable point of the account deletion in the first place.
I already upvoted this but want to emphasize that the article is really good.
My favorite sentence in it: “Are there no stupid people left?”
I’ve been trying to change my impulse to think “this person is an idiot!” into “this person is a noob,” because the term still kinda has that slightly useful predictive meaning that suggests incompetence, but it also contains the idea that they have the potential to get better, rather than being inherently incompetent.
Excellent article, thank you for the link!
Great article. One thing:
I don’t know much about Knewton, but it seems like it could address this—at least in some cases—and possibly better than teachers. Knewton and programs like it can keep track of success rates at the individual problem level, rather than the test or semester level. Such data could be used to identify the ‘bugs’ the author speaks of. All Knewton needs is knowledge of common ‘bugs’ and what problems they make students get wrong.
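A toy sketch of the idea, with all the problem ids and bug names invented: given a map from problems to the bugs that commonly cause wrong answers on them, per-problem results localize a student’s bugs.

```python
# Toy sketch: per-problem results localize a student's "bugs".
# All problem ids and bug names are invented for illustration.
from collections import Counter

bug_map = {  # problem id -> bugs that commonly produce a wrong answer on it
    "p1": {"sign-error"},
    "p2": {"sign-error", "carry-error"},
    "p3": {"carry-error"},
    "p4": {"order-of-operations"},
}
wrong_answers = ["p1", "p2", "p3"]  # one student's misses this session

tally = Counter(bug for p in wrong_answers for bug in bug_map[p])
print(tally.most_common())
# sign-error and carry-error both implicated (2 each):
# drill those specifically, not "math" in general.
```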
This article also recalls to mind http://lesswrong.com/lw/6ww/when_programs_have_to_work_lessons_from_nasa/, specifically the part where problems are considered to be the fault of the system, not of the people involved and are treated by changing the system, not by criticizing the people.
--Chinese Tale
I always use the metaphor of the fast car to distinguish between intelligence and rationality.
That’s a very handy assortment of fallacies. Where did you find it?
I first saw the story in “School in Carmarthen”, which I would absolutely recommend to everyone, except it’s in Russian. I thought there should probably be an English translation of the Chinese tale, so I googled it up by keywords.
The tale is apparently the origin story behind a common Chinese idiom that literally translates as “south house north rut”, and which means acting in a way that defeats one’s purpose.
They had too much time to talk, given that one of them was supposedly moving that fast. I can’t help it; this technicality bothers me.
It was not said how the old man was travelling, and I doubt the horse was at a literal run. A carriage can go as fast as about 30 miles an hour on a modern road, but even in those conditions you should expect to break your carriage. On ancient roads, depending on condition, the speed limit for going “very fast” in a carriage could easily have been as low as about 10 miles per hour. If the old man was riding on an animal, or walking very fast, then he could have kept up for some time.
We at least know that the carriage wasn’t moving at its top speed because at the end of the story the horse sped up.
The carriage stopped while the two conversed. Or am I misunderstanding your objection?
Non-stop and extremely fast, the story says. Well, something must have been lost in translation.
Lost somewhere, I suppose. It seems clear to me that the carriage stopped. Just as it would not have carried on literally non-stop for ten days, 24 hours a day. These details are not stated; they do not need to be. And at the end, the man tells the driver to drive on. If this is an imperfection in the story, it is nothing more than a hyperbolic use of “non-stop”, as trifling as the extraneous “to” in the passage you quoted, which does not seem to have held you up.
Even in conventional English, “Non-stop” doesn’t necessarily mean without stopping at all. The express train from New Haven to Grand Central, for example, is called express because it doesn’t stop between Connecticut and New York City, though there are several stops in Connecticut and one stop in Harlem.
“Non-stop” in context could just mean that they were not stopping in any towns they passed.
--Mencius Moldbug, on belief as attire and conspicuous wrongness.
Source.
This reminds me of the following passage from We Need to Talk About Kevin by Lionel Shriver:
Possible additional factor: The truth is frequently boring—it helps to add some absurdity just to get people’s attention. Once you’ve got people’s attention, proof of loyalty can come into play.
Also relevant.
This reminds me of Baudrillard, I might come back in a few days with a Baudrillard rationality quote.
More quotes by Mencius Moldbug:
They are all from the article A Reservationist Epistemology
Surely the actual Bayesian rational mind’s conclusion is that the attacker will (probably) always show a blue ball, nothing to do with the urn at all.
The Solomonoff prior gives nonzero probability to the attacker deceiving us. But humans are not very good at operating precisely with such probabilities.
I just facepalmed the hardest I’ve ever done while reading Unqualified Reservations. That is, not very hard—Mencius is nothing if not a charming and polite author—but still. Maybe he really ought to read at least one Sequence!
Could we start that reading with the classic Bayes’ Theorem example? Suppose 1% of women have breast cancer, 80% of mammograms on a cancerous woman will detect it, 9.6% on an uncancerous woman will be false positives. Suppose woman A gets a mammogram which indicates cancer. What are the odds she has cancer?
p(A|X) = p(X|A)p(A) / (p(X|A)p(A) + p(X|~A)p(~A)) = (0.8 × 0.01) / (0.8 × 0.01 + 0.096 × 0.99) ≈ 7.8%. Hooray?
Now suppose women B, C, D, E, F… Z, AA, AB, AC, AD, etc., the entire patient list getting screened today, all test positive for cancer. Is the probability that woman A has cancer still 7.8%? Bayes’ rule, with the priors above, still says “yes”! You need more complicated prior probabilities (e.g. what are the odds that the test equipment is malfunctioning?) before your evidence can tell you what’s actually likely to be happening. But those more complicated, more accurate priors would have (very slightly) changed our original p(A|X) as well!
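A quick check of that arithmetic and of the independence point, using only the numbers from the setup above:

```python
# The arithmetic, using the numbers from the setup above.
p_cancer = 0.01            # prior: 1% of women have breast cancer
p_pos_if_cancer = 0.80     # sensitivity
p_pos_if_healthy = 0.096   # false positive rate

posterior = (p_pos_if_cancer * p_cancer) / (
    p_pos_if_cancer * p_cancer + p_pos_if_healthy * (1 - p_cancer)
)
print(f"{posterior:.4f}")  # 0.0776, i.e. about 7.8%

# The model treats patients as independent, so if the whole patient list
# tests positive, each woman's posterior is *still* 7.8% -- the model has
# no "the equipment is broken" hypothesis to soak up that coincidence.
```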
It’s not that Bayesian updating is wrong. It’s just that Bayes’ theorem never allows you to have a non-zero posterior probability coming from a zero prior, and to make any practical problem tractable everybody ends up implicitly assuming huge swaths of zero prior probability.
It’s not assuming zero probability. It’s assuming independence. Under the original model, it’s possible for all the women to get positives, but only 1% to actually have breast cancer. It’s just that a better prior would give a much higher probability.
Is there any practical difference between “assuming independent results” and “assuming zero probability for all models which do not generate independent results”? If not then I think we’ve just been exposed to people using different terminology.
No.
I think it’s more than terminology. And if Mencius can be dismissed as someone who does not really get Bayesian inference, one can surely not say the same of Cosma Shalizi, who has made the same argument somewhere on his blog. (It was a few years ago and I can’t easily find a link. It might have been in a technical report or a published paper instead.)

Suppose a Bayesian is trying to estimate the mean of a normal distribution from incoming data. He has a prior distribution of the mean, and each new observation updates that prior. But what if the data are not drawn from a normal distribution, but from a mixture of two such distributions with well separated peaks? The Bayesian (he says) can never discover that. Instead, his estimate of the position of the single peak that he is committed to will wander up and down between the two real peaks, like the Flying Dutchman cursed never to find a port, while the posterior probability of seeing the data that he has seen plummets (on the log-odds scale) towards minus infinity. But he cannot avoid this: no evidence can let him update towards anything his prior gives zero probability to.
What (he says) can save the Bayesian from this fate? Model-checking. Look at the data and see if they are actually consistent with any model in the class you are trying to fit. If not, think of a better model and fit that.
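Shalizi’s scenario is easy to simulate. A minimal sketch, with all the particular numbers invented: a conjugate normal-mean update on bimodal data, followed by the kind of model check Gelman and Shalizi recommend.

```python
# Shalizi's scenario in miniature (all numbers invented): fit a single
# Gaussian's mean to bimodal data, then check the fitted model.
import numpy as np

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-5, 1, 500), rng.normal(5, 1, 500)])

# Conjugate update for the mean of N(mu, 1), prior mu ~ N(0, 10^2).
prior_mean, prior_var, obs_var = 0.0, 100.0, 1.0
post_var = 1.0 / (1.0 / prior_var + len(data) / obs_var)
post_mean = post_var * (prior_mean / prior_var + data.sum() / obs_var)
print(post_mean)  # ~0: squarely between the peaks, where almost no data lies

# The model check: simulate replicated datasets from the fitted model and
# compare a test statistic with what we actually observed.
reps = rng.normal(post_mean, np.sqrt(obs_var + post_var), (1000, len(data)))
print(np.mean(reps.std(axis=1) >= data.std()))  # ~0.0: model flatly rejected
```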
Andrew Gelman says the same; there’s a chapter of his book devoted to model checking. And here’s a paper by both of them on Bayesian inference and philosophy of science, in which they explicitly describe model-checking as “non-Bayesian checking of Bayesian models”. My impression (not being a statistician) is that their view is currently the standard one.
I believe the hard-line Bayesian response to that would be that model checking should itself be a Bayesian process. (I’m distancing myself from this claim, because as a non-statistician, I don’t need to have any position on this. I just want to see the position stated here.) The single-peaked prior in Shalizi’s story was merely a conditional one: supposing the true distribution to be in that family, the Bayesian estimate does indeed behave in that way. But all we have to do to save the Bayesian from a fate worse than frequentism is to widen the picture. That prior was merely a subset, worked with for computational convenience, but in the true prior, that prior only accounted for some fraction p<1 of the probability mass, the remaining 1-p being assigned to “something else”. Then when the data fail to conform to any single Gaussian, the “something else” alternative will eventually overshadow the Gaussian model, and will need to be expanded into more detail.
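A toy version of that widened picture, with the concrete models and numbers invented for illustration: the convenient family gets prior mass p, one stand-in “something else” gets 1−p, and the data overwhelm the prior odds.

```python
# Toy "widened prior" (all models and numbers invented): prior mass p on the
# convenient single-Gaussian model, 1-p on one stand-in "something else".
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-5, 1, 200), rng.normal(5, 1, 200)])

log_lik_convenient = norm.logpdf(data, 0, 1).sum()  # single N(0, 1)
log_lik_else = np.log(  # stand-in alternative: equal mixture of two Gaussians
    0.5 * norm.pdf(data, -5, 1) + 0.5 * norm.pdf(data, 5, 1)
).sum()

p = 0.99  # almost all prior mass on the convenient model
log_posterior_odds = np.log((1 - p) / p) + log_lik_else - log_lik_convenient
print(log_posterior_odds)  # hugely positive: the escape hatch wins anyway
```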
“But,” the soft Bayesians might say, “how do you expand that ‘something else’ into new models by Bayesian means? You would need a universal prior, a prior whose support includes every possible hypothesis. Where do you get one of those? Solomonoff? Ha! And if what you actually do when your model doesn’t fit looks the same as what we do, why pretend it’s Bayesian inference?”
I suppose this would be Eliezer’s answer to that last question.
I am not persuaded that the harder Bayesians have any more concrete answer. Solomonoff induction is uncomputable and seems to unnaturally favour short hypotheses involving Busy-Beaver-sized numbers. And any computable approximation to it looks to me like brute-forcing an NP-hard problem.
In response to the two comments above:
I think a hard line needs to be drawn between statistics and epistemology. Statistics is merely a method of approximating epistemology—though a very useful one. The best statistical method in a given situation is the one that best approximates correct epistemology. (I’m not saying this is the only use for statistics, but I can’t seem to make sense of it otherwise)
Now suppose Bayesian epistemology is correct—i.e. let’s say Cox’s theorem + Solomonoff prior. The correct answer to any induction problem is to do the true Bayesian update implied by this epistemology, but that’s not computable. Statistics gives us some common ways to get around this problem. Here are a couple:
1) Bayesian statistics approach: restrict the class of possible models and put a reasonable prior over that class, then do the Bayesian update. This has exactly the same problem that Mencius and Cosma pointed out.
2) Frequentist statistics approach: restrict the class of possible models and come up with a consistent estimate of which model in that class is correct. This has all the problems that Bayesians constantly criticize frequentists for, but it typically allows for a much wider class of possible models in some sense (crucially, you often don’t have to assume distributional forms).
3) Something hybrid: e.g., Bayesian statistics with model checking. Empirical Bayes (where the prior is estimated from the data; a toy sketch follows below). Etc.
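As a toy example of the empirical Bayes option in 3), with all the numbers invented: many normal means, where the prior’s hyperparameters are estimated from the data before the Bayesian update.

```python
# Toy empirical Bayes (all numbers invented): many normal means,
# theta_i ~ N(mu, tau^2), with mu and tau^2 *estimated from the data*.
import numpy as np

rng = np.random.default_rng(1)
theta = rng.normal(3.0, 2.0, 1000)   # unknown true effects
x = rng.normal(theta, 1.0)           # one noisy observation of each

mu_hat = x.mean()
tau2_hat = max(x.var() - 1.0, 0.0)   # Var(x) = tau^2 + noise variance 1
shrink = tau2_hat / (tau2_hat + 1.0)
theta_post = mu_hat + shrink * (x - mu_hat)  # posterior means, fitted prior

print(np.mean((x - theta) ** 2))           # ~1.0: raw estimates
print(np.mean((theta_post - theta) ** 2))  # ~0.8: shrinkage wins on average
```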
Now superficially, 1) looks the most like the true Bayesian update—you don’t look at the data twice, and you’re actually performing a Bayesian update. But you don’t get points for looking like the true Bayesian update, you get points for giving the same answer as the true Bayesian update. If you do 1), there’s always some chance that the class of models you’ve chosen is too restrictive for some reason. Theoretically you could continue to do 1) by just expanding the class of possible models and putting a prior over that class, but at some point that becomes computationally infeasible. Model checking is a computationally feasible way of approximating this process. And, a priori, I see no reason to think that some frequentist method won’t give the best computationally feasible approximation in some situation.
So, basically, a “hardline Bayesian” should do model checking and sometimes even frequentist statistics. (Similarly, a “hardline frequentist” in the epistemological sense should sometimes do Bayesian statistics. And, in fact, they do this all the time in econometrics.)
See my similar comments here and here.
I find this a curious thing to say. Isn’t this an argument against every possible remotely optimal computable form of induction or decision-making? Of course a good computable approximation may wind up spending lots of resources solving a problem if that problem is important enough; this is not a black mark against it. Problems in the real world can be hard, so dealing with them may not be easy!
“Omega flies up to you and hands you a box containing the Secrets of Immortality; the box is opened by the solution to an NP problem inscribed on it.” Is the optimal solution really to not even try the problem (because then you’d be “brute-forcing an NP-hard problem”!), even if it turns out to be one of the majority of easily-solved instances? “You start a business and discover one of your problems is NP-hard. You immediately declare bankruptcy, because your optimal induction optimally infers that the problem cannot be solved and this most optimally limits your losses.”
And why NP-hard, exactly? You know there are a ton of harder complexity classes in the complexity zoo, right?
The right answer is simply to point out that the worst case of the optimal algorithm is going to be the worst case of all possible problems presented, and this is exactly what we would expect since there is no magic fairy dust which will collapse all problems to constant-time solutions.
There might well be a theorem formalising that statement. There might also be one formalising the statement that every remotely optimal form of induction or decision-making is uncomputable. If that’s the way it is, well, that’s the way it is.
This is an argument of the form “Suppose X were true—then X would be true! So couldn’t X be true?”
You try to find a method that solves enough examples of the NP-hard problem well enough to sell the solutions, such that your more bounded ambition puts you back in the realm of P. This is done all the time—freight scheduling software, for example. Or airline ticket price searching. Part of designing optimising compilers is not attempting analyses that take insanely long.
Harder classes are subsets of NP-hard, and everything in NP-hard is hard enough to make the point. Of course, there is the whole uncomputability zoo above all that, but computing the uncomputable is even more of a wild goose chase. “Omega flies up to you and hands you a box containing the Secrets of Immortality; for every digit of Chaitin’s Omega you correctly type in, you get an extra year, and it stops working after the first wrong answer.”
No, this is pointing out that if you provide an optimal outcome barricaded by a particular obstacle, then that optimal outcome will trivially be at least as hard as that obstacle.
This is exactly the point made for computable approximations to AIXI. Thank you for agreeing.
Are you sure you want to make that claim? That all harder classes are subsets of NP-hard?
Fantastic! I claim my extra 43 years of life.
No, carelessness on my part. Doesn’t affect my original point, that schemes for approximating Solomonoff or AIXI look like at least exponential brute force search.
Since AIXI is, by construction, the best possible intelligent agent, all work on AGI can, in a rather useless sense, be described as an approximation to AIXI. To the extent that such an attempt works (i.e. gets substantially further than past attempts at AGI), it will be because of new ideas not discovered by brute force search, not because it approximates AIXI.
43 years is a poor sort of immortality.
Well, yeah. Again—why would you expect anything else? Given that there exist problems which require that or worse for solution? How can a universal problem solver do any better?
Yes.
No. Given how strange and different AIXI is, it can easily stimulate new ideas.
It’s more than I had before.
The spin-off argument. Here’s a huge compendium of spinoffs of previous approaches to AGI. All very useful, but not AGI. I’m not expecting better from AIXI.
Hm, so let’s see; you started off mocking the impossibility and infeasibility of AIXI and any computable version:
Then you admitted that actually every working solution can be seen as a form of SI/AIXI:
And now you’re down to arguing that it’ll be “very useful, but not AGI”.
Well, I guess I can settle for that.
I stand by the first quote. Every working solution can in a useless sense be seen as a form of SI/AIXI. The sense that a hot-air balloon can be seen as an approach to landing on the Moon.
At the very most. Whether AIXI-like algorithms get into the next edition of Russell and Norvig, having proved of practical value, well, history will decide that, and I’m not interested in predicting it. I will predict that it won’t prove to be a viable approach to AGI.
How can a hot air balloon even in theory be seen as that? Hot air has a specific limit, does it not—where its density equals the outside density?
Isn’t there a very wide middle ground between (1) assigning 100% of your mental probability to a single model, like a normal curve, and (2) assigning your mental probability proportionately across every conceivable model à la Solomonoff?
I mean the whole approach here sounds more philosophical than practical. If you have any kind of constraint on your computing power, and you are trying to identify a model that most fully and simply explains a set of observed data, then it seems like the obvious way to use your computing power is to put about a quarter of your computing cycles on testing your preferred model, another quarter on testing mild variations on that model, another quarter on all different common distribution curves out of the back of your freshman statistics textbook, and the final quarter on brute-force fitting the data as best you can given that your priors about what kind of model to use for this data seem to be inaccurate.
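A toy version of that allocation, with the candidate families picked arbitrarily from the textbook shelf: fit each one, compare log-likelihoods, and notice that none of them fit, which is itself the signal to go looking for a better model class.

```python
# Toy version of that allocation: fit a shelf of candidate families and
# compare log-likelihoods. Families and data are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(-5, 1, 300), rng.normal(5, 1, 300)])

candidates = {
    "preferred: normal": stats.norm,
    "textbook: cauchy": stats.cauchy,
    "textbook: laplace": stats.laplace,
    "textbook: logistic": stats.logistic,
}
for name, dist in candidates.items():
    params = dist.fit(data)                  # brute maximum-likelihood fit
    loglik = dist.logpdf(data, *params).sum()
    print(f"{name:20s} log-likelihood = {loglik:9.1f}")
# All of them fit badly -- which is itself the cue to question the
# single-peak assumption rather than cycle between the peaks forever.
```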
I can’t imagine any human being who is smart enough to run a statistical modeling exercise yet foolish enough to cycle between two peaks forever without ever questioning the assumption of a single peak, nor any human being foolish enough to test every imaginable hypothesis, even including hypotheses that are infinitely more complicated than the data they seek to explain. Why would we program computers (or design algorithms) to be stupider than we are? If you actually want to solve a problem, you try to get the computer to at least model your best cognitive features, if not improve on them. Am I missing something here?
Yes, the question is what that middle ground looks like—how you actually come up with new models. Gelman and Shalizi say it’s a non-Bayesian process depending on human judgement. The behaviour that you rightly say is absurd, of the Bayesian Flying Dutchman, is indeed Shalizi’s reductio ad absurdum of universal Bayesianism. I’m not sure what gwern has just been arguing, but it looks like doing whatever gets results through the week while going to the church of Solomonoff on Sundays.
An algorithmic method of finding new hypotheses that works better than people is equivalent to AGI, so this is not an issue I expect to see solved any time soon.
Eh. What seems AGI-ish to me is making models interact fruitfully across domains; algorithmic models to find new hypotheses for a particular set of data are not that tough and already exist (and are ‘better than people’ in the sense that they require far less computational effort and are far more precise at distinguishing between models).
Yes, I had in mind a universal algorithmic method, rather than a niche application.
The hypothesis-discovery methods are universal; you just need to feed them data. My view is that the hard part is picking what data to feed them, and what to do with the models they discover.
Edit: I should specify, the models discovered grow in complexity based on the data provided, and so it’s very difficult to go meta (i.e. run hypothesis discovery on the hypotheses you’ve discovered), because the amount of data you need grows very rapidly.
Hmmm. Are we going to see a Nobel awarded to an AI any time soon?
I don’t think any robot scientists would be eligible for Nobel prizes; Nobel’s will specifies persons. We’ve had robot scientists for almost a decade now, but they tend to excel in routine and easily automatized areas. I don’t think they will make Nobel-level contributions anytime soon, and by the time they do, the intelligence explosion will be underway.
What if, as a computational approximation of the universal prior, we use genetic algorithms to generate a collection of agents, each using different heuristics to generate hypotheses? I mean, there are probably better approximations than that, but we have strong evidence that this one works and is computable.
Whatever approach to AGI anyone has, let them go ahead and try it, and see if it works. Ok, that would be rash advice if I thought it would work (because of UFAI), but if it has any chance of working, the only way to find out is to try it.
I’m not saying I’m willing to code that up; I’m just saying that a genetic algorithm (such as Evolution) creating agents which use heuristics to generate hypotheses (such as humans) can work at least as well as anything we’ve got so far.
If you have a few billion years to wait.
No reason that can’t be sped up.
Eh. I like the approach of “begin with a simple system hypothesis, and when your residuals aren’t distributed the way you want them to be, construct a more complicated hypothesis based on where the simple hypothesis failed.” It’s tractable (this is the elevator-talk version of one of the techniques my lab uses for modeling manufacturing systems), and seems like a decent approximation of Solomonoff induction on the space of system models.
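A stripped-down sketch of that loop, with the model family, the residual test, and the thresholds all invented for illustration: escalate complexity only when the residuals fail the check.

```python
# Stripped-down version of that loop (model family, residual test, and
# thresholds all invented): escalate complexity only when residuals fail.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 200)
y = 2 * x + 0.5 * x**2 + rng.normal(0, 1, 200)  # true system is quadratic

for degree in range(1, 6):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    # Structured residuals correlate with their neighbours; noise doesn't.
    r, _ = stats.pearsonr(resid[:-1], resid[1:])
    print(f"degree {degree}: residual autocorrelation {r:+.3f}")
    if abs(r) < 0.1:   # residuals look like noise -- stop elaborating
        break
```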
It’s basically different terminology. His point is valid.
A model isn’t something you assign probability to. It’s something you use to come up with a set of prior probabilities. The model he used assumed independence. It didn’t actually assign zero probability to any result. It doesn’t assign a probability, zero or otherwise, to the machine being broken, because that’s not something that’s considered. It also doesn’t assign a probability to whether or not it’s raining.
From the little Moldbug I’ve been able to slog through, my main impression of him is “reader-hostile”. If he were polite maybe he would get to the effing point already.
I think his point is that you are still entirely unable to even enumerate, let alone process, all the relevant hypotheses, nor does the formula inform you of those, nor does it inform you how to deal with cyclic updates (or even that those are a complicated case), etc.
It’s particularly bad when it comes to what rationalists describe as “expected utility calculations”. The ideal expected utility is a sum of the differential effect of the actions being compared, over all hypotheses, multiplied by their probabilities. A single component of the sum provides little or no information about the value of the sum, especially when picked by someone with a financial interest as strong as “if I don’t convince those people I can’t pay my rent”. Then the actions themselves have an impact on future decision making, which makes the expected-value sum grow and branch out like some crazy googol-headed fractal hydra. Mostly, when someone’s talking much about Bayes, they have some simple and invalid expected-value calculation that they want you to perform and act upon, so that in the end you’ll be worse off and they’ll be better off.
And yet he wants a pragmatically motivated society.
A man can dream, can’t he? Note he isn’t advocating nonsense as an organizing tool; much of his wackier thought is precisely about trying to make an organizing tool work as well as nonsense does. Unfortunately I don’t think he has succeeded, since in my opinion neocameralism is unlikely to be implemented and likely to blow up if someone did implement it.
I agree, except that some of my own wacky thought (well, it’s hardly original, of course) basically says that nonsense isn’t a “bad” at all—not for anyone whom we might reasonably call human. For example, as has been pointed out here, people have in-built hypocritical mechanisms to cope with various kinds of “faith”, but if you truly consider that you’re doing something “rational” and commonsensically correct, you’re left driving at an enormous speed without brakes, and the likely damage might be great enough that no-one should ever aspire to “rational” thinking.
Also:
Orwell’s diary, 20th March, 1941
Even though his prescription may be lacking (here is some criticism of neocameralism: http://unruled.blogspot.com/2008/06/about-fnargocracy.html ), his description and diagnosis of everything wrong with the world is largely correct. Any possible political solution must begin from Moldbug’s diagnosis of all the bad things that come with having Universalism as the most dominant ideology/religion the world has ever experienced.
One example of a bad consequence of Universalism is the delay of the Singularity. If you, for example, want to find out why Jews are more intelligent on average than Blacks, the system will NOT support your work and will even ostracize you for being racist, even though that knowledge might one day prove invaluable to understanding intelligence and building an intelligent machine (and also helping the people who are less fortunate at the genetic lottery). The followers of a religion that holds the Equality of Man as primary tenet will be suppressing any scientific inquiry into what makes us different from one another. Universalism is the reason why common-sense proposals like those of Greg Cochran ( http://westhunt.wordpress.com/2012/03/09/get-smart/ ) will never be official policy. While we don’t have the knowledge to create machines of higher intelligence than us, we do know how to create a smarter next generation of human beings. Scientific progress, economic growth and civilization in general are proportional to the number of intelligent people and inversely proportional to the number of not-so-smart people. We need more smart people (at least until we can build smarter machines), so that we all may benefit from the products of their minds.
From the Greg Cochran link:
It’s worth pointing out that at least part of the opposition to government-run eugenics programs is rational distrust that the government will not corrupt the process. If a country started a program of tax breaks for high-IQ people having children, and perhaps higher taxes for low-IQ people having children, a corrupt government would twist its policies and official IQ-testing agency to actually reward, for example, reproduction among people who vote for [whatever the majority party currently is]. It’s a similar rationale to the one against literacy tests for voting: sure, maybe illiterate people can’t be informed voters, but trusting the government to decide who’s too illiterate to vote leads to perverse incentives.
Absolutely. It would start with: “Everyone (accepted as an expert by our party) agrees that the classical IQ tests developed by psychometric methods are too simple and they don’t cover the whole spectrum of human intelligence. Luckily, here is a new improved test developed by our best experts that includes the less mathematical aspects of intelligence, such as having a correct attitude towards [insert political topic]. Recognizing the superiority of this test to the classical tests already gives you five points!”
Also, governments are notoriously bad at making broad and costly social policies that will only give a return on investment “in a few centuries or less”. We’re not talking just beyond the next elections, the party, the politicians, even the whole state may not even exist by then.
“Will”.
That seems a little bit simplistic. How many problems have been caused by smart people attempting to implement plans which seem theoretically sound, but fail catastrophically in practice? The not-so-smart people are not inclined to come up with such plans in the first place. In my view, the people inclined to cause the greatest problems are the smart ones who are certain that they are right, particularly when they have the ability to convince other smart people that they are right, even when the empirical evidence does not seem to support their claims.
While people may not agree with me on this, I find the theory of “rational addiction” within contemporary economics to carry many of the hallmarks of this way of thinking. It is mathematically justified using impressively complex models and selective post-hoc definitions of terms and makes a number of empirically unfalsifiable claims. You would have to be fairly intelligent to be persuaded by the mathematical models in the first place, but that doesn’t make it right.
Basically, my point is: it is better to have to deal with not-so-smart irrational people than with intelligent and persuasive people who are not very rational. The problems caused by the former are lesser in scale.
The theory of “rational addiction” seems like an example of how, for any (consistent) behavior, you can find a utility function that the behavior maximizes. But that does not mean it is really a human utility function.
For an intelligent and persuasive person it may be a rational (as in: maximizing their utility, such as status or money) choice to produce fashionable nonsense.
True. I guess it’s just that the consequences of such actions can often lead to a large amount of negative utility according to my own utility function, which I like to think of as more universalist than egoist. But people who are selfish, rational and intelligent can, of course, cause severe problems (according to the utility functions of others at least). This, I gather, is fairly well understood. That’s probably why those characteristics describe the greater proportion of Hollywood villains.
Hollywood villains are gifted people who pathologically neglect their self-deception. With enough self-deception, everyone can be a hero of their own story. I would guess most authors of fashionable nonsense kind of believe what they say. This is why opposing them would be too complicated for a Hollywood script.
Yes! I’m glad that someone is with me on this.
And yet, it’s the “Universalist” system that allows Jews to not get exterminated. I think the cognitive and epistemological flaws of “Universalism” kinda make some people ignore the fact that it’s the system that also allows the physical existence of heretics more than any other system in existence ever yet has.
Was (non-Universalist) Nazi Germany more open to accepting Jew-produced science than the “Universalist” West was? Or is the current non-Universalist Arab world more open to such? Were the previous feudal systems better at accepting atheists or Jewish people? Which non-universalist (and non-Jewish) system was actually better than “Universalism” at recognizing Jewish contributions or intelligence, that you would choose to criticize Universalism for being otherwise? Or better at not killing heretics?
Let’s keep it simple—which non-Universalist nation has ever been willing to allow as much relative influence to Jewish people as Universalist systems have?
As for Moldbug’s diagnosis, I’m unimpressed with his predictive abilities: he predicted Syria would be safe from revolt, right, because it was cozying up to Iran rather than to America? He has an interesting model of the world but, much like Marxism, I’m not sure Moldbuggery has much predictive capacity.
I agree. In my mind this is its great redeeming feature and the main reason I think I still endorse universalism despite entertaining much of the criticism of it. At the end of the day I still want to live in a Western Social Democracy, just maybe one that has a libertarian (and I know this may sound odd coming from me) real multicultural bent with regards to some issues.
The same is true of the Roman and Byzantine empires. The Caliphate too. Also true of Communist regimes. Many absolute monarchies, now that I think about it. Also I’m pretty sure the traditional Indian caste system could keep Jews safe as well.
If Amy Chua is right democracy (a holy word of universalism) may in the long run put market dominant minorities like the Jews more at risk than some alternatives. Introducing democracy and other universalist memes in the Middle East has likely doomed the Christian minorities there for example.
I’m not quite sure why particularly the Jewish people matter so very much to you in this example. I’m sure you aren’t searching for the trivial answer (which would be “in any ancient and medieval Jewish state or nation”).
If you are using Jews here as an emblem of invoking the horrors of Nazism, can’t we at least throw a bone to Gypsy and Polish victims? And since we did that can we now judge Communism by the same standard? Moldbug would say that Communism is just a country getting sick with a particularly bad case of universalism.
The thing is, Universalism as it exists now doesn’t seem to be stable. The reason one sees all this clever (and I mean clever in the bad, overly complicating, overly contrarian sense of the word) arguing against “universalism” online in the late 2000s is that the comfortable, heretic-tolerating universalism of the second half of the 20th century seems to be slowly changing into something else. The heretics have nowhere else to go but online. The economic benefits and comforts for most of its citizens are being dismantled; the space of acceptable opinion seems to be shrinking. As technology that enables the surveillance of citizens and the enforcement of social norms by peers advances, there doesn’t seem to be any force really counteracting it.

If you transgress, if you are a heretic in the 21st century, you will remain one for your entire life, as your name is one Google search away from your sin. As mobs organized via social media or apps become more and more a reality, a political reality, how long will such people remain physically safe? How do you explain to the people beating you that you recanted your heresy years ago? Recall how pogroms were usually the affair of angry low-class peasants. You don’t need the Stasi to eliminate people. The mob can work as well. You don’t need a concentration camp when you have the machete. And while modern tech makes the state more powerful, since surveillance is easier, it also makes the mob more powerful. Remaining under the protection of the state, not just legal but de facto, becomes more and more vital. The room for dissent thus shrinks even if stated ideals and norms remain as they were before.
And I don’t think they will remain safe. While most people carrying universalist memes are wildly optimistic about its “information wants to be free”, liberty-enhancing aspect, the fact remains that this new technology seems to have also massively increased the viability and reach of Anarcho-Tyranny.
The personal psychological costs of living up to universalist ideals and internalizing them seem to be rising as well. To illustrate what I mean by this, consider the practical sexual ethics of, say, Elizabethan England and Victorian England. On the surface and in their stated norms they don’t differ much, yet the latter arguably uses up far more resources and places a greater cognitive burden of socialization on its members to enforce them.
Now consider the various universalist standards of personal behaviour that are normative in 2012 and in 1972. They aren’t that different in stated ideals, but the practical costs have arguably risen.
nykos was the one who used the example of Jewish superior intelligence not being acknowledged as such by Universalism. My point was that there have been hardly any non-Universalist systems that could even tolerate equal Jewish participation, let alone acknowledge Ashkenazi superiority.
Thank you, I missed that context. Sorry.
I see no proof of that. What economic benefits and comforts? Sure, real wages in Western countries have stopped growing around the 1970s, but e.g. where welfare programs are being cut following the current crisis, it’s certainly not the liberals but economically conservative governments championing the cuts.
I don’t understand. Do you mean prestigious norms like “never avoid poor neighbourhoods for your personal safety, because it’s supposedly un-egalitarian”, or what? What other norms like that exist that are harmful in daily life?
What’s happening is, to paraphrase Thatcher, that governments are running out of other people’s money. Yes, conservative parties are more willing to acknowledge this fact, but liberal parties don’t have any viable alternatives, and it was their economic policies that led to this state of affairs.
Hmm? And in places where fiscally conservative parties were at the helm before the crisis? What about them?
The places that are being hardest hit have been ruled by left wing parties for most of the time since at least the 1970s. Also in these places the right wing parties aren’t all that right wing.
Are the Scandinavian nations among the ones hit hardest? Or, say, Poland?
You’ve got to make it more general, that’s where it gets interesting! Speaking frankly, from the selfish viewpoint of a typical Western person, the Universalist system has been better than any other system at everything for more than a century, especially at the quality and complexity of life for the average citizen. Of course, Moldbug’s adherents would argue that there’s no dependency between these two unique, never-before-seen facts of civilization—universalist ideology and an explosive growth in human development for the bottom 90% of society. They’d say that both are symptoms of more rapid and thoroughly supported technological progress than elsewhere.
Let’s concede that (although there are reasons to challenge it—see e.g. Weber’s The Protestant Ethic and the Spirit of Capitalism, an early argument that religion morphing into a secular quasi-theocracy is what gave the West its edge). Okay, so if both things are the results of our civilization’s unique historical path… then, from a utilitarian POV, the cost of universalism is still easily worth paying! We know of no society that advanced to an industrial and then post-industrial state without universalism, so it would be in practice impossible to alter any feature of technical and social change to exclude the dominance of universalist ideology but keep the pace of “good” progress. Then, even assuming that universalist ideology is single-handedly responsible for the entirety of the 20th century’s wars and mass murder (and other evils), it is still preferable to the accumulated daily misery of the traditional pre-industrial civilization—especially so for everyone who voted “Torture” on “Torture vs Specks”! (I didn’t, but here I feel differently, because it’s “Horrible torture and murder” vs “Lots and lots of average torture”.)
Moldbug isn’t arguing we should get rid of some technology and its comforts in order to also get rid of universalism, and he certainly does recognize both as major aspects of modernity; no, he is saying that technological progress now enables us to get rid of the parasitic aspect of modernity, “universalism”. One can make a case that, since it inflames some biases, it is slowing down technological progress and the benefits it brings. Peter Thiel is arguably concerned precisely by this influence when he talks of a technological slowdown. Universalism not only carries opportunity costs, it has historically often broken out in Luddite strains. Consider for example something like the FDA. Recall what criticisms of that institution are often heard on LW; yet aren’t these same criticisms, when consistently applied, basically hostile to the Cathedral?
Whether MM is right or wrong, what you present seems like a bit of a false dilemma. You certainly are right that we haven’t seen societies advance to a post-industrial or industrial state without at least some influence of universalism, but it is hard to deny that we do observe varying degrees of such penetration. Moldbug’s idea is that even if we can’t use technology to get rid of the memeplex in question by social manoeuvring, we can still perhaps find a better trade-off by not taking “universalism” so seriously. The vast majority of people, the 90% you invoke, may be significantly better off in a world where every city is Singapore than in a world where every city is London.
It is no mystery which of these two is more in line with universalist ideals.
And could you please name those ideals once again? Because it’s very confusing.
In the case of Singapore vs. London (implicitly including the governing structure of Britain since London isn’t a city state)? A few I can think of straight away:
Democratic decision making. Therapeutic rather than punitive law enforcement. Lenient punishment of crime. Absence of censorship.
Naturally, all of these aren’t fully realized in London either. Britain doesn’t have real free speech, yet it has much more of it than Singapore. Britain has (in my opinion) silly and draconian anti-drug laws, but it doesn’t execute people for smuggling drugs. London doesn’t have corporal or capital punishment. The parties in Britain are mostly the same class of people, yet at least Cerberus (Lib/Lab/Con) has three heads; you get to vote for the one that promises to gnaw at you the least. Singapore is democratic in form only, and it is a very transparent cover: only one party has a chance of victory, and it has been that way and will remain that way for some time.
Yet despite all these infractions against stated Western ideals, life isn’t measurably massively worse in Singapore than in London. And Singapore seems to work better as a multi-ethnic society than London. The world is globalizing; de facto multiculturalism is the destined future of every city from Vladivostok to Santiago, so the Davos men tell us. No place like Norway or Japan in our future, but elections where we will see ethnic blocs and identity politics. I don’t know about you, but I prefer Lee Kuan Yew to that mess of tribal politics.

Which city would deal better with a riot? Actually, which city is more likely to have a riot? Recall what Lee said in his autobiography and interviews about what he learned from the 1960s riots. Did it work? It sure looks like it did. Also recall from what Singapore started, and where surrounding Malaysia, from which it diverged, is today. What is the better model to pull the global south out of poverty? What is the better model to have the world’s peoples live side by side? Which place will likely be safer, more liveable and more prosperous in 20 years’ time?
It seems in my eyes that Singapore is clearly winning in such a comparison. Yet clearly it does so precisely by ignoring several universalist ideals. Strangely they didn’t seem to have needed to give up iPods and other marvels of modern technology to do it either.
Taboo “worse”!
If by life not being “worse” you mean the annual income or the quality of healthcare or the amount of street crime, maybe it’s so. If one values e.g. being able to contribute to a news website without fear of fines or imprisonment (see e.g. Gibson’s famous essay where he mentions that releasing information about Singapore’s GDP could be punished with death), or not fearing for the life of a friend whom you smoke marijuana with, or being able to think that the government is at least a little bit afraid of you (this not necessarily being real, just a pleasant delusion to entertain, like so many others we can’t live without)… in short, if one values the less concrete and material things that speak to our more complex instincts, it’s not nearly so one-sided.
That’s why I dislike utilitarianism; it says without qualification that a life always weighs the same, whatever psychological climate it is lived in (the differences are obvious as soon as you step off a plane, I think—see Gibson’s essay again), and a death always weighs the same, whether you’re killed randomly by criminals (as in the West) or unjustly and with malice by the government (as in Singapore), et cetera, et cetera… It’s, in the end, not very compatible with the things that liberals OR classical conservatives love and hate. Mere safety and prosperity are not the only things a society can strive for.
Yes. But these are incredibly important things to hundreds of millions of people alive today drowning in violence, disease and famine. What do spoiled first world preferences count against such multitudes?
And you know what, I think 70% of people alive today in the West wouldn’t in practice much miss a single thing you mention, though they might currently say or think they would.
There’s a threshold where violence, disease and hunger stop being disastrous in our opinion (compare e.g. post-Soviet Eastern Europe to Africa), and that threshold, as we can see, doesn’t require brutal authoritarianism to maintain, or even to achieve. Poland has transitioned to a liberal democracy directly after the USSR fell, although its economy was in shambles (and it had little experience of liberalism and democracy before WW2), Turkey’s leadership became softer after Ataturk achieved his primary goals of modernization, etc, etc. There’s a difference between a country being a horrible hellhole and merely lagging behind in material characteristics; the latter is an acceptable cost for attempting liberal policies to me. I accept that the former might require harsh measures to overcome, but I’d rather see those measures taken by an internally liberal colonial power (like the British Empire) than a local regime.
The actual real people living there: suppose you could ask them, which do you think they would choose? And don’t forget those are mere stated preferences, not revealed ones.
If you planted Singapore on their borders wouldn’t they try to move there?
Sure, Singapore is much better than Africa; I never said otherwise! However, if given choice, the more intelligent Africans would probably be more attracted to a Western country, where their less tangible needs (like the need for warm fuzzies) would also be fulfilled. Not many Singaporeans probably would, but that’s because the Singaporean society does at least as much brainwashing as the Western one!
I don’t understand why you think “warm fuzzies” are in greater supply in London than in Singapore. They are both nice places to live, or can be, even in their intangibles. London-brainwashing is one way to inoculate yourself against Singapore-brainwashing, but perhaps there is another way?
Have you been to Singapore for any amount of time? I haven’t (my dad had, for a day or so, when he worked on a Soviet science vessel), but I trust Gibson and can sympathize with his viewpoint. At the very least I observe that it does NOT export culture or spread memes. These are not the signs of a vibrant and sophisticated community!
What could you mean by this that isn’t trivially false?
I haven’t read the Gibson article (but I will). I know that “disneyland” and “the death penalty” are both institutions that are despised by a certain cohort, but they are not universally despised and their admirers are not all warmfuzzophobic psychos. Artist-and-writer types don’t flock to Singapore, but they don’t flock to Peoria Illinois either do they?
Downvoted without hesitation.
If you have the unvoiced belief that cultural products (especially high-quality ones) and memes are created by some specific breed of “artist-and-writer types” (wearing scarves and being smug all the time, no doubt!), then I’d recommend purging it, seeing as it suggests a really narrow view of the world. A country can have a thriving culture not because artistic people “flock” there, but because they are born there, given an appropriate education and allowed to interact with their own roots and community!
By your logic, “artist-and-writer types” shouldn’t just not flock to, but actively flee the USSR/post-Soviet Russia. And indeed many artists did, but enough remained that most people on LW who are into literature or film can probably name a Russian author or Russian movie released in the last half-century. Same goes for India, Japan, even China and many other poor and/or un-Westernized places!
Notice how this more or less refutes the argument you tried to make in the grandparent.
I’m not making the argument that liberal democracy directly correlates to increasing the cultural value produced. Why else would I defend Iran in that particular regard? No, no, the object of my scorn is technocracy (at least, human technocracy) and I’m even willing to tolerate some barbarism rather than have it spread over the world.
What definition of technocracy are you using that excludes the USSR and India before its economic liberalization?
You seem to have read some hostility towards artists and writers into my comment, probably because of “types” and “flock”? These are just writing tics, I intended nothing pejorative.
I hold no such belief, and I’m glad you don’t either. I only want to emphasize my opinion that Singapore does have a thriving culture, even if it does not have a thriving literary or film industry. But since you admit you don’t know a lot about it I’m curious why you have so much scorn for the place? A city can have something to recommend itself even if it hasn’t produced a good author or a good movie.
In short, well, yeah, I hold more “formal” and “portable” culture such as literature, music or video to have relatively more value than the other types of “culture”, such as local customs and crafting practices and such—which I assume you meant by “thriving culture” here. All are necessary for harmonious development, but I’d say that e.g. a colorful basket-weaving tradition in one neighborhood which is experienced/participated in by the locals is not quite as important and good to have as an insightful and entertaining story, or a beautiful melody—the latter can still have relevance continents or centuries apart.
Some African tribe can also have a thriving culture like that, but others can’t experience it without being born there, it can be unsustainable in the face of technical progress, it can interfere with other things important for overall quality of life (trusting a shaman about medicine can be bad for your health), etc. Overall, you probably get what I’m talking about.
Sure, that’s biased and un-PC in a way, but that’s the way that I see the world.
(I don’t have any scorn for Singapore as a nation and a culture, I just don’t care much for a model of society imposed upon it by the national elites in the 20th century that, unlike broadly similar societies in e.g. Japan or even China, doesn’t seem to produce those things I value. Even if its GDP per capita is now 50% or so higher than somewhere else. Heck, even Iran—a theocracy that’s not well-off and behaves rather irrationally—has been producing acclaimed literature and films, despite censorship.)
It seems to me that if you are talking about artistic achievements that have stood the test of centuries, then you are talking almost exclusively about the west, which I agree is utterly dominant in cultural exports. What I have in mind when I say “Singapore culture is thriving” is that it’s a city filled with lovely people going about their business. You could appreciate Singapore culture because you find muslim businessmen or guest worker IT types agreeable—maybe you like their jokes. You could hate Singapore culture if you instead found muslim businessmen to be vacant and awful. But couldn’t we allow that the intelligent african that kicked the discussion off might have either taste? Then we should find out what his tastes are before recommending that he choose London over Singapore.
I read “Disneyland with the death penalty.” Gibson’s not a very good travel-writer, there’s hardly any indication in the article that he spoke to anyone while he was there.
You’re not being fair. Singaporeans would have surely produced something to your tastes, if there were a billion of them and their country were two thousand years old.
I would like to see comments on Gibson’s article from Singaporeans, including ex-pat Singaporeans.
Konkvistador’s point is that third world countries attempting to imitate western countries haven’t had much success.
When Turkey was modernizing it sure as heck was looking towards Europe for examples; it just didn’t implement democratic mechanisms straight away and restricted religious freedom. And if you look at Taiwan, Japan, Ghana, etc.: sure, they might be ruled by oligarchic clans in practice, but other than that [1] they have many more similarities than differences with today’s Western countries! Of course a straight-up copy-paste of institutions and such is bound to fail, but a transition with those institutions, etc., in mind as the preferred end state seems to work.
[1] Of course, Western countries are ruled by what began as oligarchic clans too, but they got advanced enough that there’s a difference. And, for good or ill, they are meritocratic.
I’m not familiar with Ghana, but both Japan and Taiwan had effectively one-party systems while modernizing.
I don’t care all that much about political democracy; what I meant is that Japan, India or, looking at the relative national conditions, even Turkey did NOT require some particular ruthlessness to modernize.
edit: derp
Could you explain the meaning of this sentence, please? I’m not sure I have grasped it correctly. To me it sounds like you are saying that there was no ruthlessness involved in Atatürk’s modernizing reforms. I assume that’s not the case, right?
Compared to China or Industrial Revolution-age Britain? Hell no, Ataturk pretty much had silk gloves on. At least, that’s what Wikipedia tells me. He didn’t purge political opponents except for one incident where they were about to assassinate him, he maintained a Western facade over his political maneuvering (taking pages from European liberal nationalism of the previous century), etc, etc.
To the extent that this is a discussion of quality of life and attractiveness of a country, as opposed to what is strictly speaking necessary for development, it’s worth remembering the Armenian genocide.
There’s no evidence that Ataturk was more complicit in that than, say, many respected public servants in 50s-60s Germany were complicit in the Holocaust. Nations just go insane sometimes, and taboos break down, and all that. It takes a hero to resist.
I feel pretty confident that Niall Ferguson, in his The War of the World, claims that Ataturk directly oversaw at least one massacre; I don’t have my copy on hand, however. Also, the Armenian National Institute claims that Ataturk was “the consummator of the Armenian Genocide.”
Also, Israel Charney (the founder of the International Association of Genocide Scholars) says:
Really, Ataturk was less harsh than Industrial Revolution-age Britain? I find this highly unlikely (unless you’re talking about their colonial practices, in which case the Armenian genocide is relevant). I think the reason you’re overestimating the relative harshness of Britain is that Britain had more freedom of speech than other industrializing nations, and thus its harshness (such as it was) is better documented.
http://en.wikipedia.org/wiki/Enclosure
http://en.wikipedia.org/wiki/Riot_Act
http://en.wikipedia.org/wiki/Peterloo_Massacre
http://en.wikipedia.org/wiki/Great_Famine_%28Ireland%29
http://en.wikipedia.org/wiki/Industrial_Revolution#Child_labour
http://en.wikipedia.org/wiki/Opposition_to_the_Poor_Law
http://www.victorianweb.org/history/workers1.html
http://www.victorianweb.org/history/workers2.html
(That’s just after a fifteen-minute search. By the way, haven’t you read Dickens? He gives quite a vivid contemporary account of social relations, although dramatized.)
Are you claiming that similar and worse things didn’t happen in Turkey?
Let me get this straight: you’re trying to argue that Britain was harsh because some people expressed opposition to a law you like?
Yes, that’s what I meant by Britain’s harshness (such as it was) being better documented thanks to its freedom of speech.
With the exception of the Armenian genocide (which is comparable in vileness to many things, including the actions of that wonder of private enterprise, the East India Company)—yes. Not during the late 19th and 20th century, I mean. Turkish landlords might’ve been feudal lords, but they didn’t outright steal the entirety of their tenants’ livelihood from under them.
The other way around! Many respected people hated and denounced it so much, it famously prompted Dickens to write Oliver Twist.
“The blogosphere overflows with Google Pundits; those who pooh-pooh, with a few search queries, an argument that runs counter to their own ideological assumptions, usually regarding a subject with which they possess only a passing familiarity.” It always gets my goat when the other guy does it.
I knew perfectly well about all of those except the Great Famine before searching, thank you very much! (I used to think there was only one Irish famine.) That’s why I felt confident in saying that 20th century Turkey was not as bad! “Fifteen-minute search” referred to a search for articles to show in support of my argument, not an emergency acquisition of knowledge for myself.
Taboo ‘ruthlessness’. For example Japan was certainly ruthless while modernizing by any reasonable definition.
It didn’t fully come into the “Universalist” sphere, ideologically and culturally, until its defeat in WW2, and the most aggressive and violent of its actions were committed in a struggle for expansion against Western dominance.
Konkvistador’s argument would be that it wouldn’t have been able to modernize nearly as effectively if it had come into the “Universalist” sphere before industrializing.
Maybe, I don’t know. On the other hand, maybe it would’ve avoided conquest and genocide if it had come into that sphere before industrializing.
Or maybe my premise above is wrong and its opening in the Meiji era did in fact count as contact with “Universalism”—note that America and Britain’s influence had been considerable there, and Moldbug certainly says that post-Civil War U.S. and post-Chartist Britain (well, he says post-1689, but the Chartist movement definitely was a victory for democracy[1]) were dominated by hardcore Protestant “Universalism”.
[1] Although its effects were delayed by some 20 years.
You seem to have an overly romantic view of criminals if you think they never kill with malice.
Heck, when the government doesn’t keep them in check, criminal gangs operate like mini-governments that are much worse in terms of warm fuzzies than even Singapore.
In the West they operate more or less like wild animals.
Um no.
Actually, Moldbug’s diagnosis does provide decent predictive power: in the West at least, Whig history shall continue. The left shall continue to win nearly all battles over what the stated values and norms of our society should be (at least outside the economic realm).
Naturally, Whig history makes the same prediction of itself, but the model it uses to explain itself seems built more for a moral universe than the one we inhabit. Not only that, I find the stated narrative of Whig history has some rather glaring flaws. MM’s theories win in my mind simply because they seem an explanation of comparable or lower complexity in which I so far haven’t found comparably problematic flaws.
Yes, and notice that unlike Mubarak and Gaddafi, who both (at least partially) cozied up to America, Assad is still in charge of Syria.
The prediction Moldbug made was “no civil war in Syria”; not that there would be a civil war but Assad would manage to endure it.
Indeed, in the post I link to, Mencius Moldbug seemed to be predicting that Qaddafi would endure the civil war too; Moldbug made that post at a point when the war was turning in Qaddafi’s favour, and he wrongly predicted that the West would not intervene to perform airstrikes.
So what exactly did he predict correctly?
Not proven. It seems to me that people wildly overdo even the prejudices they have evidence for, so we don’t know how much is lost due to excessive prejudice compared to how much is lost due to insufficient prejudice.
My impression is that we aren’t terribly good yet at understanding how traits which involve many genes play out, whether political correctness is involved or not.
Very true. I think most HBD proponents are somewhat overconfident of their conclusions (though most of them seem more likely than not). But what I think he was getting at is that we would have great difficulty acknowledging if it was so and that any scientist that wanted to study this is in a very rough spot.
Unlike, say, the promotion of the concept of human-caused climate change, which has the support of at least the educated classes, it may be impossible for our society to assimilate such information. It seems more likely that they would rather discredit genetics as a whole, or perhaps psychometry, or claim the scientists are faking this information because of nefarious motives. This suggests there exists a set of scientific knowledge that our society is unwilling or incapable of assimilating and using in a manner one would expect from a sane civilization.
We don’t know what we don’t know, we do know we simply refuse to know some things. How strong might our refusal be for some elements of the set? What if we end up killing our civilization because of such a failure? Or just waste lives?
I don’t know if you could get away with studying the sort of thing you’re describing if you framed it as “people who are good at IQ tests” or “people who have notable achievements”, rather than aiming directly at ethnic/racial differences. After all, the genes and environment are expressed in individuals.
It’s conceivable but unlikely that the human race is at risk because that one question isn’t addressed.
I think I didn’t do a good job of writing the previous post. I was trying to say that regardless what the truth is on that one question (and I am uncertain on it, more so than a few months ago), it demonstrates there are questions we as a society can’t deal with.
I wasn’t saying that not understanding the genetic basis of intelligence is a civilization killer (I didn’t mention species extinction, though that is possible as well); that in itself is plausible if the various people warning about dysgenics are correct. Rather, I was saying that future such questions may be.
I argued that since reality is entangled and our ideology has no consistent relationship with reality we will keep hitting on more and more questions of this kind (ones that our society can’t assimilate) and that knowing the answer to some such questions may turn out to be important for future survival.
A good hypothetical example is a very good theory on the sociology of groups or ethics that makes usable testable predictions, perhaps providing a new perspective on politics, religion and ideology or challenging our interpretation of history. It would be directly relevant to FAI yet it would make some predictions that people will refuse to believe because of tribal affiliation or because it is emotionally too straining.
Sorry—species extinction was my hallucination.
Dysgenics is an interesting question—what do we need to be adapting to?
I think this statement is too strong. Our ideology doesn’t have a 100% consistent relationship with reality, true, but that’s not the same as 0%.
What, sort of like Hari Seldon’s psychohistory? Regardless of whether our society can absorb it or not, is such a thing even possible? It may well be that group behavior is ultimately so chaotic that predicting it with that level of fidelity will always be computationally prohibitive (unless someone builds an Oracle AI, that is). I’m not claiming that this is the case (since I’m not a sociologist), but I do think you’re setting the bar rather high.
That hasn’t stopped us from doing incredible feats of artificial selection using phenotype alone. You can work faster and better the more you understand a system on the genetic level, but it’s hardly necessary.
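(For reference, the standard quantitative-genetics formula behind selection on phenotype alone is the breeder’s equation; I’m adding it here as background, it isn’t from the comment above:

    R = h^2 * S

where R is the response to selection, h^2 is the narrow-sense heritability of the trait, and S is the selection differential, i.e., how far the selected parents’ mean deviates from the population mean. Nothing in it requires identifying any genes; h^2 can be estimated purely from the phenotypic resemblance between relatives.)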
I agree, and have for some time; I didn’t mean to imply otherwise. This especially is, I think, terribly important:
But currently there is nothing remotely approaching an actionable political plan, so I advocated doing what little good one can despite Cryptocalvinism’s iron grasp on the minds of a large fraction of mankind. As Moldbug says Universalism has no consistent relation to reality. A truly horrifying description of reality if it is accurate, since existential risk reduction eventually will become entangled with some ideologically charged issue or taboo.
I wish I could be hopeful but my best estimate is that humanity is facing a no win scenario here.
Another thing I’d like to ask you! What are those bad things in your estimate? Or, rather, what areas are we talking about? Are you mainly concerned with censorship, academic dishonesty, bad prediction-making, other theory-related flaws? Or do you find some concrete policy really awful for those epistemic reasons, like state welfare programs, ideological pressure on illiberal regimes or immigration from poor countries? (I chose those examples because I’m in favor of all three, with caveats.)
I know you’re against universal suffrage, but that’s more or less meta-level; is there something you really loathe that directly concerns daily life, its quality, comfort and freedoms? Of course, I know about the policy preferences Mencius himself draws from his doctrine, but his beliefs are… idiosyncratic: e.g. I don’t think you’d agree with him that selling oneself and one’s future children into slavery should be at all acceptable or tolerated.
That’s more than I’ve managed to get from my reading of him. I get no picture from his writings about what he wants life to be like—“daily life, its quality, comfort and freedoms”—under his preferred regime, only about what he doesn’t want life to be like under the current regimes.
True, it’s in bits and pieces; but see e.g. the Patchwork series and try some other posts at random.
Basically, a good example of his preferences is the “total power, no influence or propaganda” model of Patchwork; in his own words, the Sovereign’s government wouldn’t censor dissenters because it has nothing to fear from them. Sure, I strongly doubt it would work that way, even with a perfectly rational sovereign (the blog post linked to above provides some decent criticism of that from an anarchist POV). But we nonetheless can conclude that MM would like a comfortable, rich society with liberal mores (although he does all the conservative elderly grumbling about the supposed irresponsibility and flighty behavior of Westerners today [1]) where he wouldn’t ever have to worry about tribal power games or such—enforced with an iron fist, for selfish reasons of productivity and public image, and totally un-hypocritical about that.
He’s okay with some redistribution of wealth (the sovereign giving money to private charities it finds worthy, which, being driven mainly by altruism, automatically care for everyone better than a disinterested bureaucracy—again, I’m a little skeptical).
Another thing he likes to say is that the capacity for violence within society should be supremely concentrated and overwhelming, and then the rational government supposedly wouldn’t have to actually use it.
And then there are the totally contrarian things like his tolerance for indentured servitude on ideological grounds (look up his posts on “pronomianism”), which, along with his less disagreeable opinions, could well stem from his non-neurotypical (I take Konkvistador’s word, and my impressions) wiring.
[1] When he repeats some trite age-old bullshit about “declining personal morality”—while cheering for no-holds-barred ruthless utilitarianism—that’s when I tolerate him least.
There’s an important question here: WHY do you think people dislike that so much that they’re willing to subvert entire fields of knowledge to censor those inquiries? Please ponder that carefully and answer without any mind-killed screeds, ok?
(I’m not accusing you in advance, it’s just that I’ve read about enough such hostile denunciations from the “Internet right” who literally say that “Universalists/The Left/whoever” simply Hate Truth and like to screw with decent society. Oh, and the “Men’s Rights” crowd often suggests that those who fear inequality like that just exhibit pathetic weak woman-like thinking that mirrors their despicable lack of masculinity in other areas. And Cthulhu help you if you are actually a woman who thinks like that! Damn, I can’t stand those dickheads.)
Of course, I’d like others here to also provide their perspective on probable reasons for such behavior! Don’t pull any punches; if it just overwhelmingly looks like people with my beliefs are underdeveloped mentally and somewhat insane, I’ll swallow that—but avoid pettiness, please.
After reading that sentence, I expected some rather radical eugenics advocacy. Then I followed that link and saw that all those suggestions (except maybe for cloning, but we can hardly know about that in advance) are really “nice” and inoffensive. Seriously, I think that if even I—who’s pretty damn orthodox and brainwashed, a dyed-in-the-wool leftist as it were—haven’t felt a twinge, then you must be overestimating how superstitious and barbaric an educated Universalist is in regards to that problem.
-- Lion Kimbro, “The Anarchist’s Principle”
Forgive my stupidity, but I’m not sure I get this one. Should I read it as “[...] it’s probably for the same reasons you haven’t done it yourself.”?
I think it just means “you should do it”, which is only sometimes the appropriate response.
Both sound quite appropriate; it seems likely that in the process of attempting to do some crazy awesome thing, you will run into the exact reasons why nobody has done it before; either you’ll find out why it wasn’t actually a good idea, or you’ll do something awesome.
But there must be better ways to find out the reasons not to do it. Just doing it instead is a tremendous waste of time.
Talking to the sorts of people who would or should have tried already might be one avenue.
That’s obviously true, yeah. But if it’s cool enough that you’d consider doing it, and you actually, as the quote implies, cannot understand why nobody has attempted it despite having done initial research, then you may be better off preparing to try it yourself rather than doing more research to try and find someone else who didn’t quite do it before. Not all avenues of research are fruitful, and it might actually be better to go ahead and try than to expend a bunch of effort trying to dig up someone else’s failure.
Pardon my ignorance, but could anyone explain this quote or give an example where it applies? Also, how is it related to anarchism? I don’t get it...
Albus Dumbledore
Sometimes I check the original and am surprised by how little I actually diverged from Rowling’s Dumbledore.
It took MatthewBaker’s reply to make me realize you were talking about your character and not yourself.
PHOENIX’S FATE was something I don’t think Rowling’s Dumbledore could have done, but up until Dumbledore lost the idiot ball in recent chapters I fully agree with you :)
How a game theorist buys a car (on the phone with the dealer):
From The Predictioneer’s Game, page 7.
Other car-buying tips from Bueno de Mesquita, in case you’re about to buy a car:
Figure out exactly what car you want to buy by searching online before making any contact with dealerships.
Don’t be afraid to purchase a car from a distant dealership—the manufacturer provides the warranty, not the dealer.
Be sure to tell each dealer you will be sharing the price they quote you with subsequent dealers.
Don’t take shit from dealers who tell you “you can’t buy a car over the phone” or do anything other than give you their number. If a dealer is stonewalling, make it quite clear that you’re willing to get what you want elsewhere.
Arrive at the lowest-price dealer just before 5:00 PM to close the deal. In the unlikely event that the dealer changes their terms, go for the next best price.
From my limited experience with buying cars, as well as from theoretical considerations, this won’t work because you lack the pre-commitment to buy at the price offered. Once they give you a favorable price, you can try to push it even further downwards, possibly by continuing to play the dealerships against each other. So they’ll be afraid to offer anything really favorable. (The market for new cars is a confusopoly based on concealing the information about the dealers’ exact profit margins for particular car models, which is surprisingly well-guarded insider knowledge. So once you know that a certain price is still profitable for them, it can only be a downward ratchet.)
The problem can be solved by making the process double-blind, i.e. by sending the message anonymously through a credible middleman, who communicates back anonymous offers from all dealers. (The identities of each party are revealed to the other only if the offer is accepted and an advance paid.) Interestingly, in Canada, someone has actually tried to commercialize this idea and opened a website that offers the service for $50 or so (unhaggle.com); I don’t know if something similar exists in the U.S. or other countries. (They don’t do any sort of bargaining, brokering, deal-hunting, etc. on your behalf—just the service of double-anonymous communication, along with signaling that your interest is serious because you’ve paid their fee.) From my limited observations, it works pretty well.
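To make the commitment problem concrete, here is a toy Python simulation of the quote-sharing routine (entirely my own sketch, not anything from the book or the website; the floor prices, opening markup, and fixed undercutting step are all invented for illustration):

    import random

    def phone_auction(floors, opening_markup=2000, step=100):
        """Toy model of phoning dealers and sharing the best quote so far.

        Each dealer undercuts the standing best quote by `step` whenever
        doing so still leaves them at or above their own floor price; the
        dealer currently holding the best quote never undercuts himself.
        """
        holder, best = 0, floors[0] + opening_markup  # first dealer anchors high
        improved = True
        while improved:
            improved = False
            for i, floor in enumerate(floors):
                if i != holder and best - step >= floor:
                    holder, best = i, best - step
                    improved = True
        return best

    random.seed(0)
    floors = [random.randrange(24000, 27000) for _ in range(5)]
    print(sorted(floors), phone_auction(floors))

The price grinds down to roughly the second-lowest floor: the cheapest dealer only ever has to beat the runner-up, never reveal his true cost. That is also part of why, as noted above, dealers who understand the game are reluctant to make a really favorable offer to a buyer who hasn’t committed to buying.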
I take it he does not discuss whether he actually ever did that.
He further claims to have once saved $1,200 over the price quoted on the Internet for a car he negotiated for his daughter, who was 3000 miles away at the time.
Apparently being a game theory expert does not prevent one from being a badass negotiator.
Why did you guess otherwise?
Typically people describing clever complex schemes involving interacting with many other people do not actually do them. Mesquita has previously tripped some flags for me (publishing few of his predictions), so I had no reason to give him special benefit of the doubt.
Maybe many of his predictions are classified because they are for the government?
“I’d love to tell you, but then I’d have to kill you...”
Theoretically could you make an approximation of his accuracy by looking at fluctuations in death rates among relevant demographics?
Even theoretically, you would then need perfect information about every single other factor influencing relevant-demographics death rates, assuming you somehow magically know the exact relevant demographics. If there is even one other factor that is uncertain, you end up having to increase your approximation’s margin of error in proportion to the uncertainty, and each missing data point multiplies the margin further. Eventually, it’s much smarter to realize that you don’t have a clue.
Now, take into account that you don’t even know all of the factors, and that it’s pretty much impossible to prove that you know all of the factors even if by some unknown feat you managed to figure out all possible factors… quickly the problem becomes far beyond what can be calculated with our puny mathematics, let alone by a human. Of course, if you still just want an approximation after doing all of that, it may become possible to obtain an accurate one, but I’m not even sure of that.
Thanks for the food for thought, though.
What does he mean by “price quoted on the Internet”? If it’s the manufacturer’s suggested retail price, then depending on the car model and various other factors, saving $1,200 over this price sounds unremarkable at best, and a badly losing proposition at worst. If it was the first price quoted by the dealer, it could be even worse—at least where I live, dealers will often start with some ridiculous quote that’s even higher than the MSRP.
Having bought/leased a few new and used cars over the years, I immediately think of a number of issues with this, mainly that it trips dealers’ “we don’t do it this way, so we would rather not deal with you at all” defense. This severely reduces the number of dealers willing to engage. It’s probably still OK in a big city, but not where there are only 2 or 3 dealerships of each kind around. There are other issues, as well:
Bypassing the salesperson and getting to talk to the manager directly is not easy, as it upsets their internal balance of fairness. The difference is several hundred dollars.
The exact model may not be available unless it’s common, and the wait time might be more than you are prepared to handle. Though the dealers do share the inventory and exchange cars, they are less likely to bother if they know that the other place will get the same request.
They are not likely to give you the best deal possible, because they are not invested in the sale (use sunk cost to your advantage).
They are not likely to believe that you will do as you say, because why should they? There is nothing for you to lose by changing your mind. In fact, once you have all the offers, you ought to first consider what to do next, not blindly follow through on the promise.
This approach, while seemingly neutral, comes across as hostile, because it’s so impersonal. This has extra cost in human interactions.
“Searching online” is no substitute for kicking the tires for most people. The last two cars I leased I found on dealers’ lots after driving around (way after I researched the hell out of it online), and they were not the ones I thought I would get.
And the last one: were this so easy, the various online car-selling outfits, like Autobytel, would do much better.
So, while this strategy is possibly better than the default of driving around the lots and talking to the salespeople, it is far from the best way to buy a car.
-Seneca
In this case, isn’t it equally true that no wind is unfavourable?
“The Way is easy for those who have no utility function.” -- Marcello Herreshoff
Not sure, this came up in a few previous conversations. If an agent is almost certain that it’s completely indifferent to everything, the most important thing it could do is to pursue the possibility that it’s not indifferent to something, that is to work primarily on figuring out its preference on the off chance that its current estimate might turn out to be wrong. So it still takes over the universe and builds complicated machines (assuming it has enough heuristics to carry out this line of reasoning).
Say, “Maybe 1957 is prime after all, and hardware used previously to conclude that it’s not was corrupted,” which is followed by a sequence of experiments that test the properties of preceding experiments in more and more detail, and then those experiments are investigated in turn, and so on and so forth, to the end of time.
If someone didn’t value any world-states more than any others, I’m not sure that a Way would actually exist for them, as they could do nothing to increase the expected utility of future world-states. Thus, it doesn’t seem to really make sense to speak of such a Way being easy or hard for them.
Am I missing something?
I think you’re overanalyzing here; the quote is meant to be absurd.
Whaaa?
Someone explain please. It didn’t seem absurd when I read it.
If you don’t want anything, it’s very easy to get what you want.
However, everyone reading this post is a human, and therefore is almost certain to want many things: to breathe, to eat, to sleep in a comfortable place, to have companionship, the list goes on.
I interpreted it similarly to part of this article:
Since you said the quote itself was absurd I thought you were saying the post was an internally flawed strawman meant for the purpose of satire, but you meant something else by that word.
I’m the one who said that. Just to make it clear, I do agree with your first comment: taken literally, the quote doesn’t make sense. Do you get it better if I say: “It is easy to achieve your goals if you have no goals”? I concede absurd was possibly a bit too strong here.
Okay, that makes more sense. Yeah, I see what you mean and agree.
That depends on whether your goal is to travel or to arrive.
I am reminded of an exchange between Alice and the Cheshire Cat.
–Lewis Carroll
Of course, this requires that the Cat either is being difficult, or doesn’t understand the word “much”.
Which applies to the first quote too: if your destination is not limited to a single possible port, but it is limited to something narrower than “anywhere at all”, then bad winds can in fact exist. (Applying this insight to the metaphorical content of that statement is an exercise for the reader.)
I don’t see how this criticism applies to the original quote.
(And yes, the Cheshire Cat’s entire schtick is being difficult.)
Even if you don’t know which port you’re going to, a wind that blows you to some port is more favorable than a wind that blows you out towards the middle of the ocean.
That’s only true if you prefer ports reached sooner or ports on this side of the ocean.
It is possible that you don’t know which port you’re sailing to because you have ruled out some possible destinations, but there is still more than one possible destination remaining. If so, it’s certainly possible that a wind could push you away from all the good destinations and towards the bad destinations. (It is also possible that a wind could push you towards one of the destinations on the fringe, which pushes you farther from your destination based on a weighted average of distances to the possible destinations, even though it is possible that the wind is helping you.)
(Consider how the metaphor works with sailing=search for truth, port=ultimate truth, and bad wind=irrationality. It becomes a way to justify irrationality.)
The difference between “no knowledge about your destination whatsoever” and “not knowing your destination” is the difference between “I don’t care where I’m going” and “I don’t much care where I’m going” in the Cheshire Cat’s version.
Inspired by maia’s post:
“When life gives you lemons, don’t make lemonade. Make life take the lemons back! Get mad! I don’t want your damn lemons, what the hell am I supposed to do with these? Demand to see life’s manager! Make life rue the day it thought it could give Cave Johnson lemons! Do you know who I am? I’m the man who’s gonna burn your house down! With the lemons! I’m gonna get my engineers to invent a combustible lemon that burns your house down!”
---Cave Johnson, Portal 2
— Steven Kaas
When life gives you lemons, order miracle berries.
Calvin, Calvin and Hobbes
When life gives you lemons, lemon cannon.
“He says what we’re all thinking!”
---GlaDOS, Portal 2, in response to above quote
I like lemons...
When life gives you lemons, be sure to say thank you politely.
Huh, I scrolled past this and read Nisan’s post first, by the time I got any further this was already running through my head.
Not so sure that this is a good rationalist quote though.
-Ralph Waldo Emerson, probably not apocryphal (at first, this comment said “possibly apocryphal since I can’t find it anywhere except collections of quotes”)
It’s in WikiQuotes.
Which is a collection of quotes!
One that anyone can edit!(!)
But it gives a source!
One that anyone can check!
THE SOURCE.
(Just going to note that I wholly disapprove of this line of conversation.)
It is not as though I did not try to find a source, damnit. Though on closer inspection I see it highlights some invisible text, so that counts as good evidence it’s real.
The full entry for November 8th is shown on pages 120-123 here. The real entry is much longer than that small excerpt would suggest.
Edit: But the quote is there alright. Clear as day (page 123).
“It is indeed true that he [Hume] claims that ‘reason is, and ought only to be the slave of the passions.’ But a slave, it should not be forgotten, does virtually all the work.”
-Alan Carter, Pluralism and Projectivism
http://www.smh.com.au/business/clive-palmer-plans-to-build-titanic-ii-20120430-1xtrc.html
xkcd
Because instead of pissing them off you get to terrify them?
-David Wong, 5 Ways to Spot a B.S. Political Story in Under 10 Seconds
I am consistently impressed by the quality of the writing that comes out of Cracked, especially relative to what one might expect given its appearance.
If “impact on your life” is the relevant criterion, then it seems to me Wong should be focusing on the broader mistake of watching the news in the first place. If the average American spent ten minutes caring about e.g. the Trayvon Martin case, then by my calculations that represents roughly a hundred lifetimes lost.
You have a funny definition of “lost”. By that measure, JRR Tolkien is worse than a mass-murderer.
-Douglas Hofstadter (posted with gwern’s “permission”)
It bothers me how there are no replies to this quote that aren’t replies to gwern’s prediction comment.
He walked along the trail with all the other workers. They had toiled all day in the field, and now were heading back to join the rest just over the hill. His kind had lived and worked this land for over a thousand years. They are the hardest workers anyone has ever known. They were all tired and hungry, and it was quiet as they mindlessly shuffled down the trail. He had walked this way many times before, as they all had, without a single thought about the individual sacrifice each has made for the collective. This is the way it has always been. His large strong body moved forward with no thought about what tomorrow would bring. In fact, he didn’t think anything at all. None of them did.
Suddenly a bright white intensely hot beam of light shot out of the sky. His legs curled up underneath him as he collapsed, instantly dead. His insides were cooked and a single puff of smoke rose from his body with a pop. “Time to eat,” Jimmy’s mother called from the back porch. Jimmy put his magnifying glass in his pocket and muttered under his breath, “Stupid ants.” — End of the Trail, Monkeymind
I merely said
If anyone was wondering. (So far my prediction is right...)
I’m downvoting the whole karma-discussion, because it’s effectively karma-wanking spam that abuses the karma-system, and distorts what actual value karma has in estimating the value of any given quote.
Keep this crap to predictionbook.
I think that Gwern’s comment, being primarily a clarification of “(posted with gwern’s ‘permission’)”, could be interpreted more charitably. I agree about the responding karma predictions though.
That was indeed what I meant. In retrospect, I probably should have omitted the parenthetical remark entirely.
My comment above was intended to do just that.
I’m predicting 10<=x<=30 (it’s currently at 7).
And just how much is this line of discussion going to change the karma amount? I’m expecting it to go higher than any (reasonable) estimate, just because I expect LWers to want to screw with people.
Perhaps not, now that you’ve said this.
Does TDT account for agents that deliberately try to go against your expectations?
How high is “reasonable”?
EDIT: The reason I ask is so that I can add it as a prediction statement on PredictionBook.
I’m expecting that people are currently looking at the current balance of 22, seeing that faul_sname has predicted [10, 30], and will upvote to try to get it out of that range. Which is a good thing for Grognor. But if you want me to pick a “reasonable” estimate, the same process will repeat itself, using whatever value I give. So I need to pick a value that’s high enough that I don’t think people will even try to reach it.
3^^^3 ;)
What if I predicted that the karma was going to end up even?
Edit: Or better, that it was going to end in a seven?
What’s the last digit (base 10) of 3^^^3, anyway?
7. See here
(EDIT: apparently it’s no longer possible to link to sections of Wikipedia articles using #. Above link is meant to point to the section of the article entitled “Rightmost decimal digits...”)
URL encode the apostrophe, and it works.
I haven’t studied number theory, but I expect that someone who has would be able to answer this. Successive powers of three have final digits in the repeating pattern 1, 3, 9, 7, so if we can find N mod 4 for the N such that 3^N = 3^^^3, then we would have our answer.
3^odd = 3 mod 4
so it ends in 7.
(but I repeat myself)
I think you’re mistaken. Counterexample: 3^9 = 19683.
19683 = 3 mod 4
and 3^19683 = 150 … 859227, which ends in 7.
( The full number is 9392 digits long, which messes up the spacing in these comments. )
Oh, sorry; I agree that odd powers of three are 3 mod 4, but I had read VKS as claiming that odd powers of three had a final digit of seven; I probably misunderstood the argument. [EDIT: Yes, I was confused; I understand now.]
right, well, it’s just that 3^^^3 = 3^3^3^3^3...3^3^3 = 3^(3^3^3^3...3^3^3), for a certain number of threes. So, 3^^^3 is 3^(some odd power of three).
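For anyone who wants to check the modular arithmetic mechanically, here is a small Python sketch of the argument above (the function name is mine):

    # The last decimal digit of 3**n depends only on n mod 4:
    assert [pow(3, k, 10) for k in range(1, 9)] == [3, 9, 7, 1, 3, 9, 7, 1]

    def last_digit_of_power_of_3(n):
        """Last decimal digit of 3**n for n >= 1, via the period-4 cycle."""
        return pow(3, n % 4 if n % 4 else 4, 10)

    # Every power of three is odd, and 3 == -1 (mod 4), so every odd
    # power of three is 3 (mod 4). The exponent in 3^^^3 is such a power.
    assert pow(3, 19683, 4) == 3
    print(last_digit_of_power_of_3(19683))  # 7, matching 3^19683 = 150...859227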
Yes, thanks; I apologize for having misunderstood you earlier.
That is entirely ok—I am badly in need of sleep and may have failed to optimise my messages for legibility.
Nope
no, 7
(see other comment)
I would guess that Randaly meant to cheekily respond to
in place of my actual question.
Nope, I actually was wrong (thanks for the charitable interpretation, though!) (I cubed it an extra time)
That was my initial reflex desire, but then I thought about it and decided not to.
This happened before you made the parent of this comment.
People, it makes no sense to karma punish me for:
Suppressing inappropriate preferences.
Giving the above commenter the type of information that the commenter was wondering about.
Giving people reasons that their karma punishments are unwarranted.
Using the word dumb to describe irrational karma distributions.
Modifying my feedback in response to further displays of irrational behavior.
Not responding to karma incentives in the way you would like me to.
Not taking any of this seriously at all.
Don’t be dumb.
Karma isn’t (necessarily) about punishment. Downvotes often just mean “I’d prefer to see fewer comments like this.”
Either way all of my objections apply, this isn’t really relevant to what I was contending.
Also, at least one person gave my initial comment +karma while presumably downvoting the other one, I want to mention that I think that kinda sorta makes sense and appreciate that nuanced view more than the view of people who for reasons unknown dislike feedback about feedback.
If someone would give a reason they dislike feedback about feedback I would feel better. It feels vindictive.
Posts that are solely about karma tend to get downvoted by me, because I want fewer posts that are solely about karma.
Lolz ironic downvoting on your comment.
I disagree with you, I think giving feedback about received feedback makes sense.
Edit: The −3 really just goes to prove my point people, don’t you think? I was making a valid point here.
Saying this does not go a long way towards proving that it is true.
Doesn’t it intuitively make sense that feedback about feedback is good for the same reasons that feedback is good? If my intuitions are bad, the least someone could do is offer an argument to prove the flaws of my intuition. I could have clarified this, I guess, but I felt no real reason to do so given the stunning absence of actual substantive criticisms of what I was doing.
People weren’t responding rationally to my comments, so I pointed out that those people were being dumb. That seems like something that is okay, and like something that might improve feedback mechanisms and which should thus be praised rather than downvoted. ArisKatsaris’ one-sentence statement about her karma habits didn’t have any justifications behind it, so it didn’t deserve a detailed and warranted response. I did describe in general terms the substance of my objection; that’s enough in the absence of warranted counterarguments.
All of the above listed reasons seem like valid arguments to me, if they’re flawed I would like to know. But I would like actual reasons, not just vague statements that appeal to unjustified personal preferences.
Listen to yourself.
I’m not interested in having arguments with you; you don’t make that look like a remotely productive use of my time. I’m trying to point out the things you are saying that sound juvenile and cause people to downvote you; it seems to bother you, so maybe if you can figure out the pattern, you will stop saying those things.
Announcing that you are making a valid point does not add anything to a point, however valid it may or may not be.
Declaring unilaterally without support that people weren’t responding “rationally”, and then pointing out that this makes them “dumb”, is not any kind of worthwhile behavior.
I am not upset by receiving negative reputation in itself. I am annoyed that people are not giving justifications for their negative reputations, and I was also trying to give reasons that their negative reputations were unjustified. I don’t even know that I’m annoyed so much as I’m trying to point out the flawed behavior on this site so that third parties or intelligent but silent viewers within the community are aware of the danger.
Give a reason that my overall position or the above list is logically flawed, please. Or shut up.
Please go away.
EDIT: Going to turn this into a poll. Permalink to karma sink if it drops below threshold.
Vote this comment up if you think chaosmosis is annoying enough that future chaosmosis comments should be banned.
Vote this comment up if you do not think chaosmosis’s future comments should be banned.
Chaosmosis has had a significant number of upvoted comments. Some of his conduct has been very obnoxious and counterproductive, but I don’t think he’s reached a point where it’s reasonable to write him off as unable to learn from his mistakes. At the least, I think his continued presence is more likely to be fruitful than a couple of other recently active members whose contributions have been uniformly downvoted.
Karma sink. You’re all irrational and dumb, shut up!
Point of clarification: does banning a user on LW do anything but force them to create a new user account if they wish to keep contributing?
I have been using Wei_Dai’s awesome greasemonkey script for a while now to filter out some of the users I find valueless, so having them create multiple usernames to dodge the banhammer would be a mild nuisance for me.
So if that’s all it does, I’m somewhat opposed to it, but willing to remain neutral for the sake of not inconveniencing other people who don’t use that script for whatever reason.
OTOH, the responses to those users are themselves mildly annoying, so if the banhammer does something more worthwhile than that, then I might be in favor of it.
What about giving users the ability to apply a penalty to the score of posts from people they find uninteresting or aggravating, for the purpose of determining whether comments are hidden for that user? It could be inherited for one comment, or over the entire subtree, or perhaps decay according to some function.
This would, in general, hide comments from those users you object to as well as responses to them. The primary advantage it would have over outright blocks is that it would allow more space for someone to redeem themselves, and would let you catch interesting things in the responses when they do arise. A comment at +22 is likely interesting regardless of who posted it, and if you’ve seen some interesting posts from someone you’ve previously downgraded, you’ll probably think about relaxing that.
Edited to add: Note that if there seems to be a consensus on this, I’m willing to do the coding required.
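For what it’s worth, here is a rough Python sketch of one way the decaying subtree penalty could work (all the names and the geometric-decay rule are my own guesses at the proposal, not an existing LW feature):

    from dataclasses import dataclass, field

    @dataclass
    class Comment:
        author: str
        score: int                        # ordinary karma total
        replies: list = field(default_factory=list)

    def effective_scores(comment, penalties, inherited=0.0, decay=0.5):
        """Yield (comment, score used for the hide threshold) over a thread.

        A per-user penalty applies in full to that user's own comments and
        fades geometrically down the reply subtree beneath them, so replies
        to a muted user are de-emphasized rather than hidden outright.
        """
        carried = max(penalties.get(comment.author, 0.0), inherited)
        yield comment, comment.score - carried
        for reply in comment.replies:
            yield from effective_scores(reply, penalties, carried * decay, decay)

    # A +22 comment still clears a typical hide threshold even under a
    # heavy penalty, matching the intuition above.
    thread = Comment("muted_user", 22, [Comment("someone_else", 3)])
    for c, s in effective_scores(thread, {"muted_user": 10.0}):
        print(c.author, s)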
I can’t ban users; I can ban comments. This makes them inaccessible to nonmod people. Creating a new username would only work against this for as long as it took to identify the new one as the same person.
Ah, gotcha.
Is there a certain threshold that once passed, personal karma totals no longer matter?
Eh? You only need 20 to post in Main.
I think negative karma throttles posting rate, but I don’t know the formula for it.
I feel as though your comments are now solely directed towards the purpose of gaining reputation.
I am convinced that you are wrong in point of fact. I am telling you this so you can adjust your feelings to fit reality. As long as you feel Alicorn or others are out to get you, or profit by hurting you, you will probably not be able to make a useful contribution to discussions or enjoy them yourself.
From my experience as a longtime reader of the site I can tell you that reputation on LW is not normally gained by attacking anyone, even if everyone else agrees in disliking the target or their comments. We have community values of responding to the factual content of each comment, with clear literal meaning, and without covert signalling. Reputation is gained by contributing to the conversation.
We also require civility, and since people are often bad at predicting how others will react to borderline comments like “dumb”, it’s best to be what may seem to you to be extra-civil just to avoid conversational traps. You have edited the comment that seemed to start this whole subthread, and I haven’t seen the original, so can’t comment more specifically.
That would have been more believable if you were complaining about downvotes that other people received, instead of the downvotes that you receive.
It would also have been more believable if you had also complained about the upvotes you received without justification, instead of only the downvotes you received without justification.
Edited to add: Here’s my impression of you: You are very strongly biased against all negative feedback you receive, whether silent downvotes or explicit criticism, and you’re therefore not the best person to criticize it in turn. E.g., in the first thread I encountered you in, you repeatedly called me a liar when I criticized a post without downvoting it. You couldn’t believe me when I told you I didn’t downvote it.
You are BAD at this. You are BAD at receiving negative feedback. Therefore you are BAD at criticizing it in turn. If you want to give feedback on negative feedback, then make sure said “negative feedback” wasn’t originally directed at you. Try to criticize the feedback given to other people instead—you might be better suited to evaluate that.
People might be downvoting for any number of reasons.
I spot the following, as potential downvote-triggers for various demographics:
1) “Lolz” 2) “ironic” 3) downvoting 4) disagreement without explanation
4 doesn’t match the data, because no one who has disagreed with the above listed points has given a reason that they disagree with them. There have been no arguments made against my posts, just (1, 2, and 3) statements about aesthetic preferences. IMHO all y’all have really bad aesthetic preferences.
Just because I don’t speak in a pretentious tone doesn’t mean that I can’t make valid points. I get kind of sick of all of the LessWrong commenters sounding alike in tone so I intentionally try to diversify things. Diverse forms of discussion seem more likely to produce diverse forms of thought [insert generic Orwell reference here]. Informal tones are also more conducive to casual communication which takes less time to articulate. Formalism in everyday life is stupid.
Judge the accuracy of the information I provide, please, not the tone which I choose to provide it in. Arguing shouldn’t have to be so formal and should never preclude major lulz whenever major lulz can be achieved. Anyone who acts as though something else is true should provide warranted reasons for doing so or else should be considered a major n00b.
I’d appreciate an explanation of why criticism of unwarranted negative feedback justifies more negative feedback.
Anyone up for it?
I currently feel that people just irrationally lash out at criticisms or statements which come close to suggesting criticism of the people controlling the karma. I currently think the commenters identify with each other to the extent that criticizing one of their actions draws them all in to attack, and I also don’t think that’s a healthy thing for a website to do.
There’s probably some law of diminishing returns, where commenting on something = (A1) utility, and commenting on how people comment on something = (A2) utility, and commenting on how people vote = (A3) utility, and you commenting on how people vote on you commenting on how people vote = (Z) utility, where it tends to go A1>A2>A3>...>Z, and probably Z is deep in the negatives.
You should also distinguish between types of feedback: receiving downvotes for an unknown reason may be frustrating to you, but it doesn’t clutter up the threads. You complaining about every time you get downvoted does clutter up the threads. It’s not the same type of “negative feedback”.
In short: obsess less about karma.
There are no “laws” which take over your behavior and force you to respond to my comments in a certain way. If people don’t like to see useful comments, and consider those useful comments to be clutter, then their interpretation of what clutter is is wrong and should be corrected to maximize feedback efficiency. Giving feedback on feedback makes sense.
You aren’t saying anything new: you think your words useful (of positive utility), but the people who downvote them obviously don’t think so—they consider them of zero or negative utility, and they vote accordingly.
Yes, and I was wondering if someone would give an argument justifying that behavior or that utility assessment rather than simply taking that behavior as a given “law”.
I would like warranted arguments as to what was wrong with my comments. I am not asking for arguments that explain the response of the commenters or even one that mentions the reasons they have for downvoting, I’m asking for arguments that justify that behavior and that warrant those reasons. Without those arguments the community appears to be acting very irrationally on a fairly wide scale, which is concerning. No one has provided those arguments as of yet, so I am concerned.
Some people seem to be confusing my complaints with “I don’t want to receive bad karma” and then they advise me on ways to make other people like my comments better. But that is not my complaint or my goal, my complaint is that I am receiving bad karma for no good reason and my goal is to get people to recognize this. I’m not really interested in being popular on this site, I’m interested in pointing out the lack of justification for my unpopularity, thus drawing attention to the possibly dangerous implications that this has for the community. The confusion between the two goals is natural but it is entirely mistaken.
Proving that someone didn’t have a good reason for doing something is a lot harder than saying that it’s so. If you want to get fewer downvotes, there are ways you can do that; if you want to avoid changing your posting style, you could also do that. You cannot do both. Life presents choices.
I’m not trying to prove that, I’m trying to get a few people to think that it is probably the case, there’s a slight difference. I think that the irrationality of the commenters is the most logical and the simplest way for anyone to make sense of the reputation patterns I’ve seen so far despite the lack of good warranted criticism.
It’s not as though irrationality is incredibly rare or that I should have low priors on the probability that humans use social signalling mechanisms in an irrational manner, after all. The fact that out of all the people here none of them have conceded that they are irrational would actually seem to lend a bit more credence to my belief than it already has.
I agree that I cannot do both. However, I anticipate downvotes even if I were to conclusively prove my argument with all the might and power that Science and Bayes have to offer the universe. If my argument is correct and the commenters respond irrationally to criticism, then of course I should anticipate downvotes despite the accuracy of my criticism. That’s kind of the entire point.
’round these parts, the way to persuade people of controversial points is proof, not just assertion. So far you haven’t offered anything but big talk.
You’ve got this theory that makes a certain prediction, namely that your posts will be downvoted because everyone but you is an idiot. There is a competing theory which makes the same prediction, namely that your posts will be downvoted because they lack productive content. In order for your theory to beat out the competition, you’ll need to find some point where the predictions differ, and then demonstrate that yours is more accurate.
Surely, if we are such fools, and you understand the irrationalities involved so well, you could compose a post which manipulates those corrupt thought-structures into providing you with upvotes?
This is a little ridiculous. The reason you were downvoted is that someone didn’t like your post. The reason all of the rest of your comments are being downvoted is that people don’t like to be questioned. And there’s some bandwagon effect in there somewhere. I’ve never got people to explain anything like this (edit: by this method of trying to get an explanation). Maybe you are particularly good at it in real life thanks to body language or something, but just in text there’s no way you’re going to get people to explain themselves this way.
also this sort of thing:
People, it makes no sense to karma punish me for: Giving people reasons that their karma punishments are unwarranted. Using the word dumb to describe irrational karma distributions. Modifying my feedback in response to further displays of irrational behavior. Not responding to karma incentives in the way you would like me to. Not taking any of this seriously at all.
tends to elicit an “I’LL SHOW YOU, FUCKER”, response in people or something, effectively identical, from what I have observed of people.
also, people like their requests for feedback humble and/or “positive.”
As for what’s wrong with your first comment: suppressing “inappropriate” preferences isn’t something I like. I didn’t downvote you, but it’s not like you can’t just not read comments. If I’d understood that was what you were doing when I read your comment (as I skipped down the page to the comments I was interested in), I would have downvoted it. I won’t now, as most of the rest of your downvotes are clearly punishing your demanding an explanation (in an “inappropriate” tone) which no one has bothered giving. (Why the fuck is the comment pointing out the nonexistence of laws which take over behaviour downvoted? And the one it’s responding to upvoted?) But I really don’t like the idea of trying to suppress comments that have no obvious negative impact. It looks kind of the same to me as the way no one bothered to give you an explanation and just decided to downvote instead. Your post is just saying “I decided not to do that,” which is simply an expression of your dislike, with no reasoning given, much as your being downvoted rather than responded to is. Also, it’s social policing and signalling taking priority over explaining, to the point where the actual “here is what I don’t like” bit that could allow someone to learn something is entirely left out. It wasn’t as bad as the response you’re receiving, though.
edit: I must say, though, the demands of “proof” are ridiculous.
Some may consider this an inaccurate or incomplete description of your downvoted posts.
You may want to give a good-faith consideration as to why that is if you want to keep pursuing this.
Additionally, complaints about downvotes are usually not well received—not least because if someone downvoted you, they will probably disagree with a post claiming that they were incorrect to do so.
I would like to see less of a focus on karma—and, for that matter, “status”—on this website.
For what it’s worth, I downvoted the grandparent, and upvoted the great-grandparent.
Um, just see this: http://lesswrong.com/lw/c4h/rationality_quotes_may_2012/6ik8 (it applies roughly as well to your comment as it does to ArisKatsaris’ comment).
I’m not interested in status, sorry for the confusion. I actually plan on leaving eventually because I’m concerned about getting drawn into a community which shares so many different memes and concepts. I want to internalize most of those concepts because they do seem objectively useful, but then I want to move on before those concepts become ingrained and I become a drone trapped in the hive. Remaining static is dangerous to free thought, that’s something I learned from the Deleuzians, they’re really cool.
The Internet is meant for nomadism, not anything else.
In that case—I upvoted your initial comment. I did not downvote the “complaint list” comment until it had grown quite large.
I think your characterization of your “complaint” comment is inaccurate, and I was trying to induce you to revisit it, because otherwise you’re arguing from false premises. I don’t think the initial comment should have been downvoted! However, your response was not useful to you for your expressed purpose of eliciting clarification on feedback, or to me or (apparently) the LW-emergent-consciousness for contributing to valuable discussion. It was downvoted for these reasons—and they’re good ones!
Of particular note is your unwarranted confidence that people who disagree with you are “irrational” and “dumb”. You did not have access to sufficient information to conclude this! In fact, if they were downvoting you because they expected your comment to lead to unproductive discussion, they were right.
More to the point, have you considered that you may have erred in this thread?
I’m trying to understand your reasoning here, and failing. Do you downvote controversial things often? Are you upset with the quality of my comments, or with the quality of all of the comments that followed? Why does it make sense to downvote simply because one of my comments drew lots of attention and anger?
To me, this does not make sense. Please explicitly state your rationale.
In what possible sense was the characterization inaccurate? I characterized it as a criticism of the negative karma I received, and to me it seems quite clearly to be exactly that. Other commenters have also been responding to it as though it were a criticism of the negative karma I received; that’s why some of them mentioned that they downvote comments about karma, and why I tried to engage them in a discussion on the merits of criticizing flawed feedback.
If I ask for clarification repeatedly and do not receive clarification, that is not my fault. Additionally, I may yet end up receiving actual clarification. Moreover, the lack of clarification also suits my purposes, because it goes a long way towards supporting the possibility that of the dozen or so commenters voting in this thread, none of them have any real justification for their votes.
But, do you have a suggestion as to what might be better at eliciting clarification?
Or are you just trying to seem Reasonable?
This is nonfalsifiable, and you’re getting something out of this conversation or else you’d leave.
I don’t believe this is a relevant metric, my entire point is that their evaluation process is flawed.
EDIT: Moreover, new proof. I’ve already clearly demonstrated that I don’t respond to karma incentives by shutting up. The fact that people keep giving me bad karma despite my obvious immunity to its effects clearly demonstrates that the commenters and reputation people are primarily concerned not with stopping me from making new comments because of the supposed logical invalidity of my comments, but with the social cohesion they feel when they use karma to reject my comments as wrong, regardless of the truth value of my comments.
I will change this belief if I see people change their behavior or give a good justification for giving my list comment −10 votes, or if people answer the arguments which I have made in other places about the value of feedback about feedback. I have good reason to believe that LessWrong commenters are irrational, because they are a subset of humans and knowing about biases does not make them go away. The fact that no justifications have been produced is also hugely relevant.
The fact that individuals can choose to sabotage the usefulness of a question does not make that question invalid.
Have you considered that no matter what answer I give to this question people will perceive the answer as though it is a “no”? Does this question have any purpose other than making the end of your post sound better? Are you actually thinking that my answer to this question will matter in some way?
The answer is yes.
And, I’ve concluded that I was overconfident in my expectation that irrational individuals would concede their own irrationality within a community that values rationality. I should not have expected otherwise: there are stronger incentives within this community to avoid admitting defeat than there are in other communities, because this community treats accuracy and objectivity as a sacred value.
However, I’ve also concluded that posting that question was still a good decision on an overall level, because I still believe that individuals are perceiving the power of my arguments. Part of the reason I perceive this is the scarcity of downvotes on the comments where I challenge commenters to provide me with evidence and those commenters fail. Another reason is that I have yet to see any objection to my list of comments which attacks it on its merits. A third reason is that I believe all of those arguments are objectively good. The final reason is that I have not seen any objections to my comments from any of the main posters on this site who strike me as extremely intelligent.
EDIT: I also just realized that I need to identify a new threshold at which I’m satisfied with stopping. My previous threshold was going to be the moment where someone stated that they believed my critique to be largely accurate, but given my above realization about how disincentives against conceding irrationality within a rationalist community are actually stronger, I no longer think that threshold will suit my purposes.
What I meant was—I did not downvote the comment until it itself had grown quite large. To be blunt, my rationale was that at some point it crossed the line from “poorly-worded request for clarification” to “nutty rant”.
In what possible sense?! You called it “criticism of unwarranted negative feedback”. It could easily be argued that it didn’t read as “criticism” so much as “complaint”, it certainly wasn’t just “criticism”, and the term “unwarranted” basically assumes the conclusion, making yours a loaded question (“why did you give me an undeserved downvote?”).
If you have a goal, and your actions do not accomplish that goal, then saying that this is not your fault will also not accomplish that goal.
“Could someone who downvoted clarify why they thought my comment was not valuable?”
Quit it! Even “rationalists” will be better disposed towards you if you make a basic attempt to interpret them charitably.
I suspect that maybe you could be an interesting contributor here once this thread concludes. You haven’t claimed to have discovered the secret mathless Grand Unified Theory, for one thing.
Distinguish the former and the latter complaint! Are you saying that “contributes to valuable discussion” is a bad metric for LWers to use, or that LW is bad at judging what accomplishes that?
As to why your list comment is at −10, you’ve received a lot of justifications. Some in this very post. If you want justifications for the other comment’s downvotes, you may have to choose a different tack.
My primary purpose was not rhetorical grandstanding or anything to do with your expected answer in this thread. I was hoping you would think hard about the decisions you’ve made in this thread and realize that some were in error, then decide to change them.
No! That’s not the kind of error I’m talking about. “I overestimated your intelligence” does not count. Do you really think that every single downvote and every single comment explaining your missteps was undeserved? Because if so, you should realize how unlikely that is, and reexamine the thread with that fact in mind.
I’m predicting 20<=x<=40 (It’s currently at 23).
-- Terry Pratchett, “Guards! Guards!”
I really like the character of Lord Vetinari. He’s like a more successful version of Quirrell from HPMOR who decided that it’s okay to have cynical beliefs but idealistic aims.
I really like this passage, and Vetinari in general, but I downvoted your quote simply because it’s too long. It would be better if you could somehow condense it into a single paragraph.
Vimes has the right of it here, I think. They are just people, they are just doing what people do. And even if what people do isn’t always as good as it could be, it is far from being as bad as it could be. Mankind is inherently good at a level greater than can be explained by chance alone, p<.05.
Simply writing “p<.05” after a statement doesn’t count as evidence for it.
Edit: “Goodness” can be explained by evolutionary game theory: Generous Tit-for-Tat behavior is an excellent survival strategy and often leads to productive (or at least not mutually destructive) cooperation with other individuals practicing Generous Tit-for-Tat. Calling this “goodness” or “evilness” (altruism vs. selfishness) is a meaningless value judgment when both describe the same behavior. Really it’s neither: people aren’t good for the sake of being good, or bad for the sake of being bad; they behave a certain way because it’s a good strategy for survival.
“p<.05” is a shorthand way of saying “the evidence we have is substantially unlikely to be the random result of unbiased processes”. It wasn’t intended to be taken literally, unless you think I’ve done randomized controlled trials on the goodness of mankind.
Yes, surely the inherent goodness comes from evolutionary game theory, it’s hard to see where else it would have come from. But the fact that evolutionary game theory suggests that people should have evolved to be good should be a point in favor of the proposition that mankind is inherently good, not a point against it.
EDIT: Now that I think about it, doing an RCT on the goodness of mankind might help illuminate some points. You could put a researcher in a room and have him “accidentally” drop some papers, and see if it’s people or placebo mannequins who are more likely to help him pick them up.
Chance as opposed to...?
The larger the island of knowledge, the longer the shoreline of wonder.
Wikiquote: Huston Smith. Wikipedia: Ralph Washington Sockman.
Only while the island is smaller than half the world :-)
Anyway, I can always measure your shore and get any result I want.
No, you can only get an answer up to the limit imposed by the fact that the coastline is actually composed of atoms. The fact that a coastline looks like a fractal is misleading. It makes us forget that just like everything else it’s fundamentally discrete.
This has always bugged me as a case of especially sloppy extrapolation.
The island of knowledge is composed of atoms? The shoreline of wonder is not a fractal?
Perhaps it’s composed of atomic memes ?
I think this conversation just jumped one of the sharks that swim in the waters around the island of knowledge.
Of course you can’t really measure on an atomic scale anyway because you can’t decide which atoms are part of the coast and which are floating in the sea. The fuzziness of the “coastline” definition makes measurement meaningless on scales even larger than single atoms and molecules, probably. So you’re right, and we can’t measure it arbitrarily large. It’s just wordplay at that point.
And assuming an arbitrarily large world, as the area of the island increases, the ratio of shoreline to area decreases, no? Not sure what that means in terms of the metaphor, though...
Eventually the island’s population can’t fit all at once on the shore, and so not everyone can gather new wonder.
And when you discover modal realism, you realize that everything is known in some universe and there is no sea after all.
Then you realize that in almost all universes there is no life, and consequently, no land...
Now I’m confused, so I guess I’m out.
Modal realism says “all possible worlds are as real as the actual world” (Wikipedia). In different possible worlds there are different laws of physics, almost all of which don’t allow for life. In some proportion of those where they do allow for life, there’s no life anyway (it seems to be rare in our universe). In some proportion of universes with life, there is no sentient life...
Without sentient life, there’s no knowledge, so no shore. No shore means no land.
Well, shoot.
Cf. Larry Niven’s early short story “Bordered in Black”.
A short shoreline of wonder is a good sign that the island of knowledge is small.
UNLESS IT’S A CONTINENT!!!!!! BOOM.
I don’t understand. Continents are just big islands, they have shorelines too.
If a continent takes up more than half the world, then the shorter the shoreline, the bigger the continent.
But the cutoff is obviously not “continent”/”not continent”, but rather “takes up more than half the world” versus “doesn’t take up more than half the world”—possibly with an additional constraint of a sufficiently simple shoreline...
“Continent” vs. “island” is an arbitrary line, a matter of definition. Whereas smaller/bigger than half the world is precise and objective.
Geometry. Big areas with less big corresponding perimeters.
This answer is about as informative as answering “Why do aeroplanes fly?” with “Calculus. Differential equations with forces.”
If you are talking about continents larger than half the world, then DanArmak has already pointed it out, and much more politely. However, as dlthomas points out, the distinction is not based on it being a continent or not, but on it covering more than half the world.
Also, everything we call a continent on Earth takes up less than half of it, and for such things there is a minimum perimeter that increases as the area increases. (The minimum perimeter is something a little bit like 2*sqrt(pi*Area), except different because the Earth is a sphere rather than a plane.)
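(A sketch of the exact version, assuming a perfectly spherical Earth of radius R: a spherical cap of area A has boundary length

    L = sqrt(4*pi*A - A^2/R^2)

As R goes to infinity this reduces to the quoted 2*sqrt(pi*Area). L peaks at A = 2*pi*R^2, exactly half the sphere, where the boundary is the equator, and then shrinks back toward zero, matching the half-the-world cutoff discussed above.)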
Were you trying to point out that the shoreline’s length varies as the square root of the size of the island?
Doesn’t that depend a lot on how convoluted the shoreline is?
Yes, but only if the shape varies too.
I’m not sure immediately what it means for the shape not to vary if you are growing a complexly shaped island on a sphere.
You’re right. I was imagining it on a plane.
Edit: Only later did I look at this comment out of context and start daydreaming about making some sort of Snakes on a Plane joke.
Antoine de Saint Exupery
Both operations seem vitally necessary, but he’s probably right that you should start with the latter.
Has this actually been working?
I would expect the Fun Theory Sequence to be outcompeted by advertisements for toothpaste, Axe body spray, and sports cars, at least among the general public.
OK, but we’re not the general public.
http://lesswrong.com/lw/xp/seduced_by_imagination/
http://lesswrong.com/lw/ye/and_say_no_more_of_it/
“If God gives you lemons, you find a new God.”
-- Powerthirst 2: Re-Domination
I maintain you should use the lemons as an offering to appease your angry new god.
If you liked Powerthirst, there’s a similar thing called “SHOWER PRODUCTS FOR MEN” on youtube.
http://www.youtube.com/watch?v=jUjh4DE8FZA
-- Jorge Luis Borges, “Dr. Américo Castro is Alarmed”
(Pliny, not Plinty.)
The article is not about antisemitism, by the way. It’s about one Dr. Castro’s alarm over a “linguistic disorder in Buenos Aires” — i.e. a putative decline in the quality of Argentinian Spanish usage.
Thank you, corrected! Yes, it is a wonderful demolition of Castro’s pretentious pronouncements on the Argentine dialect, which contains some of the finest examples of Borges’ erudite snark. (”...the doctor appeals to a method that we must either label sophistical, to avoid doubting his intelligence, or naive, to avoid doubting his integrity...”)
Norbert Wiener
I’m going to be unfair here—there is a limit to how much specificity one can expect in a brief quote but: In what sense is the difficulty “mathematical in essence”, and just how ignorant of how much mathematics are the physiologists in question? Consider a problem where the exact solution of the model equations turns out to be an elliptic integral—but where the practically relevant range is adequately represented by a piecewise linear approximation, or by a handful of terms in a power series. Would ignorance of the elliptic integral be a fatal flaw here?
Speaking as someone who is neither the OP nor Norbert Wiener, I think even the task of posing an adequate mathematical model should not be taken for granted. Thousands of physiologists looked at Drosophila segments and tiger stripes before Turing, thousands of ecologists looked at niche differentiation before Tilman, thousands of geneticists looked at the geological spread of genes before Fisher and Kolmogorov, etc. In all these cases, the solution doesn’t require math beyond an undergraduate level.
Also, concern over an exact solution is somewhat misplaced given that the greater parts of the error are going to come from the mismatch between model and reality and from imperfect parameter estimates.
If you have a result with a p value of p<0.05, the universe could be kidding you up to 5% of the time. You can reduce the probability that the universe is kidding you with bigger samples, but you never get it to 0%.
How would you rephrase that using Bayesian language, I wonder?
It already is in Bayesian language, really, but to make it more explicit you could rephrase it as “Unless P(B|A) is 1, there’s always some possibility that hypothesis A is true but you don’t get to see observation B.”
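(Not from the thread, but a minimal simulation sketch of that claim, assuming Python with numpy and scipy available, and an arbitrarily chosen 55% coin as the “real effect”: with the 5% cutoff held fixed, a fair coin still “kids you” close to 5% of the time no matter how large the sample gets, while the biased coin’s p-value shrinks as the sample grows but never reaches zero.)

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    for n in (50, 500, 5000):
        # 10,000 experiments on a fair coin: the null hypothesis is true,
        # yet close to 5% of runs still come out "significant" at any n.
        false_positive_rate = np.mean(
            [stats.binomtest(int(k), n, 0.5).pvalue < 0.05
             for k in rng.binomial(n, 0.5, size=10_000)]
        )
        # One experiment on a coin biased to 0.55 (an assumed effect size):
        # the p-value shrinks as n grows, but is never exactly zero.
        p_biased = stats.binomtest(int(rng.binomial(n, 0.55)), n, 0.5).pvalue
        print(f"n={n:5d}  false-positive rate ~ {false_positive_rate:.3f}  "
              f"biased-coin p = {p_biased:.2g}")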
-- Scott Aaronson
OTOH it could be that the “you” in the above knows little to nothing about computer simulation.
For example a moderately competent evolutionary virologist might have theory about how viruses spread genes across species, but have only a passing knowledge of LaTeX and absolutely no idea how to use bio-sim software.
Or worse, CAN explain, but their explanation demonstrates that lack of knowledge.
Such as set theory?
Well, every heuristic has exceptions.
By definition?
--Peter Thiel, on 60 Minutes
Heh, I bet he was being coy. In my experience, people who can’t get any enjoyment out of wearing a clown suit (or, indeed, a scary uniform) - and aren’t somewhat, well, autistic—actively avoid saying contrarian things; it’s more or less social autopilot.
I read this as “people who aren’t ( (clownsuit enjoyers) and (autistic) ) …”, but it looks like others have read it as “people who aren’t (clownsuit enjoyers) and aren’t (autistic)” = “people who aren’t ( (clownsuit enjoyers) or (autistic) )”, which might be the stricter literal reading. Would you care to clarify which you meant?
Peter Thiel is, well, autistic. Or at least has some tendencies in that direction (he’s unlikely to be clinically autistic, with the possible exception of mild Asperger’s).
― Brandon Mull, Fablehaven
The real sharp ones also learn from the mistakes of others.
Are you correcting the accuracy of the quotation, or commenting?
Commenting.
Others rarely collect enough data when making mistakes. Sometimes you need to go make the mistake yourself.
I’m not going to search for it, but I recall having heard that saying well before 2006.
-Kurt Vonnegut
Then you can commit suicide without worries.
Or try to vary life along dimensions other than (un)“examined”; most people do feel they live lives worth living, after all.
(In general, I’m not sure we should be advocating suicide in all but the most extreme cases.)
I’m fairly sure gwern was being glib.
And Socrates didn’t use this argument about the hemlock, so it looks like he found the examined life worthwhile.
Thoughts like that should not be encouraged. One should instead worry about preserving the life we’ve been given and committing actions which will lead to the preservation of others as well. What you just said sounds like a very schizophrenic thing to say (To me. I do however realize when I am stating an opinion regardless of how much evidence supports my hypothesis.).
I’ve argued on a number of occasions on this site that people who’re suicidal are usually not in a position to accurately judge the expected value of staying alive, but honestly, if a person’s life really isn’t worth living, why should they have to?
This is a question that is very close to me, and one I’ve been chewing over for the better part of a decade. I have had a close personal friend for many years with a history of mental illness; having watched their decline over time, I found myself asking this question. From a purely rational standpoint, there are many different functions that you can use to calculate the value of suicide versus life. As long as you don’t treat life as a sacred/infinite value (“stay alive at all costs”), you can get answers for this.
My problem is that a few years ago, I started noticing measures that were pro-suicide. As quality of life and situation declined, more and more measures flipped that direction. What do you do as an outside observer when most common value judgements all seem to point toward suicide as the best option?
It’s not like I prefer this answer. What I want is for the person in question to get their life together and use their (impressive) potential to live a happy, full life; what I want is another awesome person making the world we live in even more awesome. Instead, there is a person who, as near as I can estimate, actually contributes negative value when measured by most commonly accepted goals.
How much is the tradeoff worth? If I sacrifice the remainder of my rather productive life in an attempt to ‘save’ this person, have I done the right thing? I cannot in good conscience say yes.
These are obnoxious problems.
“If a person’s life really isn’t worth living [objectively]” then the person should stop caring about flawed concepts like objective value. “If a person’s life really isn’t worth living [subjectively]” then they should work on changing their subjective values or changing the way that their life is so it is subjectively worth living. If neither of the above is possible, then they should kill themselves.
It’s important that we recognize where the worth “comes from” as a potential solution to the problem.
This insight brought to you by my understanding of Friedrich Nietzsche. (Read his stuff!)
It’s hard to say what it would even mean for moral value to be truly objective, but say that, if a person is alive, it will cause many people to suffer terribly. Should they stop caring about this in order to keep wanting to live?
If a person is living in inescapably miserable circumstances, changing their value system so they’re not miserable anymore is easier said than done. And if it were easy, do you think it would be better to simply always change our values so that they’re already met, rather than changing the world to satisfy our values?
Better to self-modify to suffer less due to not achieving your goals (yet), while keeping the same goals.
Easier said than done, unfortunately.
This doesn’t make sense.
How do you retain something as a goal while removing the value that you place on it?
I think DanArmak means modifying the negative affect we feel from not achieving the goals while keeping the desire and motivation to achieve them.
EDIT: oops, ninja’d by DanArmak. Never mind.
Don’t remove the value. Remove just the experience of feeling bad due to not yet achieving the value.
If I have a value/goal of being rich, this doesn’t have to mean I will feel miserable until I’m rich.
What you’re implicitly doing here is divorcing goals from values (feelings are a value). Either that or you’re thinking that there’s something especially wrong related to negative incentives that doesn’t apply to positive ones.
If you don’t feel miserable when you’re poor or, similarly, if you won’t feel happier when you’re rich, then why would you value being rich at all? If your emotions don’t change in response to having or not having a certain something then that something doesn’t count as a goal. You would be wanting something without caring about it, which is silly. You’re saying we should remove the reasons we care about X while still pursuing X, which makes no sense.
There’s something terribly wrong about the way negative incentives are implemented in humans. I think the experience of pain (and the fear or anticipation of it) is a terrible thing and I wish I could self-modify so I would feel pain as damage/danger signals, but without the affect of pain. (There are people wired like this, but I can’t find the name for the condition right now.)
Similarly, I would like to get rid of the negative affect of (almost?) everything else in life. Fear, grief, etc. They’re the way evolution implemented negative reinforcement learning in us, but they’re not the only possible way, and they’re no longer needed for survival; if we only had the tools to replace them with something else.
Being rich is (as an example) an instrumental goal, not a terminal one. I want it because I will use the money to buy things and experiences that will make me feel good, much more than having the money (and not using it) would.
“pain asymbolia”
Treating it as an instrumental goal doesn’t solve the problem; it just moves it back a step. Even if you wouldn’t feel miserable about being poor, because you magically eliminated negative incentives, you would still feel less of the positive incentives when you are poor than when you were rich, even though richness is just the means to feeling better. All of this:
still applies.
(Except insofar as it might be altered by relevant differences between positive and negative incentives.)
To clarify, what I’m contending is that this would only make sense as a motivational system if you placed positive value on achieving certain goals which you hadn’t yet achieved. I think you agree with this part, but I’m not sure. But I don’t think we can justify treating positive incentives differently than negative ones.
I don’t view the distinction between an absence of a positive incentive and the presence of a negative incentive the same way you do. I’m not even sure that I have any positive incentives which aren’t derived from negative incentives.
Negative and positive feelings are differently wired in the brain. Fewer positive feelings is not the same as more negative ones. Getting rid of negative feelings is very worthwhile even without increasing positive ones.
But the same logic justifies both, even if they are drastically different in other sort of ways.
Forcing yourself to feel maximum happiness would make sense if forcing yourself to feel minimum unhappiness made sense. They both interact with utilitarianism and preference systems which are the only relevant parts of the logic. The degree or direction of the experience doesn’t matter here.
Removing negative incentives justifies maxing out positive incentives = nihilism.
I mean, you can arbitrarily only apply it to certain incentives which is desirable because that precludes the nihilism. But that feels too ad hoc and it still would mean that you can’t remove the reasons you care about something while continuing to think of it as a goal, which is part of what I was trying to get at.
So, given that I don’t like nihilism or preference paralysis but I do support changing values sometimes, I guess that my overall advocacy is that values should only be modified to max out happiness / minimize unhappiness if happiness / no unhappiness is unachievable (or perhaps also if modifying those specific values helps you to achieve more value total through other routes). Maybe that’s the path to an agreement between us.
If you have an insatiable positive preference, satiate it by modifying yourself to be content with what you have. If you can never be rid of a certain negative incentive, try to change your preferences so that you like it. Unfortunately, this does entail losing your initial goals. But it’s not a very big loss to lose unachievable goals while still achieving the reasons the goals matter, so fulfilling your values by modifying them definitely makes sense.
Reducing bad experiences was the original subject of discussion. As I said, it’s worthwhile to reduce them even without increasing good experiences. I never said I don’t want to increase good experience—I do! As you say, both are justified.
I didn’t mean to imply that I wanted one but not the other; I just said each one is a good thing even without the other. I’m sorry I created the wrong impression with my comments and didn’t clarify this to begin with.
Of course when self-modifying to increase pleasure I’d want to avoid the usual traps—wireheading, certain distortions of my existing balance of values (things I derive pleasure from), etc. But in general I do want to increase pleasure.
I also think reducing negative affect is a much more urgent goal. If I had a choice between reducing pain and increasing pleasure in my life right now, I’d choose reducing pain; and the two cannot (easily) be traded. That’s why I said before that “there’s something wrong about negative [stuff]”.
I agree with a lot of what you’re saying, I made errors too, and IMHO apologizing doesn’t make much sense, especially in the context of errors, but I’ll apologize for my errors too because I desire to compensate for hypothetical status losses that might occur as a result of your apology, and also because I don’t want to miss out any more than necessary on hypothetical status gains that might occur as a result of (unnecessary) apologies. But the desire to reciprocate is also within this apology, I’m not just calculating utilons here.
Sorry for my previous errors.
You said:
I said:
I don’t know how you avoid this problem except by only supporting modifying incentives in cases of unachievable goals. I’d like to avoid it but I would like to see a mechanism for doing so explicitly stated. If you don’t know how to avoid this problem yet, that’s fine, neither do I.
Apologizing is indeed status signaling; I feel better in conversations where it is not necessary or expected.
When I said I was sorry, I meant it in the sense of “I regret”. I didn’t mean it as an apology and wasn’t asking for you to reciprocate. (Also, the level of my idiomatic English tends to vary a lot through the day.)
Now I regret using the expression “sorry”!
I’m glad we agree about apologies :-)
As for the problem of modifying (positive) preferences: I don’t have a general method, and haven’t tried to work one out. This is because I don’t have a way to self-modify like this, and if I acquire one in the future, it will probably have limitations, strengths and weaknesses, which would guide the search for such a general method.
That said, I think that in many particular cases, if I were presented with the option to make a specific change, and enough precautions were available (precommitment, gradual modifications, regret button), making the change might be safe enough—even without solving the general case.
I think this also applies to reducing negative affect (not that we have the ability to do that, either) - and the need is more urgent there.
It’s not just about status. It also communicates “This was an accident, not on purpose” and/or “If given the opportunity, I won’t do that again” which are useful information.
It’s not clear to me where the line between status signalling and communicating useful information even is.
My dog, when she does something that hurts me, frequently engages in behaviors that seem designed to mollify me. Now, that might be because she’s afraid I’ll punish her for the pain and decides to mollify me to reduce the chances of that. It might be because she’s afraid I’ll punish her for the status challenge and decides to anti-challenge me. It might be both. It might be that she has made no decisions at all, and that mollifying behavior is just an automatic response to having caused pain, or given challenge. It might be that the behavior isn’t actually mollifying behavior at all, whether intentional or automatic, and it’s a complete coincidence that I respond to it that way. Or it might not be a coincidence, but rather the result of my having been conditioned to respond to it that way. It might be something else altogether, or some combination.
All of that said, I have no problem categorizing her behavior as “apologizing.”
I often find myself apologizing for things in ways that feel automatic, and I sometimes apologize for things in ways that feel deliberate. I have quite a bit more insight into what’s going on in my head than my dog’s head when this happens, but much of it is cognitively impermeable, and a lot of the theories above seem to apply pretty well to me too.
Neat, then we agree on all of that. I also would prefer something ad hoc to the “solution” I thought of.
The Dark Arts are as nothing besides the terrible power of signaling!
I’ve read—and I have no idea how much of this is true—that in some Eastern cultures you can get bonus points in a conversation by apologizing for things that weren’t in fact offensive before you started apologizing; or taking the blame for minor things that everyone knows you’re not responsible for; or saying things that amount to “I’m a low status person, and I apologize for it”, when the low-status claim is factually untrue and, again, everyone knows it...
I live in the Northeast US, which isn’t especially “Eastern”, but I’ve nevertheless found that taking the blame for things that everyone knows I’m not responsible for to be a very useful rhetorical trick, at least in business settings.
(Warning: this came out somewhat as a rant. I don’t have the energy to rewrite it better right now.)
Honestly: stories like this terrify me. This is not exaggeration: I feel literal terror when I imagine what you describe.
I like to think that I value honesty in conversations and friendships—not Radical Honesty, the ordinary kind. I take pride in the fact that almost all of my conversations with friends have actual subjects, which are interesting for everyone involved; that we exchange information, or at least opinions and ideas. That at least much of the time, we don’t trade empty, deceptive words whose real purpose is signaling status and social alliance.
And then every once in a while, although I try to avoid it, I come up against an example—in real life too—of this sort of interaction. Where the real intent could just as well be transmitted with body language and a few grunts. Where consciousness, intelligence, everything we evolved over the last few million years and everything we learned over the last few thousand, would be discarded in a heartbeat by evolution, if only we didn’t have to compete against each other in backstabbing...
If I let myself become too idealistic, or too attached to Truth, or too ignorant and unskilled at lying, this will have social costs; my goals may diverge too far from many other humans’. I know this, I accept this. But will it mean that the vast majority of humanity, who don’t care about that Truth nonsense, will become literally unintelligible to me? An alien species I can’t understand on a native level?
Will I listen to “ordinary” people talking among themselves one day, and doing ordinary things like taking the blame for things they’re not responsible for so they can gain status by apologizing, and I will simply be unable to understand what they’re saying, or even notice the true level of meaning? Is it even plausible to implement “instinctive” status-oriented behavior on a conscious, deliberate level? (Robin Hanson would say no; deceiving yourself on the conscious level is the first step in lying unconsciously.)
Maybe it’s already happened to an extent. (I’ve also seen descriptions that make mild forms of autism and related conditions sound like what I’m describing.) But should I immerse myself more in interaction with “ordinary” people, even if it’s unpleasant to me, for fear of losing my fluency in Basic Human? (For that matter, can I do it? Others would be good at sensing that I’m not really enjoying a Basic Human conversation, or not being honest in it.)
Linux Kernel Management Style says to be greedy when it comes to blame.
Here are some relevant paras:
ETA: I take back my initial reaction. It’s not completely different from what TheOtherDave described. But there are some important differences from at least what I described and had in mind:
If someone else has already accepted the blame, it doesn’t advise you to try to take the blame away from him and put it on yourself, especially if he’s really the one at fault!
It doesn’t paint being blamed as a net positive in some situations, so there’s no incentive to invent things to be blamed for, or to blow them up out of all proportion.
Telling off the one really at fault, in private, is an important addition—especially if everyone else is tacitly aware you’ll do this, even if they don’t always know who was at fault. That’s taking responsibility more than taking blame.
In addition, there’s a difference between a random person taking blame for the actions of another random person; and a leader taking blame for the mistakes of one of his subordinates. As far as I can tell, the situation described in the article you linked to is a bit closer to the second scenario.
See my comment above; I manage to subvert Basic Human conversation fairly well in real life.
I empathize with all of your complaints. I deal with this by doing things like explicitly pointing out when I’m manipulating other people (like when I said I empathize with all of your complaints) while still qualifying that within the bounds of the truth (like I will do right now, because despite the manipulativeness of the disclosure involved, my empathy was still real [although you have no real reason to believe so, and I acknowledge that {although that acknowledgement was yet another example of manipulation ([{etc.}])}]).
For another less self referential example, see the paragraph I wrote way above this where I explicitly pointed out some problems of the norms involved with apologies, but then proceeded to apologize anyway. I think that one worked very well. My apology for apologizing is yet another example, that one also worked fairly well.
(I hope the fact that I’m explicitly telling you all of this verifies my good intentions, that is what the technique depends upon, also I don’t want you to hate me based on what is a legitimate desire to help [please cross apply the above self referential infinitely recursive disclaimer].)
In real life, though, I’m much less explicit about manipulation; I just give it a subtle head nod, but people usually seem to understand because of things like body language, etc. It probably loses some of its effectiveness without the ability to be subtle (or when you explain the concept itself while simultaneously using the concept, like I attempted to do in this very comment). Explaining the exact parts of the technique is hard without being able to give an example, which is hard because I can’t give the example through text, because of the nature of real-life face-to-face communication.
Blargh.
I have adopted the meta-meta strategy of being slightly blunt in real life, but in such a way that reveals that I am 1. being blunt for the purpose of allowing others to do this to me, 2. trying to reveal disdain for these types of practices, and 3. knowingly taking advantage of these types of practices despite my disdain for them. People love it in real life, when it’s well executed. I’m tearing down the master’s house with the master’s tools in such a way that makes them see me as the master. It’s insidiously evil, and I only do it because otherwise everyone would hate me, because I’m so naturally outspoken.
That sounds really braggy, please ignore the bragginess, sorry.
I APOLOGIZE FOR MY APOLOGY. :(
If you cannot change the world to satisfy your values then your values should change, is what I advocate. To answer your tradeoff example: Choose whichever one you value more, then make the other unachievable negative value go away.
And I don’t know how to solve the problem I mention in my other comment below.
There’s an interesting issue here.
The agent might have a constitution such that they don’t place subjective value on changing their subjective values to something that would be more fulfillable. The current-agent would prefer that they not change their values. The hypothetical-agent would prefer that they have already changed their values. I was just reading the posts on Timeless Decision Theory and it seems like this is a problem that TDT would have a tough time grappling with.
I’m also feeling that it’s plausible that someone is systematically neg karmaing me again.
They don’t “have” to keep going, but striving for better is a more optimistic encouragement, is it not? I would rather teach someone that they have worth than tell them that suicide (which will undoubtedly have negative effects on their family, if they have a family that loves them) is what I want for them too.
Seriously lame comment, man. Suicide is one of the classic (literally) motifs of philosophy, starting with Socrates (or earlier, with Empedocles!) and continuing right up to the modern day with Camus and later thinkers.
Agreed in substance, but I disapprove of your phrasing “seriously lame comment” and downvoted for that.
That offers no evidence that this is an efficient mentality one should willingly accept. Obviously nature is trying everything it can to preserve the life it has created. You wish to go against the will of life’s plan for preservation.
I’m pretty sure Plato was quoting Socrates.
Or at least claimed to be...
-Jonathan Baron
Oh, and Paul Graham again from the same piece:
“In war you will generally find that the enemy has at any time three courses of action open to him. Of those three, he will invariably choose the fourth.” —Helmuth Von Moltke
(quoted in “Capturing the Potential of Outlier Ideas in the Intelligence Community”, via Bruce Schneier)
There is a corollary of the Law of Fives in Discordianism, as follows: Whenever you think that there are only two possibilities (X, or else Y), there are in fact at least five: X; Y; X and Y; neither X nor Y; and J, something you hadn’t thought of before.
Is this a quotation or paraphrase of some famous quote? Googling “discordianism” “law of fives” “two possibilities” only comes up with a handful of hits, all unrelated except for this lesswrong.com page itself.
Probably this:
found here.
The Principia Discordia was a basis for a lot of the ideas in Illuminatus! by Wilson and Shea. The Illuminati card game doesn’t begin to do the Illuminatus! justice.
Evelyn Baring, Earl of Cromer, Modern Egypt
David Wallace
This criticism of instrumentalism only works in so far as instrumentalism is descriptive, rather than prescriptive.
Paul Graham “What You’ll Wish You’d Known” http://paulgraham.com/hs.html
Just because you are choosing between two theories doesn’t mean one of them is right.
Atheism is an excellent excuse for skipping church.
Believing there’s no gold under your yard is an excellent excuse for not digging it up.
And adopting a never ending ideological battle with the majority of your community more than makes up for the effort saved.
If you just wanted to be lazy, there’s always agnosticism.
Doesn’t always work for me...
Reversed stupidity is not intelligence!
Almost the same as the one Eliezer used here
The quote in that link makes a good point: If one gives you an excuse to be lazy, then you might be privileging the hypothesis; it could be that it was only raised to the level of attention so that you can avoid work. Thus, the lazy choice really does get a big hit to its prior probability for being lazy.
But it’s still false that the other one is probably right. In general, if a human is choosing between two theories, they’re both probably insanely wrong. For rationalists, you can charitably drop “insanely” from that description.
Your first paragraph is a good analysis (enough to merit an upvote of the comment as a whole). Your second seems redundant; I don’t think anyone would interpret the quoted phrase of non-technical English to mean that you should actually raise your estimate of the theory that doesn’t permit laziness relative to other theories not under consideration, and if you have two theories both of which are equally wrong, it doesn’t matter much what you do to differentiate them.
Paul Graham, “Is It Worth Being Wise?” http://paulgraham.com/wisdom.html
Noticing this moment is important!
Of course, we shouldn’t stop when we notice this. We should keep getting more specific, and we should begin testing whether we are mistaken.
More accurately, we should test more specific things, then become more specific. First make the test, then update the beliefs.
I think we’re splitting unnecessary hairs here; obviously we shouldn’t update our belief to something more specific than we can justify. At the same time, we want to formulate hypotheses in advance of the tests, and test whether these hypotheses are mistaken or worthy of promotion to belief, which to me seems a perfectly reasonable interpretation of what shokwave wrote.
-- Robert Anton Wilson
Contrarians of LW, if you want to be successful, please don’t follow this strategy. Chances are that many people have raised the same possibility before, and anyway raising possibilities isn’t Bayesian evidence, so you’ll just get ignored. Instead, try to prove that the stuff is bullshit. This way, if you’re right, others will learn something, and if you’re wrong, you will have learned something.
For what it’s worth, some context:
— http://media.hyperreal.org/zines/est/intervs/raw.html
Wilson had a tendency to come across as a skeptic among mystics and a mystic among skeptics.
Most scientists, skeptics, theists, and new agers of various stripes share a common (and not necessarily wrong) belief in the truth. They differ primarily in how they believe one gets to the truth, and under what conditions, if ever, one should change one’s mind about the truth.
Robert Anton Wilson was unusual in that he really tried to believe multiple and contradictory claimed truths, rather than just one. For instance, on Monday, Wednesday and Friday he might believe astrology worked. Then on Tuesday, Thursday, and Saturday he’d believe astrology was bullshit. On Sunday he’d try to believe both at the same time. This wasn’t indecision but rather a deliberate effort to change his mind, and see what happened. That is, he was brain hacking by adjusting his belief system. He was not walled in by a need to maintain a consistent belief system. He deliberately believed contradictory things.
Call a believer someone who believes proposition A. Call a nonbeliever someone who believes proposition NOT A. Call an a-gnostic someone who doesn’t assign a much higher probability to one of A and NOT A. Wilson would be a multi-gnostic: that is, someone who believes A and believes NOT A, someone who is both a believer and a non-believer. This is how he came across as a skeptic among mystics and a mystic among skeptics. He was both, and several other things besides.
I doubt I can do much to prove a lot of the ‘core’ concepts of rationality, but I can do a lot to point people towards it and shake up their belief that there isn’t such a proof.
(1) Insisting that those who disagree with you prove their opinions sets too high a bar for them. Being light means surrendering to the truth ASAP.
(2) Raising possibilities is Bayesian evidence, assuming the possibility-raiser is a human, not a random-hypothesis generator.
Yeah, and if the possibility-raiser is a human who would have provided evidence if they had any, then raising possibilities without evidence is Bayesian evidence in the other direction :-)
I think “try to prove” was an importantly different word choice from “prove” in cousin_it’s comment. The point is that in the context of a “new age” movement, it may be enough to raise the possibility; people really may not be thinking about it. In the context of Less Wrong, that is not usually enough; people are often already thinking about evidence for and against.
-Carl Winfeld
-G.K. Chesterton
Related: this slide
This struck me as an odd position for a Christian apologist. I know that if I didn’t see us all as idiots, I might think we all deserved to die—oh, wait.
I’m not sure Chesterton deserves the epithet of apologist. Christian, yes… evangelist, of a sort. I see him as a cut above the apologist class of Christian commentators.
I don’t know that “apologist” counts as a natural class, but he definitely produced Christian apologetics. He may have preferred to call them ‘refutations’ of non-Christian or atheist doctrines.
Of course, if you can compute the way an Argus would see an obscured object, or a Briareus would approach a dexterity-testing-task, that might be useful in evaluating our approaches to similar problems.
~Paul Graham
~ Zach Weiner, SMBC #2559
(1) Do people act more rationally when their interests are more directly concerned? (2) Are scientists’ interests more directly concerned with winning grants than with making correct scientific inferences?
If the answer to both is “yes,” then I think we should raise our confidence in jackal rituals relative to the current methodologies of statistical inference.
Fortunately for jackals, there’s an unjustified independence assumption here. Other stuff I’ve read strongly suggests that the outcomes of published research are strongly influenced by the expectations of the researchers about future grant money.
Hell, no. Religion (especially the more commandment-heavy ones like Islam and Orthodox Judaism) being the best example, with interpersonal relationships running a close second.
The idea of that strip, as I understand it, is that scientists pretty much only act rationally inside the lab.
I think you’re reading into a joke much too strongly.
Confucius
“Well it’s alright for you, Confucius, living in 5th Century feudal China. Between all the documentation I have to go through at work, and all the blogs I’m following while pretending to work, and all the textbooks I have to get through before my next assignment deadline, I don’t have time to read!”
-Henry G. Felsen
Why that citation?
Edit: Question answered below.
What’s wrong with my citation?
I did some checks and that appears to be said by Darrell Huff. Links below:
http://www.anvari.org/fortune/Miscellaneous_Collections/211181_proper-treatment-will-cure-a-cold-in-seven-days-but-left-to-itself-a-cold-will-hang-on-for-a-week.html
http://www-stat.wharton.upenn.edu/~steele/HoldingPen/Huff/Huff.htm
http://motd.ambians.com/quotes.php/name/linux_medicing/toc_id/1-1-20/s/47
http://www.pithypedia.com/?similarquotes=Proper+treatment+will+cure+a+cold+in+seven+days%2C+but+left+to+itself%2C%3Cbr%2F%3Ea+cold+will+hang+on+for+a+week.
Are you sure it was Henry G. Felsen?
According to Darrell Huff, it was first said by Henry G. Felsen:
Huff, Darrell. How to lie with statistics. New York: Norton, 1993.
That answers my question, thanks! In my experience, any citation that does not refer to some printed reference should not be believed—a line saying “as quoted in How to lie with statistics by Darrell Huff” was what I was looking for.
Evelyn Baring, Earl of Cromer, Modern Egypt
-Tim Ferriss, The 4-Hour Workweek
Has anyone tried to put Ferriss’s 4-Hour Workweek plan into practice? If so, did it make you better off than you were a month ago?
EDIT: Ferriss recommends (among other things) that readers invent and market a simple product that can be sold online and manufactured in China, yielding a steady income stream that requires little or no ongoing attention. There are dozens of anecdotes on his website and in his book that basically say “I heard that idea, I tried it, it worked, and now I’m richer and happier.” These anecdotes (if true) indicate that the plan is workable for at least some people. What I don’t see in these anecdotes is people who say “I really didn’t think of myself as an entrepreneur, but I forced myself to slog through the exercises anyway, and then it worked for me!”
So, I’m trying to elicit that latter, more dramatic kind of anecdote from LWers. It would help me decide if most of the value in Ferriss’s advice lies in simply reminding born entrepreneurs that they’re allowed to execute a simple plan, or if Ferriss’s advice can also enable intelligent introverts with no particular grasp of the business world to cast off the shackles of office employment.
I have, and yes it made me much better off (although I wouldn’t really describe it as a “plan”, since its more “meta” than I think of “plans” as being.)
Some more anecdotal evidence.
Cool! So, what was your pre-4HWW lifestyle like, and how did it change?
There are other resources that recommend this practice. Steve Pavlina is currently running a series on passive income on his blog that looks interesting as well.
I don’t know if the recommendations made in 4-Hour workweek or that blog are sustainable in the real world without a large amount of “luck”.
-- Paul Graham
(Arguably a decent philosophy of life, if a bit harshly expressed for my taste.)
Hey, I can hack and whine at the same time!
Attempting this just reallocates all whining to being about inability to start hacking.
Kane: Quit griping!
Lambert: I like griping.
(from Alien)
Might be a better phrasing? It also accounts for doing good things even if you can’t solve the current problem.
So long as it doesn’t lead to “We have to do something; X is something; ergo, we must X!”
True, but very few things are less effective than whining.
Actually, while whining rarely accomplishes anything, a lot of things anti-accomplish something, i.e., they make the problem worse.
True. Perhaps:
“If you can find it” invites beliefs. Do something effective, or pick a different topic.
Whining aims to let people know that there is a problem that needs to be solved. That sounds like a relatively effective way to let the world know that we still have much to do.
I get what you are saying, but, be that as it may, knowing that there is still much to do and actually doing something about it are two completely different things. And whining may also prove counterproductive, since it is often perceived as annoying, and thus people are less likely to help you or take you seriously.
Actually, most people who find whining annoying make every attempt to eliminate it. It can be counter-productive if the story it has to tell is perceived in an incorrect manner. You will do nothing if you do not know that there are things to do.
The “stop whining” part is the harsh part; the “start hacking” part is beautiful.
--Hazel, Tales of MU
Quintilian
-Hilary Putnam
-Bas van Fraassen
One quote per post, please.
Edit: Belated thanks!
I’m assuming Jayson_Virissimo felt they were strongly related.
Indeed he did—but the two quotes were in the same comment until after I posted my request. That’s why there’s an asterisk after the timestamp on the original quote post—the comment was edited to remove the Bas van Fraassen quote.
Ah, my apologies. That’s what I get for reading the thread 11 days late. :-)
And that’s what I get for not editing my comment to say, “Edit: Thanks!”
Speaking of which....
The second quote is responding to the claim in the first quote. It is a pseudo-conversation.
Putnam of all people really should have known better than to use the word ‘miracle’.
How do you figure??
There is no dominant conceptual analysis of ‘miracle’ such that Putnam’s sentence has a clear and distinct meaning. (I may be incorrect about this; I do not follow Philosophy of Religion.) Of course, since Putnam was writing to an extremely secular audience (by American standards), ‘miracle’ is a useful slur that essentially translates to ‘WTF is this I don’t even’.
— Kilmore Free Press; Kilmore, Victoria, Australia; 14 December 1916.
A version of this story is found in Aleister Crowley’s Magick in Theory and Practice, and a paraphrase is quoted in Robert Anton Wilson’s Masks of the Illuminati, attributed to a fictionalized Crowley; that version may be found here.
Love the story, but the punchline shouldn’t be spoiled in the title!
It’s that way in the Australian original, although not in Crowley’s or Wilson’s version.
Agree. Even though anyone commenting or voting in this thread has already read the story, there are still others who haven’t. Please edit the offending word out of the title, and I’ll upvote the original post.
Please do not upvote this comment if you’ve already upvoted the original post.
Is this a true story, and if so, did it work?
-- illdoc1 on YouTube
I’m not sure I get this. Could you explain, please?
Straightforward consequentialism.
If you hurt someone in an easily avoidable way, they’ll respond to the hurt and not to what’s in your heart.
I could go on a bit longer, but I’m drunk and this seems like plenty.
Can we really hold someone responsible if he had no choice and his brain forced him to steal?
Edit: Missed the “you” in Ezekiel’s question; sorry.
I think it’s just about pragmatism vs. philosophical reasoning or Deep Wisdom.
Great video, too.
Nassim Taleb
-- C.S. Lewis, The Screwtape Letters (from memory—I may have the exact phrasing wrong).
You can replace “goodness” in this sentence with almost anything that tends to get flippantly rejected without thought.
Good memory. The original reads:
Not sure if finding something funny in the context of a joke necessarily leads to one not taking it seriously in other contexts. [E.g. when xkcd and smbc make science jokes I don’t think my belief in the science they are referencing diminishes.]
When xkcd and smbc make science jokes, they’re real jokes written by clever humans.
Flippancy is more like Dell’s recent “shut up, bitch” scandal and the “it’s a joke, laugh” reactions to it. Mads Christensen presented no substantive evidence that women are unable to contribute to IT; he just tried to train the crowd to regard the very idea of a capable woman as if it were funny.
The bit about “trained to act as if” is very astute. The same training can be applied to overvaluing things with little or no apparent value.
– Kurt Vonnegut
But why?
Because otherwise, you might self-modify into an agent that’s worse at achieving universal instrumental goals than you are now, or one with less achievable terminal goals. Wouldn’t that suck? Be artificial, but do so carefully.
I’ve been looking up some American people (radical activists/left-wing theorists/etc.) about whom I knew little, but I was surprised at how they’re a byword for evil incarnate to every right-wing blogger out there. I don’t have any political or moral judgment about what I’ve read regarding them (or at least let’s pretend that I don’t), but incidentally I found a nice quote:
Saul Alinsky, Rules for Radicals
And here’s some rather… more spicy stuff from him:
Alinsky is interesting to me because it seems like he was one of the first to notice a new, likely-to-be-effective method of social change—and then he used up all the effectiveness of the technique.
I wouldn’t expect non-violent protest (in America) to be capable of that kind of social change in the future, because those in power have learned how to deal with it effectively (mass arrests for minor infractions and an absolute refusal to engage in political grandstanding). By this point, mass protests are quite ineffective at creating social change here in the US (consider the relative pointlessness of the Occupy movement).
I’m sure there are other examples of techniques of social change becoming totally ineffective as authorities learned how to respond better, but I can’t think of any off the top of my head.
I’d also like to mention that the American Right’s treatment of Alinsky is really depressing. Just one random quote: “Alinsky got what he wanted in the form of 90% illegitimacy rates among American blacks and poverty wholly dominated by single mothers.”
Really? A guy who taught little people how to stand up for themselves in ruthless tribal politics… somehow single-handedly (or with his evil college-student henchmen) caused a complicated social problem that has existed since Segregation’s end—instead of, I dunno, making communities more unified and more conscious of the war that is life (as trade unions become under good non-dogmatic leadership)?
(Another stunning lie: “Alinsky’s entire adult life was devoted to destroying capitalism in America — an economic system he considered to be oppressive and unjust.”
He talked of working within the system and changing it slowly and patiently all the time—for moral as well as tactical reasons. “Those who enshrine the poor or Have-Nots are as guilty as other dogmatists and just as dangerous”, he wrote. And: “The political panaceas of the past[2], such as the revolutions in Russia and China, have become the same old stuff under a different name… We have permitted a suicidal situation to unfold wherein revolution and communism have become one. These pages are committed to splitting this political atom, separating this exclusive identification of communism with revolution.”
“Let us in the name of radical pragmatism not forget that in our system with all its repressions we can still speak out and denounce the administration, attack its policies, work to build an opposition political base. True, there is government harassment, but there still is that relative freedom to fight. I can attack my government, try to organize to change it. That’s more than I can do in Moscow, Peking, or Havana. Remember the reaction of the Red Guard to the “cultural revolution” and the fate of the Chinese college students.[1] Just a few of the violent episodes of bombings or a courtroom shootout that we have experienced here would have resulted in a sweeping purge and mass executions in Russia, China, or Cuba. Let’s keep some perspective.”)
Sadly, even M.M. chimed in when that hysteria was at its peak around the 2008 elections, with Obama’s supposed methodological connection to the evil treasonous commie terrorist trumpeted everywhere on the “fringe” websites. And that’s the kind of people most likely to boast of their reasoning and objectivity online?
Mencius also blasted the SDS (Students for a Democratic Society) who used Gandhi’s nonviolent tactics to attack the very literal Ku Klux Klan rule in Mississippi during the so-called Freedom Summer, risking life and limb, and a small part of whose members formed the semi-violent terrorist group Weather Underground a decade later.
[1] Yep, the “Cultural Revolution” was less a government-initiated purge in the image of 1937 than it was a little civil war between two slightly different factions of zealots.
[2] For a brilliant example of this madness dressed as conservatism, just look at this idiot. He took Alinsky’s sardonic reference to those revolutions’ hype as “panaceas” as a sign of approval!
America, Fuck Yeah.
P.S. To be fair, here’s a voice of sanity from some libertarian dude, who has the misfortune of posting at a site that even Moldbug rightly called a useless dump.
You’re confusing standing up for oneself with mass defection from social conventions. The fact that modern blacks have learned to confuse the two is a large part of the reason why they’re stuck as an underclass.
It wasn’t nearly as bad at segregation’s end as it is now.
Yes, that’s why black communities today consider members who study hard or try to integrate into mainstream society (outside of racial advocacy) as traitors who are “acting white”.
I don’t know much about any of that, but blaming the first on Alinsky sounds just ridiculous (and evokes nasty associations for people who are conscious of anti-black rhetoric throughout U.S. history). Have you looked at his activities? And do you think he only worked with blacks, or resented whites, or what?
http://www.progress.org/2003/alinsky2.htm
The last one might be exaggerated, too. Are successful (non-criminal) black businessmen hated and despised in their communities?
(Overall, you sound a touch mind-killed.)
True, I was exaggerating by blaming him for the effects of the movement he was a part of.
No, and I’m sure he did some similar damage to some white communities as well.
Well, it depends on how they succeeded (someone who succeeded in sports or music is more accepted than someone who succeeded through business).
What about yourself? At the risk of engaging in internet cold reading, I think you were so scarred by what you perceive as “right-wing technocracy”, as expressed by Moldbug and some of his fans on LW, that you’re desperately looking for any ideology/movement that seems strong enough to oppose it.
Replied elsewhere.
Well, there’s a grain of truth to that, but I’ll try not to compromise my ethics in doing so. I’d put it like this: I have my ideology-as-religion (utopian socialism, for lack of a better term) and, like with any other, I try to balance its function of formalizing intuitions versus its downsides of blinding me with dogma—but I’m open to investigating all kinds of ideologies-as-politics to see how they measure against my values, in their tools and their aims.
Also, I consider Moldbug to be relatively innocent in the grand scheme. He says some rather useful things, and anyways there are others whose thoughts are twisted far worse by that worldview I loathe; he’s simply a good example (IMO) of a brilliant person exhibiting symptoms of that menace.
My good sir, if you are a utopian socialist, it unfortunately seems to me that you are striving to treat a fungal infection while the patient is dying of cancer.
I said it’s my ideal of society, not that I’d start collectivizing everything tomorrow! Didn’t you link that story, Manna? If you approve of its ideas, then you’re at least partly a socialist too—in my understanding of the term. Also, which problems would you call “cancer”, specifically?
Oh, I didn’t mean to imply you would! But surely you would like to move our current society towards that at some (slow or otherwise) rate, or at least learn enough about the world to eventually form a good plan for doing so.
Nearly every human is, I think. Socialism and its variants tap into primal parts of our minds and their ethical and political intuitions. And taking seriously most of our stated ethics, one is hard-pressed not to end up a libertarian or a communist or even a fascist. Fortunately, most people don’t think too hard about politics. I don’t want the conversation to go down this path too far, though, since I fear the word “socialist” is a problematic one.
Specifically, the great power structures opposing moves towards your ideal. It almost doesn’t matter which ideal, since those that I see would oppose most change, and I have a hard time considering them benevolent. Even milquetoast regular leftism thinks itself fighting a few such forces, and I would actually agree they are there. You don’t need to agree with their bogeyman; surely you see some much more potent forces shaping our world, forces that don’t seem inherently interested in your ideals and that are far more powerful than… the writer of a photocopied essay you picked up on the street?
As Moldbug himself points out, since the barrier to entry for writing an online blog is so low, absent other evidence you should take him precisely as seriously as a person distributing such photocopied essays. How many people have read anything by Moldbug? Of those, how many agree? Of those, how many are likely to act? What if you take the entire “alternative” or “dissident” or “new” right and add these people together? Do you get a million people? Do you even get 100 thousand? And recall, these are dissidents! By the very nature of society, outcasts, malcontents, and misfits are attracted to such thinking.
While I have no problem with you reading right-wing blogs, even a whole lot of them, since I certainly do, I feel the need to point out that you cite some pretty obscure ones, which even I have barely heard of, let alone followed. Doesn’t that perhaps tell you that you may be operating under a distorted view or intuition of how popular these ideas are? By following their links and comment sections, your brain is tricked into seeing a different reality from the one that exists; take a survey of political opinion into your hands and check the scale of the phenomenon you find troubling.
Putting things into perspective, it seems a waste to lose sleep over them, does it not? Many of them are intelligent and consistent, but then so is Will Newsome, and I don’t spend much time worrying about everlasting damnation. If you want anything that can be described as “utopian” or “socialist”, your work is cut out for you; you should be wondering how to move mountains, not stomp on molehills.
That’s a good comment, thanks. You’ve slightly misunderstood my feelings and my fears, though. I’ll write a proper response.
In brief, I fear alt-right/technocratic ideas not because they’re in any way popular or “viral” at present, but because I have a nasty gut feeling that they—in a limited sense—do reflect “reality” best of all, that by most naive pragmatist reasoning they follow from facts of life, and that more and more people for whom naive reasoning is more important than social conventions will start to adopt such thinking as soon as they’re alerted to its possibility.
And, crucially, in the age of the internet and such, there will be more and more such under-socialized, smart people growing up and thinking more independently—I fear it could be like the spread of simplified Marxism through underdeveloped and humiliated 3rd-world countries, and with worse consequences. See the Alinsky quote above—“revolution and communism have become one”. If rationalism and techno-fascism become “one” like that, the whole world might suffer for it.
I’m following you from your links in “Nerds are Nuts”, and I would like to restate your second paragraph to make sure I have your beliefs right.
The reason the alt-right is scary is not because they are wrong in their beliefs about reality, but because they are correct about the flaws they see in modern leftism, and this makes their proposals all the more dangerous. Just because a doctor can diagnose what ails you, it does not follow that he knows how to treat you. The alt-right is correct in its diagnosis of societal cancers, but their proposals look depressingly closer to leeching than to chemotherapy.
Is this an accurate restatement?
In all frankness, that’s how I bellyfeel it.
What positive beliefs about politics do you have in light of your fear of necromancy and cancer? My intuition says some form of pragmatic Burkean conservatism but I don’t want to typecast you.
Well, I respect Burke a lot, but my true admiration goes out to people like Chesterton (a big fan of the French Revolution) and Kropotkin and Orwell and maybe even the better sort of religious leader, like John Paul II—the ones who realize the power and necessity of ideology-as-faith, but take the best from both its fronts instead of being tied down on one side. In short, I love idealism.
(If forced to pick among today’s widely used labels, though, I’d be OK with “socialist” and not at all with “conservative”.)
I thought about this on and off the rest of yesterday and my belief is that these two statements are key.
What I get from this is the divide between the epistemological and the instrumental. Using that classic LessWrong framework, I’ve come to this as a representation of your views:
In order to understand the world, if you are going to err, err on the side of Cynicism. But, if you are going to live in it and make it better, you have to err on the side of Idealism.
Cynicism is epistemologically useful but instrumentally destructive. (Explained by the fact that you agree with the alt-right in the pessimistic view of the world and the reasons things are not as good as they could be.)
Idealism is instrumentally useful but epistemologically destructive. (Explained by the fact that you regard ideology-as-faith as vitally useful, but that doesn’t make faith true.)
Is this a fair reading?
I struggled with something similar a while ago, and Vladimir_M had a different take.
I really like summarizing to make sure I get things right. Watch as I prove it!
When dealing with real-world morality and goal-seeking behavior, we seem forced to stare the following facts in the face:
We are very biased.
We could be more rational.
Our rationality isn’t particularly good at morality.
Complicating this are the following:
Heuristics generally work. How much rationality do you need to outcompete moral and procedural heuristics?
Just how rational can we get? Can low-IQ people become much more rational, or are we forced to believe in a cognitive, rationality-based elite?
Should we trust moral reasoning or heuristics at all?
I’ve seen the following conclusions drawn so far by people who take bias seriously. (There may be more; this is just what I’ve encountered. Also, the first two are just jokes I couldn’t resist.)
Lovecraftian: The world is ruled by evil Gods beyond imagination. I have seen too much! Go back into the warm milk-bath of ignorance! Chew your cud, you cows, and never think of the slaughter to come!
Existentialism: Everything sucks forever, but let’s not kill ourselves, because it’s better to push a rock up a mountain or something. We can never know anything and nothing can ever mean anything, so we should talk about it forever. Give me Tenure! Linkin Park and Tenure!
Moldbuggery: Bias is bad, real fucking bad. The current systems don’t encourage rationality all that much either. Only a cognitive elite can ever become debiased enough to run things and they should only be trusted if we get a system that aligns with the interests of the subjects. (Ex: Aurini, GLADOS, Konkvistador, Moldbug, Nick Land)
[I had a section on Robin Hanson, but I don’t think I understand it well enough to summarize on reflection, so “This Page Left Blank”]
Old Whiggish: We are very biased and ought to trust intuition, tradition, and reason in roughly equal measure. We pride reason too much, and so people who try to be perfectly rational are worse reasoners than those who allow a little superstition in their lives. Our heuristics are better than we think. If it works, we should keep it even if it isn’t true. (Ex: Taleb, Derbyshire, Burke, Marcus Aurelius. Almost Jonathan Haidt post-“The Righteous Mind”, but not quite)
Rational Schizophrenia: A pure cynicism about how things are should be combined with an idealism of how to act. [See above for Multithreaded’s advice]
Yudkowskyianism: Bias is very bad, but our prospects for debiasing are less pessimistic than either of those make them out to be. Rationality is like martial arts: anyone can learn to use leverage regardless of cognitive strength. Though there are clear ways in which we fail, now that we have Bayesian probability theory derived from pure logic, we know how to think about these issues. To abuse a C.S. Lewis quote: “The Way has not been tried and found wanting; it has been found difficult and left untried.” Try it before giving up, because something is only undoable until somebody does it. (Ex: Lukeprog, Yudkowsky)
How does that strike you as the current “rationality landscape”? Again, I’m largely new here as a community member, so I could be mischaracterizing or leaving ideas out.
The first glance, as usual, reveals interesting things about one’s perception:
That’s honestly how I read it at first. Ha.
BTW Konkvistador belongs in better company (nothing against the others); I’ve come to admire him a little bit and think he’s much wiser than other fans of Moldbug.
Oh, and speaking of good company… “pure cynicism about how things are combined with an idealism of how to act”—that sounds like the ethics that Philip K. Dick tentatively proposes in his Exegesis; shit’s fucked, blind semi-conscious evil rules the world, but there’s a Super-Value to being kind and human even in the face of Armageddon.
I asked Konkvistador in IRC if he endorsed the Moldbuggery statement, and he liked it. But I think I want to decontextualize the attitudes toward bias and debiasing so I can better fit different authors/posters together. :/
I’ve come up with fatalism/pessimism/elitism/rational schizophrenia/optimism. With that breakdown I can put Konkvistador in the same category as Plato. I love the name “rational schizophrenia” too much to give it up.
I liked it too, thanks! :)
.
Huh… yeah! I’d sign under that. And, when you phrase it so nicely, I’m sure that a few others here would.
I’d endorse too (with appropriate caveats about what part of the alt-right I struggle to reject), but the meta-ethical point Karmakaiser is making doesn’t help decide what ethical positions to adopt—only what stance one should take towards one’s adopted moral positions.
Also, there’s an interesting writer with agreeable sentiments coming up on my radar after 30 seconds of googling. His name’s Garret Keizer.
http://www.motherjones.com/politics/2005/03/left-right-wrong
Shit, I’d better start reading this guy!
I see, so this is why you seem to often bring up such discussions on LessWrong? Because you see it as a repository of smart, under-socialized, independent thinkers? I do, to a certain extent, and in this light your most recent writing appears much more targeted rather than an overblown obsession.
Do you think this might already be happening? The naive, social-convention-ignoring utilitarianism we often find ourselves disagreeing with seems to be remarkably widespread among baseline LessWrongers. One merely needs to point out the “techno-fascist” means and how well they might work, and I can easily see well over a third embracing them, and even more, should criticism of “Cathedral” economic and political theory become better understood and more widespread.
But again, remember that the “alternative right” has plenty of anti-epistemology and mysticism springing from a fascination with old fascist and, to a lesser extent, New Left intellectuals; this will, I think, restrain them from fully coalescing around the essentially materialist ethos that you accurately detect is sometimes present.
And even if some of this does happen either from the new right people or from “rationalists” and the cognitive elite, tell me honestly would such a regime and civilization have better or worse odds at creating FAI or surviving existential risk than our own?
But recall what Vladimir_M pointed out: in order to gain economic or political power, one must, in the age of the internet, be more conformist than before, because any transgression is one Google search away. Doesn’t this suggest there will be some stability in the social order for the foreseeable future? Or that if change does happen, it will only occur if a new ideal is so massively popular that “everyone” transgresses in its favour? Then punishment via hiring practices, reputation, or law becomes ineffective.
Also: a third of LWers embracing technofascism? Is that a reference to a third of angels siding with Lucifer in Paradise Lost? Or was this unintended, a small example of our narrative patterns being very similar from Old Testament to Milton to now?
I’m glad you caught the reference. :)
Surviving existential risk, probably. But, unlike today’s inefficient, corrupt, narrow-minded liberal oligarchy, such a regime would—precisely because of its strengths and the virtues of the people who’d rise to the top of it (like objectivity, dislike of a “narrative” approach to life, and a cynical understanding of society)—be able to make life hardly worth living for people like us. I don’t know whether the decrease in extinction risk is worth the vastly increased probability of a stable and thriving dystopia, where a small managerial caste is unrestrained and unchallenged. Again, democracy and other such modern institutions, pathetic and stupid as they might be from an absolute standpoint, at least prevent real momentous change.
And their “F”AI could well implement many things we’d find awful and dystopian, too (e.g., again, a clean, ordered society where slavery is allowed and even children are legally chattel slaves of their parents, to be molded and used freely)—unlike something like this happening with our present-day CEV, it’d be a feature, not a bug. In short, it’s likely a babyeater invasion in essence.
(more coming)
I want to hear more about the Moldbuggian dystopia. Should make excellent SF.
I’m writing it! In Russian, though.
I think your idea that for people’s lives to be worth living they need to have certain beliefs is one of your ugliest recurring themes.
I’m a moral anti-realist through and through, despite believing in god(s). I judge everyone and their lives from my own standpoint. Hell, a good citizen of the Third Reich might’ve found my own life pointless and unworthy of being. Good thing that he’s shot or burnt, then. There’s no neutral evaluation.
You sound like a subjectivist moral realist.
Possibly even what we tend to call “subjectively objective” (I think we should borrow a turn of phrase from Epistemology and just call it subject-sensitive invariantism).
You don’t sound like a moral anti-realist at all.
Keep in mind that while every improvement is a change, most potential changes are not improvements and for most ideals, attempting to implement them leads to total disaster.
Yep. Both he and I have stressed the first half of that several times in one form or other. However, it’s nonsense to say that trying to implement ideals is bad, period, because the problem here is that humans are very bad at some things that would be excellent in themselves—like a benevolent dictatorship. If, for example, we had some way to guarantee that one would stay benevolent, then clearly all other political systems should get the axe—to a utilitarian, there’s no more justification for their evils! But in reality attempts at one usually end in tears.
However, trying to, say, keep one’s desk neat & organized is also an ideal, yet many people, particularly those with OCD, are quite good at implementing it. It is clear, then, that whatever we do, we should first look to psychological realities, and manipulate ourselves in such a way that they assist our stated goals or just don’t hinder them as much.
You left these quotes unsourced:
They’re not from “real” articles by “real” journalists/propagandists or whatever, just from random blogging idiots. I simply picked a couple of representative ones.
Quote 1: http://ricochet.com/main-feed/Buckley-vs.-Alinsky (see comments)
Quote 2: http://www.escapetyranny.com/2011/04/30/fascinating-video-of-the-young-bill-buckley-interviewing-saul-alinsky/
Goddamn, the second guy is just too dumb to breathe, but the first one freaks me out. Apparently he’s one of those peculiar Catholics who never heard of the New Testament and its values. And maybe a “rationalism”-worshipper, too… those traits, as I’ve seen in that corner of the blogosphere, aren’t as antagonistic as one might assume.
(Yes, Buckley might’ve been a decent man, but he shouldn’t have gone on TV with that voice. Alinsky’s is little better, but at least he sounds remotely like a public speaker. This might be just some kinda distortion on the record, I dunno.)
Representative of what? Why not give representative quotes from the very best and brightest Alinsky critics?
For instance:
Later:
Later:
That’s decent and interesting criticism. Indeed, Alinsky appears to have been a hardcore syndicalist, and both Buckley and I are to the right of him, although Buckley’s a lot further. However, that last one is very dubious to me:
Since Marx, leftists have probably heard this kind of argument in most debates: advanced civilization generates—or will eventually—so much charity in all its forms (through both tradition and individual kindness) as to cure most of the lower classes’ problems and thus make many concerns of unfairness and inequality irrelevant.
Alinsky clearly understood the problem with that: charity is in itself a status race and a status pump; it can be wielded with malice and used to keep people down. Just look at Africa and how we’re trying to drown it in money instead of coming over there en masse and applying real help, manually. (Which is also problematic status-wise, but at least it might actually improve a society.)
The argument is not that, for example, the United States is perfect. It’s that whatever Marxists replace it with will be worse.
A lot of people “understand” this problem in the sense that they know it exists in the existing system. Unfortunately, they frequently have no better understanding of the causes and potential solutions than some version of “the current system has these problems because it is evil/corrupt, once we replace it with our new good/pure system these problems will magically go away”.
That’s what we were doing until leftists forced us to stop on the grounds we were “oppressing” them.
Note: If you think colonialism was indeed bad, what makes you think doing something similar again will turn out any different?
I’m modestly familiar with the works of Marx, but I don’t know what “syndicalism” is. And I don’t know what proposal you’re making, or alluding to, with this:
Sounds ominous!
I think the Africa reference is to perspectives found in books like Dead Aid
I think Moyo and other aid critics don’t advocate that Russians come to Africa en masse and apply real help, manually.
Relatedly, if we don’t want to think about a situation, we frequently convince ourselves that we’re powerless to change it.
Less relatedly, I am growing increasingly aware of the gulf between what is implied by talking about “people” in the first person plural, and talking about “people” in the third person plural.
Oh lawd!
Now just imagine what Anonymous could’ve done today with him around!
I weakly suspect that this was in fact the inspiration for /b/’s infamous “Pool’s Closed” raids.
It’s amazing what you can accomplish if you can convince a large enough group of people that defecting from social norms is a good idea.
Yep. Maybe we’d do well to experience something like that at least once and learn from it, either in an apolitical act or acting with one’s preferred tribe. My BF is a friend/assistant of a weird Russian actionist group, so he knows a bit about this kind of stuff. He tells me it feels liberating in a way.
(On that note, Palahniuk-inspired fight clubs are also somewhat popular as a transgressive/liberating activity among Russian young men. There’s graffiti advertising one all around my middle-class neighbourhood. I’m not gonna try one, though; I can withstand some hardship but loathe physical pain.)
If you’re interested in the general idea, but don’t want to just go to some basement and get beat up in an uncontrolled environment, Система Кадочникова (the Kadochnikov System) works up in a slow and controlled manner to getting punched without having it bother you unduly; I think the training can invoke many of the same feelings (although I’ve never done an actual fight club, for basically your reasons).
Heh, thanks. I’d rather look into some more basic street-fighting classes, though. To tell the truth, martial arts scare me a little bit with how much spiritual dedication they demand.
I’ve got a few simple self-defence and outdoor survival lessons from a 3-month class I attended after high school; it was pretty neat. Then, half a year ago, when I got robbed at knifepoint by some homeless drunk, at least I didn’t do anything stupid. He wasn’t all that big and had a really tiny penknife, and I had a very thick winter coat; in retrospect I could’ve stunned him with an uppercut—we practiced sudden attacks in the class, as the lesson was basically “A serious fight lasts for one blow”—but with the fear and the adrenaline and the darkness I perceived the knife to be about 20cm long, three times larger than it really was, and decided not to resist. He pressed me against a wall, told me to give him my phone, and dashed.
I went home, called the police, and to my surprise they got him that very night, as they had his picture from a drunken brawl before; he hadn’t even pawned my phone. I was really calm and collected while dealing with the cops and all that (all in all, they sure surpassed my rather low expectations of the Russian police), but later I felt rather sick… a little like being raped, I presume. I got over it quickly, though; I feel a little sorry for that shit-stain and his inhuman life.
(By the way, that bit with the knife sure was funny in retrospect. When the cops showed me the evidence and asked me to confirm it, I initially said something like “Well, yeah, the blade had the same shape… but it was at least two times bigger, I swear! Might he have another knife or something?” The detective was kinda amused.)
Interpreted as a conditional statement this is almost certainly false (I completed a degree in political science even though half-way through I understood that me trying to achieve “positive change” was hopeless). What do you think he means? How could we test such a claim?
One rather large-scale example, discussed in this community since the beginning of time: deathism and the general public’s attitude (or lack thereof) to cryonics.
“You know, given human nature, if people got hit on the head by a baseball bat every week, pretty soon they would invent reasons why getting hit on the head with a baseball bat was a good thing.”—Eliezer
Good example.
I think the quote can be interpreted as likely to be true of many people rather than absolutely true of everyone.
Yes, this appears to be the most charitable interpretation.
“Our gods are dead. Ancient Klingon warriors slew them a millennia ago; they were more trouble than they were worth.”
Lt. Cmdr. Worf, regarding Klingon beliefs
As badass as this bit of Klingon mythology may be, I’m not sure I see the relevance to rationalism. If I understand correctly, then what was considered “more trouble than they were worth” were the actual, really existing gods themselves, and not the Klingons’ belief in imagined gods.
I was thinking in terms of moral realism and appropriate ambition rather than atheism or epistemology. The right response to a tyrannical or dangerous deity is to find a way to get rid of it if possible, rather than coming up with reasons why it’s not really so bad.
Ah, I see. I hadn’t thought of it that way.
Upvoted.
Evelyn Baring, Earl of Cromer, Modern Egypt
-- Anais Nin
This misses the point. There shouldn’t be any mystery left. And that’ll be okay.
With perfect knowledge there would be no mystery left about the real world. But that is not what “sense of wonder and mystery” refers to. It describes an emotion, not a state of knowledge. There’s no reason for it to die.
Nicely said. I’d like to add that perfect knowledge can only be of the knowable. The non-knowable is irreducibly wondrous and mysterious. The ultimate mystery, why there is something rather than nothing, seems unknowable.
There are plenty of inherently unknowable things around. For instance, almost all real numbers are uncomputable, and even undefinable in any given formal language.
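(A minimal sketch of the standard counting argument behind that claim, for anyone who wants it spelled out: a formal language has only countably many finite strings, so it can define at most countably many real numbers. But Cantor’s diagonal argument shows that the reals are uncountable. Hence all but countably many reals are undefinable; and since any countable set of reals has measure zero, “almost all” holds in the measure-theoretic sense as well. The computable reals are a subset of the definable ones, so the same goes for uncomputability.)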
You can’t stop looking for flaws even after you’ve found all of them, otherwise you might miss one.
Also: http://en.wikipedia.org/wiki/Unexpected_hanging_paradox
Not sure why you brought this up, but as long as you did, I’d like to share my resolution of this paradox. Basically, it hinges on the definition of a surprise. If the prisoner is spared on Wednesday, he will know that he is doomed on Thursday or Friday, but is ignorant of which of these possibilities is true. So when Thursday dawns, whatever outcome obtains will be surprising. To say we are surprised by an event is simply to say that we cannot predict it in advance. Therefore, you can only reason about surprise looking forward in time, not backward.

Or look at it this way. What if the judge told the prisoner that he was going to draw a slip of paper from a hat containing five slips, labeled Monday through Friday, and execute him on that day? Whatever day is chosen will be a surprise to both judge and prisoner.
http://xkcd.com/1050/
I think that the relevant distinction is “is it really horribly unpleasant and I make no progress no matter how long I spend and I don’t find correct output aesthetically pleasing.”
“Weird” is a statement about your understanding of people’s pride, not a statement about people’s pride.
Being proud of not learning math includes math like algebra or conversion of units. That sort of math, which might be taught in elementary school, is practically useful in daily life. Being proud of not knowing that kind of math is profoundly anti-learning. The attitude applies equally to learning anything, from reading to history to car mechanics.
Something a not-especially-mathsy friend of mine said a while back:
Then how do you explain, in your model, the comic’s implicit observation that people do not apply this same attitude to learning to play music, cook, or speak a foreign language? Let’s try to fit reality here, not just rag on people for being “anti-learning” in the same way others might speak of someone being “anti-freedom”.
Briefly: cognitive bias of some kind. Compartmentalization. Belief that what I like and enjoy is good and worthwhile, and what I dislike is bad and useless. It’s the failure to apply the lesson from a favored domain to an unfavored one that is the worthwhile point of the author’s statement.
Not many people are required to take cooking classes, hardly anyone goes twenty years after graduating without ever needing to cook, and there are lots of people “proud” of not learning foreign languages. And playing music is higher-status than doing maths.
--Heartiste (the blogger formerly known as Roissy), on useful stereotypes. Source.
Which is why you shouldn’t dismiss Jesus’s face on a toast as pareidolia.
-- Alcatraz Smedry in Alcatraz versus the Evil Librarians, by Brandon Sanderson.
That did not go in anything like the direction I expected. :-)
--Steve Sailer, commenting on cultural changes and words
Source.
Interesting. But those words are still used to promote. Impossible for me to say whether they are used that way less now than before… I guess I will take Sailer’s word for it?
From Terry Pratchett’s Unseen Academicals (very minor/not significant spoilers):
If you feel the need to put the quote in rot13 to avoid spoilers, it’s probably not worth posting at all (I don’t think that this quote spoils anything significant about the plot in any case.)
I see. I think the quoted text is very representative of rational thinking, but since I personally don’t like spoilers/previews very much, I opted for caution and rot13ed it. My thinking was that an unseen quote can be seen later if so wished, but it is harder to forget something already read. But perhaps for most people the discordance of seeing a lone rot13ed text has a greater negative utility than that of reading a very minor spoiler/preview? If that is so, I will unrot13 it.
In any case, thank you for your input. For now, I will edit the parent so that it is clear that the severity of the spoiler is very low.
It is not merely that a stock of true beliefs is vastly more likely to be helpful than a stock of false ones, but that the policy of aiming for the truth, of having and trying to satisfy a general (de dicto) desire for the truth—what we might simply call “critical inquiry”—is the best doxastic policy around. Anything else, as Charles Peirce correctly insists, leads to “a rapid deterioration of intellectual vigor.”—Richard Joyce, The Myth of Morality (2001), p. 179.
- David Mamet
ETA: Gwern checked the book and posted the relevant section below. I got it backwards—seven to twelve are the ages most likely to die. Six and under are more likely to survive.
Actually, there’s something rather like that in Deep Survival, a book that’s mostly about wilderness survival. IIRC, six to twelve year olds are more likely to survive than adults, and it’s because of less fear of embarrassment.
However, the author didn’t go into a lot of details about which mistakes the adults make—I think it was that the kids seek cover, but the adults make bad plans and insist on following through with them.
Downloading the book, pg236, you forgot one interesting detail:
http://wiki.lesswrong.com/wiki/Valley_of_bad_rationality ?
All I can say at the moment is WOW.
I think I read that book, but I can’t put my hands on it just this second.
-- Warren Ellis, Transmetropolitan
“Nothing matters at all. Might as well be nice to people. (Hand out your chuckles while you can.)”
A Softer World. I’ve always loved that webcomic, as sappy as it is.
(Mouse over a strip to see its last sentence.)
Also:
“You were my everything. Which, upon reflection, was probably the problem.”
“Overreaction: Any reaction to something that doesn’t affect me.”
“Civilization is the ability to distinguish what you like from what you like watching pornography of. (And anyway, why were you going through my computer?)”
“The Internet made us all into cyborgs with access to a whole world of information to back up whatever stupid thing we believe that day. (The Racist Computer Wore Tennis Shoes)”
“Everyone wants someone they can bring home to mom. I need someone to distract my mom while I raid the medicine cabinet. (Someone who thinks suggested dosages are quaint.)”—that’s not a rationality quote, but it’s how my boyfriend thinks and operates.
Walter Lewin
In general, science is only boring when you don’t understand it.
Even people who love science often regard areas other than their field of expertise as dull. In reality, I suspect that if they took the time to better understand those “dull” specialties they’d find them fascinating as well.
Careful, you might have reversed cause and effect there.
It goes without saying that things you can’t comprehend are boring regardless of their actual content; nobody wants to re-read their favorite 1,000-page novel as a PGP-encrypted string, for example. It’s also a fact that scientists don’t have the knowledge to comprehend the interesting bits of a field they haven’t specialized in. So there’s no plausible route by which people could really know whether a field they lack expertise in truly would be dull to them, even if it would in fact be dull: they must be assuming it to be dull despite a lack of comprehension. I could be wrong about the cause and effect, but I could not have reversed it. This raises the question of how people get into a field at all in the first place, when it’s still gibberish to them.
To be honest, though, I was merely generalizing from my own experience. I’ve yet to find any branch of science that didn’t fascinate me upon close inspection. I’ve been in many situations where I had no real choice but to study something in detail which I didn’t expect to be interesting but needed for a specific end goal. Every time, it seemed initially dull and pointless while I was struggling with the nomenclature and basic concepts, until I reached a critical point whereby it became intensely interesting.
However, it’s true that I’ve chosen to do inter-disciplinary work, and that could be due to me having some unusual trait whereby everything is interesting to me.
Heh… now I’m feeling nostalgic about Prof Lewin’s freshman physics lectures.
Haven’t thought about them in years.
- Laura van Dernoot Lipsky
-- American Gods by Neil Gaiman.
This quote hides a subtle equivocation, which it relies on to jump from “you have X” to “you do not have X” without us noticing.
If I have a map I can look at it, draw marks on it, and make plans. I can also tear it to pieces and analyse it with a mass spectrometer without damaging the territory. Make the map I start with more accurate and I can draw on it in more detail and make more accurate analyses. Make the map nearly perfect and I can get nearly perfect information from the map without breaking anything in the territory. Moving from ‘nearly perfect’ to ‘perfect’ does not mean “Oh, actually you don’t have one territory and also one map. You only have this one territory”.
As a practical example consider a map of a bank I am considering robbing. I could have blueprint of the building layout. I could have detailed photographs. Or I could have a perfect to-scale clone of the building accurate in every detail. That ‘map’ sounds rather useful to me.
Imprecision is not the only purpose of a map.
I know this is probably an ad hominem, but isn’t Gaiman the guy who wrote Doctor Who episodes? The worst sci-fi show ever.
Many, many writers have written for Doctor Who. Gaiman has done many, many things in his writing career besides writing for Doctor Who. And Doctor Who is a cultural phenomenon larger than any trite dismissal of it.
Whether or not it’s a large cultural phenomenon has nothing to do with how sensible the material is. It’s actually probably brilliant fantasy, I would agree, but if I’m looking for good sci-fi it’s a bore-fest.
He was a guest writer a couple times. He’s better known for fantasy novels and comics.
Doctor Who is one of my favourite shows (top five, higher if we count only shows that are still running.) I don’t know to what extent knowledge of our different preferences regarding Doctor Who could be used to predict differences in our evaluations of the rationality of a given Gaiman quote.
Oh, I completely agree. It’s just that my experience of Doctor Who has been that it’s a well of irrational storylines. For example, why would the TARDIS have a soul?
There does seem to be an awful lot of arbitrariness involved in the plotlines. For whatever reason, it doesn’t seem to contain much of the particular kind of irrationality that I personally detest, so for me it is just a fun adventure with increasingly pretty girls.
It is closer to an extremely advanced horse than an extremely advanced car. That doesn’t bother me too much. Some of the arbitrary ‘rules’ of time travel are more burdensome.
What gets me is that you can change the past except when you can’t. They’ve tried to explain it away using “fixed points” which can’t be changed but even that doesn’t hold together.
For instance, the Doctor just admitted that he could change things that he thought he couldn’t change, and 1) brought back Gallifrey from the Time War, and 2) brought it back into our universe to prevent his death. If I were him, this would be the point where I’d say “Maybe I should go and see if I can bring back Rose’s father too. Then I can start on Astrid, and maybe that girl from Waters of Mars”.
And Gaiman’s episode was bizarre. He had the TARDIS acting like a stereotypical wife when at the same time the TARDIS crew included an actual husband and wife, and they didn’t act towards each other like that. And if the TARDIS is sentient, there’s no reason he couldn’t hook a voice box into it, except that doing so, thus actually following the logical implications of the TARDIS being sentient, would mess up the rest of the series. That episode was just a blatant case of someone wanting to write his pet fan theory into the show and getting to do so because he is Neil Gaiman.
The series also takes a negative attitude towards immortality, despite the Doctor living for a long time.
I’m also sick and tired of the Doctor deciding that a problem whose only obvious solution is violence and killing can be magically solved if he just refuses to accept that the solution is violence and killing. In the real world, such a policy would lead to even more killing.
puts on Doctor Who nerd hat
Those were two different forms of “can’t change this thing”. The time lock prevented him from interfering with the time war at all, to the point where he couldn’t even visit—an artificial area-denial system. Fixed points, on the other hand, are … vague, but essentially they are natural (?) phenomena where Fate will arbitrarily (?) ensure you can never change this thing. They serve to allow for time travel stories designed for can’t-change-the-past systems of time travel, Oedipus Rex (or time-turner) style.
The Doctor has tried to change fixed points, in The Waters of Mars. It didn’t go well, and was portrayed as him going a bit mad with hubris.
Does it?
It seems to me that it takes a neutral stance; immortality is unquestionably good for individuals (even the Master! He’s evil!), but most of the ways to achieve it are governed by sci-fi genre convention that Things Will Go Wrong, and people don’t seem motivated to share it with humanity much.
Well … yeah. That’s really very annoying, and the writers seem to have latched onto it recently.
Then again, this is the same character who, y’know, killed everyone in the Time War. And showed he was willing to do it again in the anniversary special, even if he found a Third Option before they actually did it.
And, hey! The TARDIS was always intelligent. And its location in mind-space clearly isn’t designed for human interaction, even when “possessing” a rewritten human brain. And she wasn’t a stereotypical wife. And …
takes off Doctor Who nerd hat
OK, that’s probably enough offtopic nitpicking for one day.
Well, this is sort of off-topic, but on the other hand, a lot of this has to do with the side the show takes on topics of LW interest.
He didn’t just think he couldn’t change the destruction of Gallifrey because he was locked out of visiting. In the anniversary special, he was there, but first decided he couldn’t change history and had to let the destruction proceed as he had previously done it. He later got an epiphany and decided he could change history by just making it look like the planet was destroyed.
Likewise, in the Christmas special he couldn’t change his own death because he had already seen its effects and knew it was going to happen. He was there—he wasn’t locked out or unable to visit.
If he could get around that for his own death, it’s about time he started doing it for all the others.
I don’t believe that. For instance, look at the Doctor’s lecture to Cassandra (several years ago). Furthermore, the genre convention that immortality goes wrong is part and parcel of how much of the genre opposes immortality. Sci-fi loves to lecture the audience on how something is wrong in real life by showing those things going wrong for fantasy reasons (http://tvtropes.org/pmwiki/pmwiki.php/Main/SpaceWhaleAesop and http://tvtropes.org/pmwiki/pmwiki.php/Main/FantasticAesop).
It’s not the character so much as the story. The story clearly sends the message that it’s a bad idea to do such things and that there always is a third option.
It’s all burdensome to me.
Yet more of St. George:
Notes on the Way
I find that hard to believe. I would expect even a wasp to notice this.
Yes, before anyone pitches in with that observation, M.M. would surely quote the above with some glee. I’m confident that he’d refrain from posting the essay’s ending, though:
[1] Okay, that’s the one bit Orwell got wrong… maybe. Industrial murder did mark everything forever, though.
Why? My mental model of M.M., admittedly based on the very few things of his that I’ve read, has him not disagreeing with the above section significantly.
He’s very firmly against all past and future attempts to bring forth the aforementioned Kingdom of Heaven (except, needless to say, his own—which has the elimination of hypocrisy as one of its points). He sneers—I have no other word—at patriotic feeling, and wages a one-man crusade against ideological/religious feeling. He might dislike hatred, but he certainly believes that greed and self-interest are “enough”—are the most useful, safe motives one could have. Etc, etc, etc, etc, etc.
Orwell wasn’t exactly a supporter of patriotism or religion either. In fact, in the paragraphs you quoted you can see Orwell sneering at religion even as he admits that it can serve a useful purpose. My understanding of Moldbug’s position on religion is that it’s pretty similar, i.e., he recognizes the important role religion played in Western Civilization, including the development of science, even if he doesn’t like what it’s currently evolved into.
No offence, but I think you need to read a dozen of his post-1939 essays before we even talk about that. He was a fervent British patriot, occasionally waxing nostalgic about the better points of the old-time Empire—even as he was talking about the necessity of a socialist state!—and a devout Anglican for his entire life (which was somewhat obscured by his contempt for the bourgeois priesthood).
You’re simply going off the one-dimensional recycled image of Orwell: the cardboard democratic socialist whose every opinion was clear, liberal and ethically spotless. The truth is far more complicated; I’d certainly say he was more of a totalitarian than the hypocritical leftist intellectuals he was bashing! (I hardly think less of him due to that, mind.)
I don’t see how this brutality was lacking when humans were more religiously observant. Furthermore, the quote seems to argue for religion.
Meaning the conclusion and the conclusion’s reasoning are both wrong.
Not much revolutionary or counter-revolutionary terror, no death camps, comparatively little secret police. Little police and policing in general, actually; you could ride from one end of Europe to the other without any prior arrangements, and if you looked alright everyone would let you in. The high and mighty were content with merely existing at the top of a traditional, “divinely ordained” hierarchy; they lacked the Wille zur Macht that enables really serious tyranny, and did not attempt to forge new meanings and reality while dragging their subjects into violent insanity.
I agree that it was a cruel, narrow-minded and miserable world that denied whole classes and races a glimpse of hope without a second thought. But we went from one nightmare through a worse one towards a dubious future. There’s not much to celebrate so far.
It argues for a thought pattern and attitude to life that Christianity also exhibits at its best, but against belief in the supernatural.
Much of this is simply not the case, or ignores other large-scale problems. It may help to read Steven Pinker’s book “The Better Angels of Our Nature”, which makes clear how murder and warfare (both large and small) were much more common historically.
I’ve read a summary. I’m mostly playing the devil’s advocate with this argument, to be honest. I have a habit of entertaining my far-type fears perhaps a touch more than they deserve.
What exactly was the war on heresy?
Peasant revolts based on oppressive governance costs didn’t happen?
If we don’t count the denial of a glimpse of hope to “whole classes and races” (and genders) of people, then most of what I personally don’t approve of in the time period drops out. But even if that isn’t included in the ledger, it wasn’t all that great for the vast majority of white Christian men.
Dude, I completely agree. I’m far from a reactionary. I’m just thinking aloud. Might the 20th century have indeed been worse than the above when controlled for the benefits as well as downsides of technical progress? I can’t tell, and everyone’s mind-killed about that—particularly “realist” people like M.M., who claim to be the only sane ones in the asylum.
Let’s cash this out a little bit—which was worse, the heresy prosecutions of the medieval era, or the Cultural Revolution? I think the answer is the Cultural Revolution, if for no other reason than that more people were affected per year.
But that’s based on technological improvement between the two time periods:
More people were alive in China during the Cultural Revolution because of improvements in food growth, medical technology, and general wealth increase from technology.
The government was able to be more effective and uniform in oppressing others because of improvements in communications technology.
Once we control for those effects, I think it is hard to say which is worse.
In contrast, I think the social changes that led to the end of serious calls for Crusades were a net improvement for humanity, and I’m somewhat doubtful that technological changes drove those changes (what probably did drive them was that overarching unifying forces like the Papacy lost their legitimacy and power to compel large portions of society). Which isn’t to say that technology doesn’t drive social change (consider the relationship between modern women’s liberation and the development of reliable chemical birth control).
As a percentage of total planetary population, a large number of historical wars were worse than any 20th century atrocity. Pinker has a list in his book, and there are enough that they include wars most modern people have barely heard of.
I’m trying to compare apples to apples here. Wars are not like ideological purity exercises, nor are they like internal political control struggles (e.g., suppressing a peasant revolt, starving the kulaks).
I’d have to get a better sense of historical wars before I could confidently opine on the relative suffering of the military portions of WWII vs. the military portions of some ancient war. And then I’d have to decide how to compare similar events that took different amounts of time (e.g. WWI v. Hundred Years War)
The line between these is not always so clear. Look at the crusade against the Cathars or look at the Reformation wars for example.
I agree that the categories (war, ideological purification, suppression of internal dissent) are not natural kinds.
But the issue is separating the effects of ideological change from the effects of technological change, so meaningful comparisons are important.
Keep in mind that to take such ideas seriously and try to give them a fair hearing is in itself a transgression, regardless of whether you ultimately reject or embrace them.
You mean then, or now?
Remember what happened to Larry Summers at Harvard when he merely asked the question?
Does the phrase “Denier” cause any mental associations that weren’t there in the late 90s?
At least Copernicus was allowed to recant and live his declining years in (relative) peace.
Nicolaus Copernicus was never charged with heresy (let alone convicted). Moreover, he was a master of canon law, might have been a priest at one point, was urged to publish De Revolutionibus Orbium Coelestium by cardinals (who also offered to pay his expenses), and dedicated the work to Pope Paul III when he did get around to publishing it. Also, one of his students gave a lecture outlining the Copernican system to a crowd that included Pope Clement VII (for which he was rewarded with an expensive Greek Codex). Even had he lived two more decades, it is very unlikely he would ever have been charged with heresy.
And on that note the Galileo affair was an aberration—it’d be unwise to see it as exemplary of the Church’s general attitude towards unorthodox science. The Church was like half Thomist for Christ’s sake.
For instance, most instances of heresy were crushed successfully, without bearing fruit or gaining influence. (In part because most heresies are actually false theories, since most new ideas in general are wrong.) The Galileo incident was an epic failure of both religious meme enforcement and public relations. It hasn’t happened often! Usually the little guy loses and nobody cares.
(The above generalises beyond “The Church” to heavy handed belief enforcement by human tribes in general.)
Right, but note I said unorthodox science. Heresy was crushed, but it wasn’t common for scientific theories to be seen as heretical. Galileo just happened to publish his stuff when the Church was highly insecure because of all the Protestant shenanigans. Heretical religious or sociopolitical teachings, on the other hand, were quashed regularly.
Yes, and Summers has gone on to be a presidential adviser.
― Carlos Ruiz Zafón, The Angel’s Game
“The Psychologist Who Wouldn’t Do Awful Things to Rats” by James Tiptree, Jr.
“Contradictions do not exist. Whenever you think you are facing a contradiction, check your premises. You will find that one of them is wrong.”
Or that you’ve made an invalid inference.
Or that both of them (to reference a previous Rationality Quotes entry on arguments) are wrong.
source?
Pretty sure that was Francisco d’Anconia aka Superman, in Ayn Rand’s Atlas Shrugged.
That’s correct.
Philip K. Dick, The Man in the High Castle
On Fun Theory; by a great, drunken Master of that conspiracy:
-- Marisa Kirisame, in her Grimoire
Edmund Burke on Richard Price, in “Reflections on the Revolution in France”, which I am reading for the first time. Richard Price himself is fascinating. Here is the sermon Burke was complaining about.
Haha, it’s hard not to feel a twitch of self-righteous liberal superiority upon reading Burke’s words. Even though none of us is really “liberal” in this regard: privately, almost none of us value freedom of opinion more than spreading our own opinions; we’re just bound by a prisoner’s dilemma. Our age is merely more polite and hypocritical about it.
From this I can’t quite tell whether your first impulse/twitch was to side with Burke or Price.
I think I don’t value spreading my opinions at all. At least, I’m not interested in moving “public opinion.”
--John Derbyshire, source
Relevant.
What is the intended extension of “political stupidity” in this quote? (Intended by you in quoting it; I can hardly demand that you engage in telepathy.)
What do you think in the context of the link I called “Relevant”?
Double post
-E.T. Jaynes
The discovery of truth is prevented most effectively, not by false appearances which mislead into error, nor directly by weakness of reasoning powers, but by pre-conceived opinion, by prejudice, which as a pseudo a priori stands in the path of truth and is then like a contrary wind driving a ship away from land, so that sail and rudder labour in vain.
Arthur Schopenhauer, “On Philosophy and the Intellect”
Rephrase: why was the post retracted?
Why the strikethrough?
This post has been retracted.
We, humans, use a frame of reference constructed from integrated sets of assumptions, expectations and experiences. Everything is perceived on the basis of this framework. The framework becomes self-confirming because, whenever we can, we tend to impose it on experiences and events, creating incidents and relationships that conform to it. And we tend to ignore, misperceive, or deny events that do not fit it. As a consequence, it generally leads us to what we are looking for. This frame of reference is not easily altered or dismantled, because the way we tend to see the world is intimately linked to how we see and define ourselves in relation to the world. Thus, we have a vested interest in maintaining consistency because our own identity is at risk.
--Brian Arthur, The Nature of Technology
--Chapter One, “The Coin”, by Muphrid; see also “Joy in the Merely Real”
Huh? That doesn’t seem strange at all. It’s the first place I would have guessed—based on it being really extreme, really big and really cold.
I guess I can’t get as much of a “truth is strange, update!” kick out of this one as intended...
“Cold” isn’t typically associated with “dry” in most people’s mental maps: rain tends to be cold, snow is very cold, and even the most commonly encountered form of ice (ice cubes) melts quickly enough. So most everyday coldness gets anti-associated with dryness.
Of course, Antarctica is not everyday coldness; the ice in most of Antarctica is very far from temperatures that would make it liquid… But I understand how it could surprise someone who hadn’t thought it through.
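To put a number on how little water vapor very cold air can hold, here is a minimal sketch using the Magnus approximation for saturation vapor pressure. The coefficients are the standard over-water values; below freezing (and certainly at Antarctic temperatures) treat the result as a rough approximation only:

    import math

    def saturation_vapor_pressure_hpa(temp_c):
        # Magnus approximation over liquid water, result in hPa.
        # Standard coefficients; accuracy degrades well below 0 C.
        return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

    warm = saturation_vapor_pressure_hpa(20.0)   # roughly 23 hPa
    cold = saturation_vapor_pressure_hpa(-50.0)  # roughly 0.06 hPa
    print(f"air at -50 C holds about 1/{warm / cold:.0f} the vapor of air at 20 C")

Since saturation vapor pressure falls off roughly exponentially with temperature, air at interior-Antarctic temperatures carries a few hundred times less water vapor than temperate air, which is why the coldest places are also, in the “containing little water vapor” sense, the driest.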
And glass is a slowly flowing liquid.
No, it isn’t. But from that same ‘misconceptions’ list I discovered that meteorites aren’t hot when they hit the earth; they are more likely to be below freezing. “Melf” had been deceiving me all this time.
Rephrasing:
Your point is that this heuristic will leave you vulnerable to adopting whatever false beliefs you come in contact with? (Good point!)
That could have been more clearly put the first time...
Taboo dry—does that mean “containing little water” or “containing little liquid water”?
Either, when it comes to the part of Antarctica in question.
“A little simplification would be the first step toward rational living, I think.” ~ Eleanor Roosevelt
http://www.inspiration-oasis.com/eleanor-roosevelt-quotes.html
- Martin Luther King Jr.
- David Mamet
Here’s to the crazy ones. The misfits. The rebels. The troublemakers. The round pegs in the square holes. The ones who see things differently. They’re not fond of rules, and they have no respect for the status quo. You can quote them, disagree with them, glorify or vilify them. But the only thing you can’t do is ignore them. Because they change things. They push the human race forward. And while some may see them as the crazy ones, we see genius. Because the people who are crazy enough to think they can change the world, are the ones who do.
--Apple’s “Think Different” campaign
Dear people who are “difficulty with everyday tasks” crazy and not “world-changing genius” crazy: we got ripped off.
Those two sets aren’t always disjoint.
Didn’t say they were. Tesla, Erdős.
Upvoted because the idea is good, although I think that a lot of people have already pointed out the irony of “be a rebel by buying our mass-produced product!” slogans in general. (Tangent: in Stross’s The Jennifer Morgue this irony is used as part of a demonic summoning ritual to zombify people.)
-Jonathan Baron
Ah, but goals and desires are different things.
Vince Lombardi
And if both contestants think they can win, this maxim gets to be right 100% of the time!
I thought it was that no matter who wins that causes him to become sure of his ability.
Well, if a person is really good at what they do, that could cause them to become confident in their ability to do it well. But if they’re really bad at it, that could also cause them to be confident in their ability to do it well.
Putting whatever you think in terms of fights doesn’t do a good job. People come back rapidly with ferocious comments.
If you’re a football (American, not Eurasian) coach you’re routinely going to frame your aphorisms in terms of battles, or “fights” as you put it.
-Ray Dalio, Principles
E.T. Jaynes on the Mind Projection Fallacy and quantum mechanics:
“[T]he mysteries of the uncertainty principle were explained to us thus: ‘The momentum of the particle is unknown; therefore it has a high kinetic energy.’ A standard of logic that would be considered a psychiatric disorder in other fields, is the accepted norm in quantum theory. But this is really a form of arrogance, as if one were claiming to control Nature by psychokinesis.”
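For context, the textbook argument Jaynes is attacking runs roughly as follows (my reconstruction of the standard reasoning, not part of the quote): a particle confined to a region of size Δx has, by the uncertainty principle, a momentum spread that implies a minimum kinetic energy:

    \Delta x \,\Delta p \ \ge\ \frac{\hbar}{2}
    \qquad\Longrightarrow\qquad
    \langle E_{\mathrm{kin}} \rangle \ \approx\ \frac{(\Delta p)^2}{2m}
    \ \ge\ \frac{\hbar^2}{8 m (\Delta x)^2}

Jaynes’s objection, as I read it, is to the direction of the inference: Δp quantifies our uncertainty about the momentum, and the argument treats that uncertainty as if it were itself a physical kinetic energy, projecting a property of our knowledge onto Nature. Hence the connection to the mind projection fallacy.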
Explanation for the downvotes, please?
Very good question. People may disagree with the quote, or may think that out of context it misrepresents Jaynes. In the most charitable interpretation that occurs to me, they think you overestimate the clarity and usefulness of the quote.
I did not downvote, and did not see the post until after it had been redacted. Hairyfigment’s description is pretty good. To that, I would add that I recognize the passage from Jaynes that you’re quoting, and I do understand why it seems particularly valuable. However, a while after reading it, or without ever having read that particular passage, I do have to say that the section you quoted is much less useful, powerful, whatever, without the remainder of the passage.
It also could have been downvoted by the substantial number of users on less wrong who just generally dislike the present state of discussion on quantum physics.
redacted
Who’s this by?
redacted
I’m afraid quoting yourself isn’t allowed, sorry!
I’ve been thinking of starting a “quote yourself” thread.
Or a “quote yourself, or quote comments/posts on LW/OB” thread?
We have those periodically. They are limited to one per quarter by executive order, but they are not popular enough to sustain that frequency.
ETA: Here’s the relevant tag: http://lesswrong.com/tag/lwob_quotes/
I agree that “do not quote yourself” is probably not a necessary rule for those threads.
redacted
redacted