Open Thread August 31 - September 6
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Hilary Putnam, one of the most famous philosophers of the twentieth century, has a blog
Tumblr user su3su2u1 (probably most known to LWers for his critiques of HPMOR’s scientific claims, and subsequent fallout with Eliezer) has an interesting post about MIRI’s research strategy. I think it has some really good ideas. What do other folks think?
It seems like a lot of this is focused on MIRI giving good signals to outsiders. The “publish or perish” treadmill of academia is exactly why privately funded organizations like MIRI are needed.
The things that su3su2u1 wants MIRI to be already exist in academia. The whole point of MIRI is to create an organization of a type that doesn’t currently exist, focused on much longer term goals. If you measure organizations on the basis of how many publications they make, you’re going to get a lot of low-quality publications. Citations are only slightly better, especially if you’re focused on ignored areas of research.
If you have outside-view criticisms of an organization and you’re suddenly put in charge of them, the first thing you have to do is check the new inside-view information available and see what’s really going on.
Ever since I started hanging out on LW and working on UDT-ish math, I’ve been telling SIAI/MIRI folks that they should focus on public research output above all else. (Eliezer’s attitude back then was the complete opposite.) Eventually Luke came around to that point of view, and things started to change. But that took, like, five years of persuasion from me and other folks.
After reading su3su2u1′s post, I feel that growing closer to academia is another obviously good step. It’ll happen eventually, if MIRI is to have an impact. Why wait another five years to start? Why not start now?
+1
Just because MIRI researchers’ incentives aren’t distorted by “publish or perish” culture, it doesn’t mean they aren’t distorted by other things, especially those that are associated with lack of feedback and accountability.
If MIRI doesn’t publish reasonably frequently (via peer review), how do you know they aren’t wasting donor money? Donors can’t evaluate their stuff themselves, and MIRI doesn’t seem to submit a lot of stuff to peer review.
How do you know they aren’t just living it up in a very expensive part of the country doing the equivalent of freshman philosophizing in front of the whiteboard? The way you usually know is via peer review—e.g. other people previously declared to have produced good things declare that MIRI produces good things.
How did science get done for the centuries before peer review? Why do you place such weight on such a recently invented construct like peer review (you may remember Einstein being so enraged by the first and only time he tried out this new thing called ‘peer review’ that he vowed to never again submit anything to a ‘peer reviewed’ journal), a construct which routinely fails anytime it’s evaluated and has been shown to be extremely unreliable where the same paper can be accepted and rejected based on chance? If peer-review is so good, why do so many terrible papers get published and great Nobel-prize-winning work rejected repeatedly? If peer review is such an effective method of divining quality, why do many communities seem to get along fine with desultory use of peer review where it’s barely used or left as the final step long after the results have been disseminated and evaluated and people don’t even bother to read the final peer-reviewed version (particularly in economics, I get the impression that everyone reads the preprints & working papers and the final publication comes as a non-event; which has caused me serious trouble in the past in trying to figure out what to cite and whether one cite is the same as another; and of course, I’m not always clear on where various statistics or machine learning papers get published, or if they are published in any sense beyond posting to ArXiv)? And why does all the real criticism and debate and refutations seem to take place on blogs & Twitter if peer-review is such an acid test of whether papers are gold or dross, leading to the growing need for altmetrics and other ways of dealing with the ‘post-publication peer review’ problem as journals increasingly fail to reflect where scientific debates actually are?
I’ve said it before and I’ll say it again: ‘peer review’ is not a core element of science. It’s barely even peripheral, and it’s unclear whether it adds anything on net. For the most part, calls for ‘peer review’ are cargo culting. What makes science work is replication and putting your work out there for community evaluation. Those are the real review by peers.
If you are a donor who wants to evaluate MIRI, whether some arbitrary reviewers pass or fail its papers is not very important. There are better measures of impact: is anyone building on their work? have MIRI-specific claims begun filtering out? are non-affiliated academics starting to move into the AI risk field? Heck, even citation counts would probably be better here.
Is this an “arguments as soldiers” thing? Compare an isomorphic argument: “how did medicine get done for the centuries before antibiotics.”
Leaving aside that this an argument from authority, there is also selection bias here: peer review may well not be crucial—if you happen to be of Einstein’s caliber. But: “they also laughed at Bozo the Clown.” I am sure plenty of Bozos are enraged at peer review too, unjustly rejecting their crap.
There is a stochastic element to peer review, but in my experience it works remarkably well, given what it is. Good papers are very likely to get a fair shake and get published. I routinely get very penetrating comments that greatly improve the quality of the final paper. I almost always get help with scholarship from reviewers (e.g. this is probably a good paper to cite.) A bigger issue I saw was not chance, but ideology from reviewers. I very occasionally get bad reviews (<5% chance) and associate editors (people who handle the paper and assign reviewers) are almost always helpful in such cases.
I asked you this before, gwern, how much experience with actual peer review (let’s say in applied stats journals, as that is closest to what you do) do you have?
Absolute numbers are kind of useless here. Do you have some work in mind on false positive and false negative rates for peer review?
I don’t think we disagree here, I think this is a form of peer review. I routinely do this with my papers, and am asked to look over preprints by others. I think this is fine for certain types of papers (generally very specialized or very large/weighty ones).
The worry is MIRI’s conception of what a “peer” is basically ignores the wider academic community (which has a lot of intellectual firepower), so they end up in a bubble. The other worry is people who worry about getting tenured are incentivized to be productive (albeit imperfectly). MIRI is not incentivized to be productive except in some vague “saving the world” sense. And indeed, MIRI appears to be remarkably unproductive by academic standards. The guy who really calls the shots at MIRI, EY, has not internalized academic norms and appears to be fairly hostile to them.
Honestly, you sound a bit angry about peer review.
That’s not isomorphic. To put it bluntly, medicine didn’t. It only started becoming net beneficial extremely recently (and even now tons of medicine is harmful or a pure waste), based on copying a tremendous amount of basic science like biology and bacteriology and benefitting from others’ discoveries, and importing methodology like randomized trials (which it still chafes at) and not by importing peer review. Up until the very late 1800s or so, you would have been better off often ignoring doctors if you were, say, an expecting mother wondering whether to give birth in a hospital pre-Semmelweis. You can’t expect too much help from a field which published its first RCT in 1948 (on, incidentally, an antibiotic).
I include it as a piquant anecdote since you seem to have no interest in looking up any of the statistical evidence on the unreliability and biases (in the statistical senses) of peer review, or the absence of any especial evidence that it works.
That is not what I am saying. I am saying, ‘if you think MIRI is Bozo the Clown, get a photograph of its leader and see if he has a red nose! See if his face is suspiciously white and the entire MIRI staff saves a remarkable amount on gas purchases because they can all fit into one small car to run their errands! Don’t deliberately look away and simply listen for the sound of laughter! That’s a terrible way of deciding!’
No, they’re not, or at the very least, you need to modify this to, ‘after being forced to repeatedly try solely thanks to the peer review process, a good paper may still finally be published’. For example, in the NIPS experiment, most accepted papers would not have been accepted given a different committee. Unsurprisingly! given low inter-rater reliabilities for tons of things in psychology far less complicated, and enormous variability when n=1 or 3.
Yes, any of it. They all say that peer review is not a little but highly stochastic. This isn’t a new field by any means.
I have little first-hand experience; my vitriol comes mostly from having read over the literature showing peer-review to be highly unreliable, and biased, from the unthinking respect and overestimation of it that most people give it, being shocked at how awful many published studies are despite being ‘peer reviewed’, and from talking to researchers and learning how pervasive bias is in the process and how reviewers enforce particular cliques & theories (some politically-motivated) and try to snuff opposition in the cradle.
The first represents a huge waste of time; the second hinders scientific progress directly and contributes to one of the banes of my existence as a meta-analyst, publication bias (why do we have a ‘grey literature’ in the first place?); the third is seriously annoying in trying to get most people to wake up and think a little about the research they read about (‘but it’s peer-reviewed!’); and the fourth is simply enraging as the issue moves from an abstract, general science-wide problem to something I can directly perceive specifically harming me and my attempts to get accurate beliefs.
(Well, actually I think my analysis of Silk Road 2 listings is supposed to be peer-reviewed, but the lead author is handling the bureaucracy so I can’t say anything directly about how good or bad the reviewers for that journal are, aside from noting that this was a case of problem #4: the paper we were responding to is so egregiously, obviously wrong that the journal’s reviewers must have either been morons or totally ignorant of the paper topic they were supposed to be reviewing. I’m still shocked & baffled about this: how does an apparently respectable journal wind up publishing a paper claiming, essentially, that Silk Road 2 did not sell drugs? This would have been caught in a heartbeat by any kind of remotely public process—even one person who had actually used Silk Road 1 or 2 peeking in on the paper could have laughed it out of the room—but because the journal is ‘peer reviewed’… Pace the Gell-Mann Amnesia effect, it makes me wonder about all the papers published about topics I am not so knowledgeable about as I am on Silk Road 2 and wonder if I am still not cynical enough.)
Yes, I have no objection to ‘peer review’ if by what you mean is all the things I singled out as opposed to, and prior to, and afterwards, the institution of peer review: having colleagues critique your work, having many other people with different perspectives & knowledge check it over and replicate it and build on it and post essays rebutting it—all this is great stuff, we both agree. I would say replication is the most important of those elements, but all have their place.
What I am attacking is the very specific formal institutional practice of journals outsourcing editorial judgment to a few selected researchers and effectively giving them veto power, a process which hardly seems calculated to yield very good results and which does not seem to have been institutionalized because it has been rigorously demonstrated to work far better than the pre-existing alternatives (which of course it wasn’t, any more than medical proposals at that time were routinely put through RCTs first, even though we know how many good-sounding proposals in psychology & sociology & economics & medicine go down in flames when they are rigorously tested), but—to go off on a more speculative tangent here—whose chief purpose was to simply make the bureaucracy of science scale to the post-WWII expansion of science as part of the Cold War/Vannevar Bush academic-military-government complex.
If this is the problem with MIRI, I think there are far more informative ways to criticize them. For example, I don’t think you need to rely on any proxies or filters: you should be able to evaluate their work directly and form your own critique of whether it’s any good or if it seems like a good research avenue for their stated goals.
Science is srs bsns. (I find it hard to see why other people can’t get worked up over things like publication bias or aging or p-hacking. They’re a lot more important than the latest outrage du jour. This stuff matters!)
Medicine was often harmful in the past, with some occasional parts that helped, e.g. amputating gangrenous limbs was dangerous and people died, but probably was still a benefit on net. Admiral Nelson had multiple surgeries and was in serious danger of infection and death afterwards, but he would have been a goner for sure without surgery.
Science was pretty similar, it was mostly nonsense with occasional islands of sense. It didn’t really get underway until, what, Francis Bacon wrote about biases and empiricism? That is not very long ago. The early “gentlemen scholars” all did informal peer review by sending their stuff to each other (they also hid discoveries from each other due to competition and egos, but this stuff happens today too).
Gwern, peer review is my life. My tenure case will be decided by peer review, ultimately. I do peer review myself as a service, constantly. I know all about peer review.
The burden of proof is on MIRI, not on me. MIRI is the one that wants funding and people to save the world. It’s up to MIRI to use all available financial and intellectual resources out there, which includes engaging with academia.
I really think you should moderate your criticism of peer review. Peer review for data analysis papers is very different from peer review for mathematics or theoretical physics. Fields are different and have vastly different cultural norms. Even in the same field, different conferences/journals may have different norms.
I do a lot of theory. When I do data analysis, my collabs and I try to lead by example. What is the point of being angry? Angry outsiders just make people circle the wagons.
This argument seems exactly identical to the argument for trepanning, even including the survivorship bias. (One of the suspected uses of trepanning was to revive people otherwise thought dead.)
While we’re looking at anecdotes, this bit of Nelson’s experience with surgery seems relevant:
I’m not sure I’d count that as a win for surgery, or evidence that he couldn’t have survived without it!
But this means that, unless you’re particularly good at distancing yourself from your work, you should expect to be worse at judging it than a disinterested observer. The classic anecdote about “which half?” comes to mind, or the reaction of other obstetricians to Semmelweis’s concerns.
Regardless, we would expect that, if studies are better than anecdotes, studies on peer review will outperform anecdotes on peer review, right?
It’s not identical because we know, with the benefit of hindsight, that amputating potentially gangrenous limbs is a good idea. The folks in the past had a solid empirical basis for amputations, even if they did not fully understand gangrene. Medicine was mostly, but not always, nonsense in the past. A lot of the stuff was not based on the scientific method, because they had no scientific method. But there were isolated communities that came up with sensible things for sensible reasons. This is one case where standard practices were sensible (there are other isolated examples, e.g. honey to disinfect wounds).
Ok, but isn’t this “incentive tennis?” Gwern’s incentives are clearer than mine here—he’s not a mainstream academic, so he loses out on status. So a “low motive” interpretation of the argument is: “your status castle is built on sand, tear it down!” Gwern is also pretty angry. Are we going to stockpile argument ammunition of the form “you are more biased when evaluating peer review because of [X]”?
For me, peer review is a double edged sword—I get papers rejected sometimes, and at other times I get silly reviewer comments, or editors that make me spend years revising. I have a lot of data both ways. The point with peer review is I sleep better at night due to extra sanity checking. Who sanity-checks MIRI’s whiteboard stuff?
A “low motive” argument for me would be “keep peer review, but have it softball all my papers, they are obviously so amazing why can’t you people see that!”
A “low motive” argument for MIRI would be “look buddy, we are trying to save the world here, we don’t have time for your flawed human institutions. Don’t you worry about our whiteboard content, you probably don’t know enough math to understand it anyways.” MIRI is doing pretty theoretical decision theory. Is that a good idea? Are they producing enough substantive work? In standard academia peer review would help with the former question, and answering to the grant agency and tenure pressure would help with the second. These are not perfect incentives, but they are there. Right now there are absolutely no guard rails in place preventing MIRI from going off the deep end.
Your argument basically says not to trust domain experts, that’s the opposite of what should be done.
Gwern also completely ignores effect modification (e.g. the practice of evaluating conditional effects after conditioning on things like paper topic). Peer review cultures for empirical social science papers and for theoretical physics papers basically have nothing to do with each other.
I would put the start of solid empirical basis for gangrene treatment at Middleton Goldsmith during the American Civil War (dropping mortality from 45% to 3%), about sixty years after Nelson.
I think this is putting too much weight on superficial resemblance. Yes, gangrene treatment from Goldsmith to today involves amputation. But that does not mean amputation pre-Goldsmith actually decreased mortality over no treatment! My priors are pretty strong that it would increase it, but going into details on my priors is perhaps a digression. (The short version is that I take a very Hansonian view of medicine and its efficacy.) I’m not aware of (but would greatly appreciate) any evidence on that question.
(To see where I’m coming from, consider that there is a reference class that contains both “trepanning” and “brain surgery” that seems about as natural as the reference class that includes amputation before and after Goldsmith.)
But this only makes sense if peer review actually improves the quality of studies. Do you believe that’s the case, and if so, why?
I think my argument is domain expert tennis. That is, I think that in order to evaluate whether or not peer review is effective, we shouldn’t ask scientists who use peer review, we should ask scientists who study peer review. Similarly, in order to determine whether a treatment is effective, we shouldn’t ask the users of the treatment, but statisticians. If you go down to the church/synagogue/mosque, they’ll say that prayer is effective, and they’re obviously the domain experts on prayer. I’m just applying the same principles and same level of skepticism.
I am not sure what the relevance of either of these are. If anything, the latter suggests that we need to make the case for peer review field by field, and so proponents have an even harder time than they do without that claim!
I think treating gangrene by amputation was well known in the ancient world. Depending on how you deal w/ hemorrhage/complications you would have a pretty high post-surgery mortality rate, but the point is, it is still an improvement on gangrene killing you for sure.
Actually, while I didn’t look into this, I expect Jewish and Greek surgeons would have been pretty good compared to medieval European ones.
I don’t have data from the ancient world :). But mortality from gangrene if you leave the dead tissue in place is what, >95%? Amputation didn’t have to be perfect or even very good, it merely had to do better than an almost certain death sentence.
Well, because peer review would do things like say “your proof has a bug,” “you didn’t cite this important paper,” “this is a very minor modification of [approach].” Peer review in my case is a social institution where smart knowledgeable people read my stuff.
You can say that’s heavily confounded by your field, the types of papers you write (or review), etc., and I agree! But that is of little relevance to gwern, he thinks the whole thing needs to be burned to the ground.
Not following. The claim “peer review sucks for all X,” is stronger than the claim “peer review sucks for some X.” The person making the stronger claim will have a harder time demonstrating it than the person making the weaker claim. So as a status quo defender, I have an easier time attacking the stronger claim.
I think you missed the meat of my claim; yes, al-Zahrawi said to amputate as a response to gangrene, but that is not a solid empirical basis, and as a result it is not obvious that it actually extended lifespans on net. We don’t have the data to verify, and we don’t have reason to trust their methodology.
Now, maybe gangrene is a case where we can move away from priors on whether archaic surgery was net positive or net negative based on inside view reasoning. I’m not a doctor or a medical historian, and the one place I can think of to look for data (homeopathic treatment of gangrene) doesn’t seem to have any sort of aggregated data, just case reports of survival. Perhaps an actual medical historian could determine it one way or the other, or come up with a better estimate of the survival rate. But my guess is that 95% is a very high estimate.
I could, but why? I’ll simply point out that is not science, and that it’s not even trying to be science. It’s raw good intentions.
Suppose that the person on the street thinks that price caps on food are a good idea, because it would be morally wrong to gouge on necessities and the poor deserve to be able to afford to eat. Then someone comes along and points out that the frequent queues, or food shortages, or starvation, are a consequence of this policy, regardless of the policy’s intentions.
The person on the street is confused—but food being cheap is a good thing, why is this person so angry about price caps? They’re angry because of the difference between perception of policies and their actual consequences.
The claim I saw you as making is that peer review’s efficacy in field x is unrelated to its efficacy in field y. If true, that makes it harder for either of us to convince the other in either direction. I, with the null hypothesis that peer review does not add scientific value, would need to be convinced of peer review’s efficacy in every field separately. The situation is symmetric for you: your null hypothesis that peer review adds scientific value would need to be defeated in every field separately.
Now, whether or not our null hypothesis should be efficacy or lack of efficacy is a key component of this whole debate. How would you go about arguing that, say, to someone who believed that prayer caused rain?
Why do you suppose he said this? People didn’t have Bacon’s method, but people had eyes, and accumulated experience. Neolithic people managed, over time, to figure out how all the useful plants in their biome are useful; how did they do it without science? “Science” isn’t this thing that came on a beam of light once Bacon finished his writings. Humans had bits and pieces of science right for a long time (heck, my favorite citation is a two-arm nutrition trial in the Book of Daniel in the Old Testament).
We can ask a doc, but I am pretty sure post-wound gangrene is basically fatal if untreated.
What is not science? My direct experience with peer review? “Science” is a method you use to tease things out from a disinterested Nature that hides the mechanism, but spits data at you. If you had direct causal access to a system, you would examine it directly. If I have a computer program on my laptop, I am not going to “do science” to it, I am going to look at it and see what it does.
Note that I am only talking about peer review I am familiar with. I am not making claims about social psychology peer review, because I don’t live in that world. It might be really bad—that’s for social psychologists to worry about. In fact, they are doing a lot of loud soul searching right now: system working as intended. The misdeeds of social psychology don’t really reflect on me or my field, we have our own norms. My only intersection with social psychology is me supplying them with useful mediation methodology sometimes.
I expect gwern’s policy of being really angry on the internet is going to have either a zero effect or a mildly negative effect on the problem.
The consequences of peer review for me, on the receiving end, are that generally people improve my paper (and sometimes are picky for silly reasons). The consequences of peer review for me, on the giving side, are that I reject shitty papers, and make good and marginal papers better. I don’t need to “do science” to know this, I can just look at my pre-peer-review and my post-peer-review drafts, for instance. Or I can show you that the paper I rejected had an invalid theorem in it.
I am making the claim that people who want to burn the whole system to the ground need to realize that academia is very large, and has very different social norms in different corners. A unified criticism isn’t really possible. Egregious cases of peer review are not hard to find, but that’s neither here nor there.
On the subject of medical advice, Scott and Scurvy reminded me of this conversation.
Sure. I think al-Zahrawi got observational evidence, but I think that there are systematic defects in how humans collect data from observation, which makes observational judgments naturally suspect. That is, I’m happy to take “al-Zahrawi says X” as a good reason to promote X as a hypothesis worthy of testing, but I am more confident in reality’s entanglement with test results than proposed hypotheses.
I very much agree that science is some combination of methodology and principles which was gradually discovered by humans, and categorically unlike revealed knowledge, whose core goal is the creation of maps that describe the territory as closely and correctly as possible. (To be clear, science in this view is not ‘having that goal,’ but actions and principles that actually lead to achieving that goal.)
I asked history.stackexchange; we’ll see if that produces anything useful. Asking doctors is also a good idea, but I don’t have as easy an in for that.
Not quite—what I had in mind as “not science” was confusing your direct experience with peer review and evaluation of the intentions as a scientific case for peer review.
Right now, sure, but we got onto this point because you thought not publishing with peer review means we can’t be sure MIRI isn’t wasting donor money, which makes sense primarily if we’re confident in peer review in MIRI’s field.
Eh. While I agree that being angry on the internet is unsightly, it’s not obvious to me that it’s ineffective at accomplishing useful goals.
“Whole system” seems unclear. It’s pretty obvious to me that gwern wants to kill a specific element for solid reasons, as evidenced by the following quotes:
Would you agree that some parts of the system should be burned to the ground?
Peer review seems like a form of costly signalling. If you pass peer review, it only demonstrates that you have the ability to pass peer review. On the other hand, if you don’t pass peer review, it signals that you don’t have even this ability. (If so much crap passes peer review, why doesn’t your research? Is it even worse than the usual crap?)
This is why I recommend to treat “peer review” simply as a hoop you have to jump through, otherwise people will bother you about it endlessly. To remove the suspicion that your research is even worse than the stuff that already gets published.
Mostly by well-off people satisfying their personal curiosity. Other than that, by finding a rich and/or powerful patron and keeping him amused :-D
I agree that the cult of peer review is overblown. But does MIRI produce any relevant and falsifiable output at all?
I would answer differently than you: “Very inefficiently and with lots of errors”.
As opposed to quick, reliable present-day peer-reviewed science? ;-)
Well, not that this has changed...
What leads you to that conclusion? When do you think peer review began and how do you judge efficiency before and after?
I think this isn’t really cutting to the heart of things—which seems to be ‘reputation among intellectuals,’ which is related to ‘reputation among academia,’ which is related to ‘journal articles survive the peer review process.’ It seems to me that the peer review process as it exists now is a pretty terrible way of capturing reputation among intellectuals, and that we could do something considerably better with the technology we have now.
Anyone suggested a system based on blockchain yet? X-)
I imagine a system where new Sciencecoins could be mined by doing valid scientific research, but then they could be used as a usual cryptocurrency. That would also solve the problem of funding research. :D
I think there’s definitely not enough thought given to this, especially when they say one of the main constraints is getting interested researchers.
Isn’t it “cultish” to assume that an organization could do anything better than the high-status Academia? :P
Because many people seem to worry about publishing, I would probably treat it as another form of PR. PR is something that is not your main reason to exist, but you do it anyway, to survive socially. Maximizing academic article production seems to fit here: it is not MIRI’s goal, but it would help to get MIRI accepted (or maybe not) and it would be good for advertising.
Therefore, AcademiaPR should be a separate department of MIRI, but it definitely should exist. It could probably be done by one person. The job of the person would be to maximize MIRI-related academic articles, without making it too costly for the organization.
One possible method that didn’t require even five minutes of thinking: Find smart university students who are interested in MIRI’s work but want to stay in academia. Invite them to MIRI’s workshops, and make them familiar with the things MIRI is doing but doesn’t care about publishing. Then offer to make them co-authors: they take the ideas, polish them, and get them published in academic journals. MIRI gets publications, the students get a new partially explored topic to write about; win/win. Also known as “division of labor”.
Really? You can’t think of another reason to publish than PR?
I can.
But PR also plays a role here, and this is how to fix it relatively cheaply. And it would also provide feedback about what people outside of MIRI think about MIRI’s research.
I think the primary purpose of peer review isn’t PR, but sanity checking. Peer reviewed publications shouldn’t be a concession to outsiders, but the primary means of getting work done.
It seems that writing publishable papers isn’t easy.
Yes, GP’s is an extremely myopic and dangerous attitude.
One dictionary definition of academia is “the environment or community concerned with the pursuit of research, education, and scholarship.” By this definition MIRI is already part of academia. It’s just a separate academic island with tenuous links to the broader academic mainland.
MIRI is a research organization. If you maintain that it is outside of academia then you have to explain what exactly makes it different, and why it should be immune to the pressures of publishing.
Low-quality publications don’t get accepted and published. I know of no universities that would rather have a lot of third-rate publications than a small number of Nature publications. I’ll agree with you that things like impact factor aren’t good metrics but that’s somewhat missing the point here.
A very reasonable suggestion, and I’m not just saying that because I have a PhD. I’m saying it because it’s so easy to reinvent the wheel and think you’re doing original research when you’re really just re-discovering other people’s work in a different context. It’s very hard to root out these sorts of errors; when I was doing a PhD I thought the work I was doing in developmental biology was new and unique until about a year later I found that the ‘new’ mathematical problems I had solved had actually been widely used in polymer science for years. I just wasn’t able to find the research because none of the search terms matched.
A link to the wider academic community would do a lot to help with MIRI’s goals, and a very good way to do this would be undertaking PhDs. It should be a snap for the MIRI folks...
Do you have any ideas about how it could be made easier to find out whether you’re just rediscovering previous work?
Eliminate context, reduce problems to their abstract fundamentals, collaborate with other people who might have a chance of having been exposed to similar problems in other domains.
Julian Savulescu: The Philosopher Who Says We Should Play God
An interesting paper by the name of Fuck nuance.
Abstract:
No, I’m not kidding, this is the actual abstract at the beginning of the paper.
Technically, it’s about sociological theories, but I feel the general principle applies much more widely.
(Normally I would quote a teaser chunk of the paper here, but this PDF file seems unusually resistant to copy-and-paste-as-text and I don’t feel like manually inserting back all the spaces between the words...)
Nancy Leibowitz was quoting this. Having spent the weekend reading 20th century French philosophers, this was refreshing. From the paper:
It’s not a loose analogy. It’s a literal description of an example of the sort of thing that should happen in the reality underlying the theory.
There is another aspect to nuance that I don’t yet see mentioned in the paper. In French philosophy, the nuance is nuance of interpretation, not an attempt to handle more cases. Many theories are presented without having any cases at all that they handle! Jacques Lacan, for instance, only described one case history during his entire career; he presented detailed theories of personality development with no citations or data.
This happens with many who descend academically from Hegel: Marx, Lacan, Derrida. The model is not “nuanced” in the sense of handling many cases; it is never demonstrated to handle any data at all, or at best one over-simplified case (a general claim, or a particular sentence which the philosopher made up to illustrate the model). The nuance is all in the interpretation. It complexifies the theory without enabling it to handle any more cases—the worst of both worlds.
Thanks for mentioning that I’d already brought up the paper. I’ve got three quotes here.
My last name is Lebovitz.
I think of the way people tend to get it wrong as a rationality warning. I know about those errors because I have an interest in my name, but the commonness of the errors suggests that people get a tremendous amount wrong. How much of it matters? How could we even start to find out?
Sorry for misspelling your name. I don’t think memory errors are rationality errors.
Memory errors have a bearing on rationality because you need accurate data to think about, and one of the primary causes of not remembering something is not having noticed it.
I can say my name twice, spell it, and show people a business card, and still have them get it wrong.
If you want more about how little people perceive, I recommend Sleights of Mind, a book about neurology and stage magic.
Judging by the particular way you mis-spelled the name, I’d guess your memory is more auditory in nature?
It’s not a memory error, it’s a hasty pattern-match error.
Excellent point. These errors are fairly common. When I use this username, I somewhat frequently see people write it as brettel. I guess that means that they interpret it as brett-el, when in reality it’s b-trettel. I can understand this.
Eh, you’re lucky. I always read ‘malcolmocean’ as ‘macromole—wait’.
I think it’s a strong-prior error. There are many different spellings, one or two letters apart, and I pick the one I’ve seen most often.
I agree that it’s a pattern-match error, but I think I’d classify that as a type of memory error.
I think of memory errors as retrieving something other than what was stored. In this case I doubt people “stored” your name correctly—most likely they interpreted it wrong to start with. It’s a perception error, then.
Gwern rubbishes longevity research.
I think he’s talking about the dream of achieving indefinite numbers of healthy years.
However, there are some people who live into their 90s in pretty good health, and they’re far from the majority. What’s the likelihood of just making good health into one’s 90s much more likely? I’m not talking about lifestyle improvement—I’m talking about some technological fix.
So, he’s specifically talking about the failures of previous longevity research. It seems to me that modern longevity research has portions that are considerably better (among other things, the reductionistic view appears to be the dominant view among the top researchers). Consider this section in particular:
That Stambler spent too little time on whether or not they actually got the science right / pushed in the right or wrong direction, and spent too much time focusing on their political persuasion, strikes me as highly relevant and interesting when it comes to scientific history (and the modern versions—namely, choosing who to fund or not, and what experiments to pursue or not).
Gwern also makes a more general claim that aging is too complex for any simple solution to be plausible.
I don’t think SENS is one of the simple approaches Gwern was referring to in context. The simple approaches are things like turning off a genetically coded “mortality switch,” lengthening telomeres, calorie-restriction mimetics, or just getting tons of antioxidants in your diet. Here’s a recent Aubrey de Grey interview.
Could a moderator please nuke the swidon account and all of its posts?
The account is nuked. I need to find out how to remove posts.
agreed.
Someone changed the password on the Username public throwaway account. It’s a shame a troll finally got to it after several years.
It’s worth contacting a moderator and seeing whether they can do anything about it.
Even if they set the password, it’s in the nature of a public account that the password can always be changed.
How about making the password reset automatically every X minutes?
I actually meant to ask at some point whether the Username account would have protection against people changing passwords willy-nilly, but I didn’t because, you know… information hazards and all that. Didn’t want to give people the idea. But now that it’s happened, I suppose I could ask retrospectively: how come nobody ensured some protection against that?
Because in general a forum that’s designed to allow anonymous comments would allow anonymous comments and not make people go through the hack of using a separate account for it. The account wasn’t created by any moderator, but simply by a user who thought that such an account would be good to have.
While we’re in infohazard territory: it’s not only possible to change passwords. It’s also possible to delete accounts.
Then nuke the account and recreate it with the old password.
I always assumed that was just one person. I feel like someone died. (Not really. But, how was I supposed to know it was an open account?)
The beauty of the account lay in the fact that it was not publicized, so only people who were long-time lurkers would know about it.
Far out, that was an excellent account and several people had clearly used it to make important contributions.
It would be nice if there was a way to memorialize the posts or something externally. Or, perhaps the moderators could implement an ‘official’ throw-away to protect against this.
I have been a beneficiary of comments from the Username account and believe it does...or did a true service to the community. Thank you for taking it upon yourself to report this and making a new account.
While I had no objection to the existence of the account and in fact used it several times myself, it was a bit annoying to me that someone was using it as his personal account rather than bothering to create his own.
This was a productive use of my time—a panel with Peter Thiel, Aubrey de Grey (who I don’t know) and Eliezer Yudkowsky.
Solving a Non-Existent Unsolved Problem: The Critical Brachistochrone
I think you were the person using the username account to post in this style. Thank you for making an account and welcome :)
“Do Artificial Reinforcement-Learning Agents Matter Morally?” Yes, says Brian Tomasik, even present-day ones (by a very small but nonzero amount). He foresees their ethical significance increasing in the near future, and he isn’t talking about strong AI, but an increase in the ordinary applications of reinforcement learning to our technology.
The argument is, briefly: for various claims about what consciousness physically is, RL programs display these features to some extent as well. Therefore they have a nonzero degree of consciousness, and so a nonzero degree of moral standing. Enough that we should be thinking now about guidelines for the ethical creation of such software.
He suggests that, paralleling guidelines for the use of animals in research, RL algorithms should be replaced by others whenever possible, or if they must be used, reduced in number, and driven through rewards, not punishments.
He considers the idea of an organisation of People for the Ethical Treatment of Reinforcement Learners, and the embedding of RL algorithms in humanoid bodies and videogame characters as ways of persuading the public to the idea that they have moral significance.
I would be much more morally concerned about reinforcement learning agents if this were a functional distinction.
He discusses that point in the paper.
I’m looking for a high quality parenting blog, one with relatively frequent well written content and which might accept guest contributions—or one with a discussion forum that’s not just gossiping. Can be English-speaking or German. I’d like to try my hand at some posts before opening my own blog. Any ideas?
Tweet Sized Insight Porn
Hope LW likes it. Open for tweet suggestions.
What hypothesis are you testing, or is gnawing at the back of your mind, in relation to LessWrong, as you surf LessWrong right now? Or perhaps you’re just surfing idly.
For me it’s: Has anyone experimented with replacing their socialising-with-friends time with LessWrong exclusively? I wonder if the benefits associated with socialising, such as increased well-being, can be provided by interaction in online communities instead.
Though, I suspect the nature of the community would be a strong determinant of the outcome. For instance, Facebook would probably be unhealthy, as would IRC exclusively, but the LessWrong community as a whole (excluding the IRL meeting community) may be great! I feel like I’ve basically outgrown all my friends who I don’t have some sort of professional relationship with anyway, or towards whom I have a codependent/insecure attachment.
In digital markets with extremely quick liquidity like the stock exchange, is investing based on macroeconomic factors and megatrends foolhardy? Is it only sensible to invest when one has privileged information, including via analysis of public data at a level no one else has done?
Unpack the question. What do you mean by “foolhardy”? What is your next-best option for your money?
In almost all cases, you should opt not to make a wager on a topic where you are at an information disadvantage. However, investments are not purely a wager—they’re also direction of capital and sharing of risk (and reward) with for-profit organizations. It’s quite possible that you can lose the wager part of your investment and still do fairly well on the long-term rewards of corporate shared ownership.
One shouldn’t expect to systematically beat the market without privileged information. But even “trying to beat the market” (depending on what exactly that strategy entails) or doing what you describe is often better than what most people do in terms of actually growing their savings. Financial securities (especially stocks) have high enough long-run expected returns such that a “strategy” of routinely accidentally slightly overpaying for them and holding them still results in a lot more money than not investing at all.
Not investing is far worse than shoving your money into random stocks and committing to reinvest all dividends for the next 50 years.
Is there absolute utility maximisation in portfolio diversification, or is it just a risk control mechanism? Could I pick one random stock and put a whole lot of money in it? I suspect I may be misapplying the law of large numbers here (or committing the gambler’s fallacy).
Look at Kelly Betting for some information on why “risk control” is utility maximization.
Presuming you have declining marginal utility for money, picking one random stock gives you the same average/expected monetary outcome, but far lower utility.
If you’re not familiar with it, you should check out www.bogleheads.com for investment/finance advice.
(Not trying to discourage you from discussing this here… just that if you don’t know bogleheads, it’s quite valuable)
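To make the declining-marginal-utility point concrete, here is a minimal sketch (my own illustration, not from the thread), assuming a toy model in which each stock independently doubles or halves your stake with equal probability. Both portfolios have the same expected wealth, but the diversified one has higher expected log-utility:

```python
# Toy illustration: same expected wealth, different expected log-utility.
# Assumes each stock independently doubles or halves with probability 1/2.
import math
import random

random.seed(0)

def avg_log_utility(n_stocks, trials=100_000, start=100.0):
    """Average log-utility of splitting `start` evenly across n_stocks."""
    total = 0.0
    for _ in range(trials):
        wealth = sum((start / n_stocks) * random.choice([2.0, 0.5])
                     for _ in range(n_stocks))
        total += math.log(wealth)
    return total / trials

print("1 stock  :", round(avg_log_utility(1), 3))   # ~4.61 (log 100)
print("20 stocks:", round(avg_log_utility(20), 3))  # close to log 125 ~ 4.82
```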
It’s purely for risk control, but most people are extremely loss averse and so do well to diversify.
You could. It’s a bet with positive expectation and a really risky one. But people do much dumber things with their money. Having said that, I’d recommend an index fund instead if you’re plopping a whole lot of money in.
Famous neurologist and science popularizer Oliver Sacks has died. Which of his books are your favorites?
Awakenings is a perennial favorite, a cohort of people with severe Parkinsonism given levodopa all at once (and going through the several month long process of becoming nearly completely functional with the quirks that come from excess dopamine, then their brains slowly losing homeostasis in the face of the exogenous uncontrolled neurotransmitters).
Seeing Voices, a look into the perceptions of the deaf and the nuances of signed languages, was fascinating to me.
The macro/micro validity tradeoff
Inability and Obligation in Moral Judgment
One of my professors claimed that postmodernism, and particularly its concept of “no objective truth”, is responsible for much of the recent liberalism of society, through the idea of “live and let live”. (Specific examples given were attitudes towards legalization of gay marriage and drugs.) I pointed out that libertarianism and liberalism predated postmodernism historically, and they said that that’s true, but you can still trace the popularity back to postmodernism.
Is this historically accurate? If not, is there something I can point to that would convince them? It seems to me that the shift in society is much more a shift on the object level questions than on the meta level “should we ban things we disagree with”, but I don’t know very much recent history of philosophy (it isn’t strictly their field either, so I’m justified in not taking them at face value).
Edit: re-asked on latest OT here
I don’t know about history, but this reminds me of a “valley of bad rationality”. Assuming that the historical hypothesis is true, I would treat it as just another example that if your belief system is sufficiently insane, another false belief does not necessarily make it worse, and could actually neutralize some more harmful beliefs. If your map is worse than noise, even beliefs like “there is no reality” could improve your thinking.
Here’s one for the “life pro tips” category since Less Wrong users are mostly male. It seems as though the best way to deal with balding is to catch it as early as possible, because that’s the time drug treatments (well Finasteride at least) are most effective. Of the “big 3” baldness treatments, ketoconazole shampoo is available over the counter and has few side effects reported online. (It’s also used as an anti-dandruff shampoo.) (EDIT: Looks like it is not recommended to take orally, although I don’t see anyone saying that topical application carries risks. Here’s a study saying it’s about as effective as minoxidil?) I recently noticed that my hairline has receded ever so slightly… after doing some research, I bought some ketoconazole shampoo and am planning to start using it. This brand seems to have fewer bad experience reports and fewer shill reviews on Amazon than other brands. Thoughts? (BTW although it’s the safest, ketoconazole also seems to be the least effective of the balding treatments… you should probably hop on the Finasteride if you have a serious problem. More info.)
BTW, there’s the ‘Boring advice repository’, consider cross-posting or linking to this there, so that it would not get lost.
Catching it early is important for sure. I’ve been using minoxidil for 3 years since my early twenties and my hairline has not receded at all since then, but it also hasn’t recovered much. The generic minoxidil is quite cheap, I pay about 40 dollars a year.
Edit: I haven’t tried Finasteride as I hear rumors of awful sexual symptoms.
How to perform surgery on yourself with Clarity
I do irrational things. The other day I bought a flight interstate, somewhat impulsively, to a conference I knew next to nothing about, for complicated reasons. Instant regret, but the cancellation fee is about half the price of the ticket. I also got some art professionally designed for a few hundred dollars, that I didn’t need or want. I’ve also lost thousands gambling and on the stock exchange. I’m stupid in many ways, but I’m also capable enough to be able to share insights from the other side of sanity with the real world, or so I’d like to think. There are some things which I do that aren’t rational, for which the term irrational isn’t very useful, in the same way that people can be ‘not even wrong’, perhaps. But enough self-indulgent psychopity and self-handicapping.
I’m finding it hard recently to concentrate on anything other than surgery—particularly self-surgery and how and why I ought to perform it. But, I’m not a surgeon. And, for this to be rational I ought to have a terminal goal. I don’t have one. In fact, at best I can rationalise that in case I get in a survival situation and have no one to help, I can do it myself. But, that’s extremely unlikely. It’s not even rationalisation since I haven’t made the decision, it’s merely optimism. Being crazy is hard, so looking on the bright side keeps me from feeling like killing myself. At least this new found interest is somewhat amusing and something that is somewhat learnable. Sometimes I get interested in areas for which I have nowhere near the pre-requisite knowledge to understand, often some technical something in economics or computer science. In those cases, I just end up learning things incorrectly. At least with surgery, it’s somewhat of a practical skill and medical students are often taught things superficially (this leads to this, or this is connected to that) rather than, say, rigorously (this is proven by that theorem, or demonstrated by this experiment). To celebrate my 100 karma (and it was a difficult journey!) I just thought I would document this experience and what I’m compelled to research, to give the more rational among you some insight into what it’s like to be on the far other side of rationality, and aware of it.
See examples of self-surgery for inspiration. Examples
people who do it are heroic. Don’t be half-assed
desensitise yourself by snooping on actual surgeries. From experience in psychiatric wards, it shouldn’t be very hard to sneak into surgical viewing theatres. Minimal social engineering required. Hospitals are shocking with security. (Note: Don’t actually do this. Remember, this is just to explain my thinking process which, as I mentioned, is off the beaten path of sensibility.)
read this guide which is the only guide to self-surgery I can find. Though it suggests reading textbooks, the medical textbooks in the surgery section of my local university’s library don’t seem to be very useful at all in actually how to do surgery. Maybe one has to learn how to do it by watching.
Ok. At this point. Looks like I’ve somehow managed to overcome this little excursion from sensibility. I don’t really care for self-surgery anymore. My testicles feel kinda sore for no apparent reason, but it feels good knowing that at least they’re there and not in a medical waste bin instead.
In the spirit of radical honesty, I’m going to be posting this highly embarrassing comment then try not to think about it. Certainly won’t be my most embarrassing post so far.
Voted up for honesty.
Do you know anything about the difference between the times when your irrational impulses fade and the times when you act on them?
Ahh, the miracle question. I had forgotten about those. Thank you for asking.
My answer is currently no.
Here’s what I currently suspect, but I don’t have the presence of mind to be confident in this assessment. I’m particularly vulnerable to gambling and to sexual and aesthetic impulses, like compulsively listening to music or staring at art. For instance, I just recently signed up for an international share trading account because I intended to bet about 1⁄4 of my assets (yes, I still am not convinced by either the Kelly criterion or modern portfolio theory, since no free lunches!) on this one stock I had very little knowledge of. Luckily for me, it takes 5 days to process the int. trading account application, and I found it hard to get my mind off the stock, so I started looking up more in-depth information and realised it’s not the undervalued, cheap, super awesome stock I thought it would be.
When I’m with people, I also tend to be less goal-oriented and give in to impulses more readily. Another consideration for me is whether these impulses are the same class as, say, the surgical impulse, since that sounds more delusional than impulsive. None of these categorisations are clear. You’ve inspired me to sit down properly in the near future and map out different behaviours, then try to summarise underlying commonalities and potential control measures (note to self).
The times when an irrational impulse fades, in contrast, are times when I can use strict decision theoretic tools to explain to myself why it’s irrational. That’s why LessWrong is my scaffold out of insanity. If I can analyse a particular scenario and see that one particular choice dominates another, or I can model a particular impulse as my tendency to compensate for a sunk cost when I ought to be thinking at the margin, for instance, I can grit my way out of it.
Perhaps things are hardest when I’m dealing with extremely high subjective value options (e.g. jerking off to porn when I’m really horny), or betting a whole lot of money; I get carried away. Temporally, I discount at several orders of magnitude above hyperbolic, perhaps. But honestly, I don’t really know. I’m just chucking intuitions into this comment box. I’ll probably add to this answer at some point for my own reference.
As an aside, I saw your comment this morning and was thinking about it in the shower. Recalling the ‘miracle question’ approach to problem solving made me feel empowered. Later, I listened to a song I hadn’t heard in a while just before going into the shower and realised that it would motivate me to linger less in there because I anticipated the joy of continuing to listen to it after I got out. Then I thought about how I could suggest that approach to others who had trouble limiting their shower time, and was grateful that there are places where I could share that information. At that point, I realised that my mood and anxiety had lifted a bit, which I attributed to that sequence of events, cascading from you. I suspect increased self-trust in my ability to handle problems is at the heart of this (so I’ll add that to my mental health checklist in the other thread sometime). So thank you! I’m going to be investigating how I can replicate this again. I did mess it up a bit by feeling very self-congratulatory, then ruminating for a while and ultimately not getting out of the shower as promptly as perhaps possible, but hopefully that won’t occur in the future.
How do you get from “no free lunches” to disagreement with either Kelly or portfolio theory?
No free lunches & MPT
I could enunciate it, but Wikipedia has an explanation. I honestly don’t understand the Wikipedia explanation, but I would expect that it explains my intuitions in a more technical way than I do. If you have a specific point of disagreement, I’m happy to map out my logic and explore the evidence with you. I vaguely remember reading an article on the topic, too.
Optimal bet sizing and expected utility
I’d expect a theorem that maximises utility via diversification to entail some prediction that the utility of subsequent/other/more investments will be greater than the utility of the first/reference investment. If that isn’t the case, diversifying will lower the average expected utility of one’s portfolio. I don’t see the rationale behind the Kelly criterion as it relates to any of my existing knowledge about maximising utility.
MPT: How can I have a specific point of disagreement with something as nonspecific as “I am not convinced by modern portfolio theory because no free lunches”? The particular bit of the Wikipedia article you linked to actually says (correctly, so far as I can see) that minimising unsystematic risk through diversification (as indicated by MPT) is “one of the few free lunches available”, because unsystematic risk isn’t associated with higher expected returns.
Kelly: Actually most of the paragraph ostensibly about this seems to be still about MPT. Anyway, I’m afraid your expectation is just wrong. Diversifying can be a win even if what you diversify with is (on its own) lower-utility. Suppose someone offers you a bet that will pay you $1M if some event E occurs and cost you $900k if not, and suppose you reckon E very close to 50% likely. You probably don’t take that bet, because losing $900k would hurt you more than gaining $1M would help you. Now someone else offers you another bet, where you stand to gain $950k and lose $900k. Clearly you don’t take that bet either, and clearly it’s worse than the first. But now suppose the first bet pays you when E happens and the second pays you when not-E happens. The two bets together are a guaranteed >=$50k gain; provided you trust your counterparties, you should absolutely take them. So adding the second bet helped you even though on its own it was worse than the first.
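For concreteness, a tiny sketch in Python checking the figures above (the `payoff` helper is just my illustration, not anything from the thread):

```python
# Bet 1: +$1,000,000 if E, -$900,000 if not-E.
# Bet 2 (taken on the opposite side): +$950,000 if not-E, -$900,000 if E.
def payoff(e_happens: bool) -> int:
    bet1 = 1_000_000 if e_happens else -900_000
    bet2 = -900_000 if e_happens else 950_000
    return bet1 + bet2

print(payoff(True))   # 100000
print(payoff(False))  # 50000 -> the combined position gains at least $50k either way
```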
Kelly, really: again I’m not sure what I can say to something as unspecific as “I don’t see the rationale”. I suppose I can briefly explain the rationale, so here goes. 1: if the utility you get from your money is proportional to log(amount), which may or may not be roughly true for you (I think it is for me), then placing a Kelly-sized bet is higher expected-utility than placing a bet of any other size at the same odds. (Assuming your utility is unaffected by the event the bet is on, other than through its effect on your wealth.) 2: your long-term wealth is maximized (with high probability, not just in expectation) by making all your bets Kelly-sized, so if your utility is strongly affected by your wealth in the long term and indifferent to the short term, then (almost regardless of exactly how utility depends on long-term wealth) you should place Kelly-sized bets.
Most people are more risk-averse than utility proportional to log wealth would justify. If you are, then your bets should be smaller than Kelly. Most people care about the short term as well as the long. If you do, then again your bets should generally be smaller than Kelly.
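A rough numerical sketch of point 1 (the numbers and the `expected_log_growth` helper are purely illustrative, not from the thread): with log utility, expected log growth peaks exactly at the Kelly fraction, and betting more than Kelly makes things worse.

```python
import math

p, b = 0.6, 1.0                      # assumed: 60% win chance, even-money payout
kelly = (p * (b + 1) - 1) / b        # Kelly fraction = p - (1 - p)/b = 0.2

def expected_log_growth(f: float) -> float:
    # Expected log of wealth after betting a fraction f of bankroll once.
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

for f in (0.05, 0.1, kelly, 0.3, 0.4):
    print(f"f={f:.2f}  E[log growth]={expected_log_growth(f):.5f}")
# The printed values peak at f = 0.20 (the Kelly fraction); larger bets lower
# expected log growth while also increasing variance.
```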
[EDITED some time after writing when I noticed a bunch of mobile-device autocorrect errors. Sorry.]
There was a guide online about all the factors to consider when being a medical professional in a place with no medical infrastructure. It was basically a “how to do everything” guide. I can’t recall the keyword or name to find it now, but it was online and free. Not sure if I should encourage you, but reading a lot more will probably satisfy your interest in the topic.
I’m interested in the guide and haven’t found it despite several related Google searches. Are you sure the guide wasn’t on a tangential topic?
An argument by Stephen Hsu that boosted-IQ humans will appear before Artificial Intelligence and will co-evolve with AI after that.
Seems to me these two things are incomparable in speed. Imagine that research in genetic engineering will allow us to make each generation have an IQ 20 points higher than the previous one. Could even such IQ-boosted humans compete with a superhuman AI which can rewrite its own source code?
Of course I am making many assumptions here, but the idea is that biological humans will probably still have to go through the cycle of birth and maturation, and face various biological constraints, while AI will not have these obstacles.
Is anyone willing to share an Anki deck with me? I’m trying to start using it. I’m running into a problem likely derived from having never, uh, learned how to learn. I look through a book or a paper or an article, and I find it informative, and I have no idea what parts of it I want to turn into cards. It just strikes me as generically informative. I think that learning this by example is going to be by far the easiest method.
There are many shared Anki decks. In my experience, the hardest thing to get right in Anki is picking the correct thing to learn, and seeing someone else’s deck doesn’t work all that well for that, because there’s no guarantee that they’re any good at picking what to learn, either.
Most of my experience with Anki has been with lists, like the NATO phonetic alphabet, where there’s no real way to learn them besides familiarity, and the list is more useful the more of it you know.
What I’d recommend is either picking selections from the source that you think are valuable, or summarizing the source into pieces that you think are valuable, and then sticking them as cards (perhaps with the title of the source as the reverse). The point isn’t necessarily to build the mapping between the selection and the title, but to reread the selected piece in intervals determined by the forgetting function.
Alright, I’ll be a little more clear. I’m looking for someone’s mixed deck, on multiple topics, and I’m looking for the structure of cards, things like length of section, amount of context, title choice, amount of topic overlap, number of cards per large scale concept.
I am really not looking for a deck that was shared with easily transferable information like the NATO alphabet; I’m looking for how other people do the process of creating cards for new knowledge.
I am missing a big chunk of intuition on learning in general, and this is part of how I want to fix it. I also don’t expect people to really be able to answer my questions on it, and I don’t expect that I’ve gotten every specification. Which is why I wanted the example deck.
Edit: So I can’t pull a deck off Ankiweb because I want the kind of decks nobody puts on Ankiweb.
I could send you some of my anki cards, but I don’t know that you’ll get useful structural information out of them. They tend to be pretty random bits that I think I’ll want to know or phrases I want to build associations between. For most things, I take actual notes (I find that writing things down helps me remember the shape of the idea better, even if I never look at them), and only make flashcards for the highest value ideas.
It took me several months of starting and quitting anki to start to get the hang of it, and I’m still learning how to better structure cards to be easier to remember and transmit useful information.
I found this blog post and the two it links to at the top to be useful descriptions of an approach to learning, which incorporates anki among other things.
Based on my own experience I strongly suspect the only way to do this is to fail repeatedly until you succeed. That said, the following rules are very, very good.
If you really, really want an example, I can send you my Developmental Psychology and Learning and Behaviour deck. It consists of the entirety of a Cliff’s Notes kind of Developmental Psychology book, a better dev psych book’s summary section, and an L&B book’s summary section. In retrospect the Cliff’s Notes book was a mistake, but I’ve invested enough in it now that I may as well continue with it; most of the cards are mature anyway. I would recommend finding a decent book on the topic you’re learning, writing your own summaries or heavily rewording their summaries, and using lots and lots of cloze deletions.
I just found this guide to using Anki.
http://alexvermeer.com/anki-essentials/
It may be worth looking at.
If you really want my deck pm me your email address.
http://super-memory.com/articles/20rules.htm
Here again are the twenty rules of formulating knowledge. You will notice that the first 16 rules revolve around making memories simple! Some of the rules strongly overlap. For example, “do not learn if you do not understand” is a form of applying the minimum information principle, which again is a way of making things simple:
1. Do not learn if you do not understand
2. Learn before you memorize—build the picture of the whole before you dismember it into simple items in SuperMemo. If the whole shows holes, review it again!
3. Build upon the basics—never jump both feet into a complex manual because you may never see the end. Well remembered basics will help the remaining knowledge easily fit in
4. Stick to the minimum information principle—if you continue forgetting an item, try to make it as simple as possible. If it does not help, see the remaining rules (cloze deletion, graphics, mnemonic techniques, converting sets into enumerations, etc.)
5. Cloze deletion is easy and effective—completing a deleted word or phrase is not only an effective way of learning. Most of all, it greatly speeds up formulating knowledge and is highly recommended for beginners
6. Use imagery—a picture is worth a thousand words
7. Use mnemonic techniques—read about peg lists and mind maps. Study the books by Tony Buzan. Learn how to convert memories into funny pictures. You won’t have problems with phone numbers and complex figures
8. Graphic deletion is as good as cloze deletion—obstructing parts of a picture is great for learning anatomy, geography and more
9. Avoid sets—larger sets are virtually un-memorizable unless you convert them into enumerations!
10. Avoid enumerations—enumerations are also hard to remember but can be dealt with using cloze deletion
11. Combat interference—even the simplest items can be completely intractable if they are similar to other items. Use examples, context cues, vivid illustrations, refer to emotions, and to your personal life
12. Optimize wording—like you reduce mathematical equations, you can reduce complex sentences into smart, compact and enjoyable maxims
13. Refer to other memories—building memories on other memories generates a coherent and hermetic structure that forgetting is less likely to affect. Build upon the basics and use planned redundancy to fill in the gaps
14. Personalize and provide examples—personalization might be the most effective way of building upon other memories. Your personal life is a gold mine of facts and events to refer to. As long as you build a collection for yourself, use personalization richly to build upon well established memories
15. Rely on emotional states—emotions are related to memories. If you learn a fact in the state of sadness, you are more likely to recall it when you are sad. Some memories can induce emotions and help you employ this property of the brain in remembering
16. Context cues simplify wording—providing context is a way of simplifying memories, building upon earlier knowledge and avoiding interference
17. Redundancy does not contradict minimum information principle—some forms of redundancy are welcome. There is little harm in memorizing the same fact as viewed from different angles. Passive and active approach is particularly practicable in learning word-pairs. Memorizing derivation steps in problem solving is a way towards boosting your intellectual powers!
18. Provide sources—sources help you manage the learning process, updating your knowledge, judging its reliability, or importance
19. Provide date stamping—time stamping is useful for volatile knowledge that changes in time
20. Prioritize—effective learning is all about prioritizing. In incremental reading you can start from badly formulated knowledge and improve its shape as you proceed with learning (in proportion to the cost of inappropriate formulation).
If need be, you can review pieces of knowledge again, split them into parts, reformulate, reprioritize, or delete. See also: Incremental reading, Devouring knowledge, Flow of knowledge, Using tasklists
I don’t know if this question will help:
What is the least-bad way of doing the thing you want to do that you can think of?
(Apologies, I can be of no help because I don’t use Anki; but I wonder if answering this question will help you.)
Meta: in posting the open thread at this time, I note that it is Monday where I am in Sydney, Australia, even if this is roughly 6-12 hours earlier than the usual start time for the open thread. (Hope you all have a good week ahead.)
I like Comic Sans too, but is it intended?
apologies again! (same as last OT)
Update on the Slack: http://lesswrong.com/r/discussion/lw/mpq/lesswrong_real_time_chat/
A list of our topics:
AI
Film making
Goals of lesswrong (and purposes)
Human Relationships
media
parenting
philosophy
political talk
programming
real life
Resources and links
science
travelling
and some admin channels: “welcome”, “misc”, and an “RSS” feed from the LW site.
These are expected to grow and change as we need them. I count 58 people who have joined so far today. Feel free to PM me as well.
It’s worth noting that parenting just opened up.
A Defense of the Rights of Artificial Intelligences by Eric Schwitzgebel and Mara [official surname still to be decided]
Does anyone know of a good life expectancy calculator? Preferably one which has good justification behind the model, and also has been tested.
I tried this calculator, but I noticed a few issues. First, it tells me I should start doing conditioning exercise… when I did check that off. I think that part of the calculator is broken. It also seems to think that taller people live longer, when from what I understand it’s well accepted that the opposite is true. Some of its other features seem unjustified to me; for example, it seems to think you get a life expectancy boost from eating less than 10% of your calories from fat, but I can’t find any evidence for that.
Good life expectancy calculators seem very valuable to those interested in longevity. Perhaps some people at LessWrong should create some sort of model. Though I have little experience with these sorts of statistical models, I think the Monte Carlo method might be useful here to get a distribution. If we put the code on GitHub then others can take a look at its guts and submit corrections/improvements/pull requests if they want to.
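To make the suggestion concrete, here is a minimal sketch of what such a Monte Carlo model could look like. The Gompertz-style annual hazard and its parameters below are invented purely for illustration, not a validated longevity model; a real tool would fit the hazard to actuarial data and adjust it for the user’s risk factors.

```python
import math
import random

def sample_age_at_death(current_age: int, a: float = 0.00005, b: float = 0.085) -> int:
    """Walk forward one year at a time using a toy Gompertz-like annual hazard."""
    age = current_age
    while True:
        hazard = min(1.0, a * math.exp(b * age))  # annual probability of death (toy numbers)
        if random.random() < hazard:
            return age
        age += 1

def age_at_death_distribution(current_age: int, n: int = 100_000) -> list:
    return sorted(sample_age_at_death(current_age) for _ in range(n))

ages = age_at_death_distribution(30)
print("median:", ages[len(ages) // 2])
print("10th-90th percentile:", ages[len(ages) // 10], "-", ages[9 * len(ages) // 10])
```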
A good life expectancy calculator implies a good model of which factors drive longevity. I don’t believe such a model exists (for healthy people—the effects of various illnesses on your life expectancy are known much better). There are a lot of correlation studies but correlations and causality are not quite the same thing.
“Some sort of a model” is a very low bar—presumably you would like the model to be good. People who will be able to make a good comprehensive model of how various health/diet/lifestyle/etc. interventions affect longevity will probably be in the running for a Nobel.
It’s like saying that you found online some investment advice which doesn’t look too good, perhaps some LW people would like to construct a model of the markets that will give better advice. Well...
Fair points. I don’t think what we understand about longevity is as bad as what we understand about investments, though.
I suppose what I’m looking for is a model which 1) doesn’t have any obvious bugs, 2) doesn’t contradict anything we do know, and 3) has at least some evidence behind the model. If it produces a fairly wide distribution because that represents the (poor) state of our knowledge, I think that’s fine.
The issue of correlation vs. causation is also important, and I’m not sure what we could do about it short of allowing someone to turn off certain features of the model if they believe them to be untrustworthy. For example, I’ve seen a fair bit about how marriage is correlated with an increase in longevity, and it seems obvious to me that any similar sort of social structure where one has frequent socialization and possibly receives feedback and care is probably where the real benefit lies. So I think you can say you are married if you believe your situation is equivalent in some way. Obviously these details need to be shown more rigorously, but this is the basic argument.
My conscience is as hypertrophied as the next person’s, but how is a balance struck between avoiding cognitive biases, logical fallacies, etc., and enjoying life?
This is a broad question, and it will get broad answers.
Can you give some examples when avoiding biases made life less enjoyable?
For me, avoiding biases means a cognitive load, which means I have to be vigilant, which means I can’t relax. Perhaps when and if avoiding all or most of the foibles becomes second nature, it will be less of a load. I hope! :)
Would it be bad if you gave yourself time off for specific durations and/or activities?
One approach could be to set priorities. “How important is it if I do this not-optimally? What are the consequences of cognitive biases leading me to a poor choice here?” and to be vigilant on the most important stuff, and let it go for lower priority things.
However, practice can help, and sometimes it is easier to catch oneself on tasks or issues of a smaller scale than on the big important ones. So practicing on the lower priority ones can be useful.
Vigilance takes energy. Awareness...not as much. Maybe a shift toward developing awareness rather than vigilance could help.
Ok, can you give an example of when you felt less relaxed, and the bias this helped you avoid?
I think I know what you are talking about.
There are roughly two modes of functioning: “never thinking hard and going with the flow”, and “thinking hard about what happened”. I would suggest that these are like system 1/system 2 processes for living, where if you only play in system 2 you have an exhausting life in which you feel like you never get far, because you didn’t actually do the washing; you just thought really hard about it. You never really had fun; you just thought hard about it. Etc.
The important thing to note is that we need both system 1 and system 2 to go about getting things done. You are concerned about the balance; absolutely!
In my post here (http://lesswrong.com/lw/mj7/3_classifications_of_thinking_and_a_problem/), Slider suggested a heuristic for producing results in the area of knowing how to balance.
In this case, because you are balancing “hard thinking about the problem” and “enjoying life”: if you find you are not enjoying life, reduce the time you spend hard-thinking. If you find you are making mistakes, or needing more planning time to make things work the way you want them to, increase hard-thinking time. If you want to increase both at once, take a break and work on a problem of no consequence.
Dilbert creator Scott Adams, who has a fantastic rationalist-compatible blog, is giving Donald Trump a 98% chance of becoming president because Trump is using advanced persuasion techniques. We probably shouldn’t get into whether Trump should be president, but do you think Adams is correct, especially about what he writes here? See also this, this, and this.
I think Scott Adams has taken to trolling the readers of his blog.
Taken to? He’s been doing it for like a decade at this point.
I wouldn’t put it at 98%, but I definitely wouldn’t put it at Nate Silver’s 2%, which I think comes from an analysis that is just way too simplistic.
I would take Silver’s analysis over Adams’ any day. Look at their respective prediction track records.
It was because of Nate Silver’s track record that I initially had high confidence in his estimate. Then as I read his justification my confidence in his estimate decreased. I think he’s just being lazy in his justification, here, when he says things like:
To be fair to Silver, when he wrote the article he might not have considered Trump’s campaign plausible enough to give serious thought. I suspect that if Trump continues to perform well in the polls Silver will give a more thoughtful and realistic analysis later on.
Were any of Silver’s previous predictions generated by making a list of possibilities, assuming each was a coin flip, multiplying 2^N, and rounding? I get the impression that he’s not exactly employing his full statistical toolkit here.
Isolated demands for rigor—what do you think Adams is doing? (I think he’s generating traffic.)
But sure, I agree, that’s more of a reasonable prior than an argument. There’s more info on the table now.
What Adams does is look at Silver’s estimate, say that it is way too low, and then take 1 minus Silver’s estimate as his own estimate just to make a point. He does not attempt any statistical analysis, and the 98% figure should not be taken seriously.
What Adams has said he’s doing is simulating the future along the mainline prediction—i.e. nothing too weird happens—and under his model, Trump is guaranteed to win. Then he says “well, maybe something weird will happen” and drops that confidence by 2%, instead of a more reasonable 30% (or 50%).
Does Adams have a track record at predicting this sort of thing? I am not aware of any instance where he’s said “here is a master persuader trying to do X, they will succeed” and they then failed, but I can’t remember more than one instance of him saying that and being correct (and I don’t remember the specifics); I don’t follow Adams closely enough to have a good count.
I think that Adams is raising the sort of challenge that Silver is weakest against: Trump’s tactics are a “black swan” in the technical sense that no candidate in Silver’s dataset has run with a similar methodology. That Silver thinks Herman Cain’s campaign is the right reference class for Trump’s campaign seems to me like a very strong argument for Silver not getting what’s going on.
He has an excellent track record of saying outrageous things—that’s what he is optimizing for, I think.
Why do so many people see Adams as being rationality-compatible? I’ve seen very little that he has to say that sounds at all rational or helpful. Cynical != rational.
See my review of his book: http://lesswrong.com/lw/jdr/review_of_scott_adams_how_to_fail_at_almost/
Having written a rationality-compatible book isn’t the same thing as writing a rationality-compatible blog. (It surely indicates being able to write a rationality-compatible blog, but his actual goals may be different.)
Well… Scott Adams has a lot of money. I am willing to bet that Trump will NOT become president, at EVEN ODDS. Scott, if you read this, how about a wager? I propose a $10,000 stake.
Despite his frequent comments that he’s “betting” on Trump and that Silver is “betting” against Trump, Adams’s position when pressed to actually bet is that gambling is illegal. This means one of the big feedback mechanisms preventing outlandish probabilities is absent, so don’t take his stated probabilities at face value.
(In general, remember how terrible people are at calibration: a 98% chance probably corresponds to about a 70% chance in actuality, if Adams is an expert in the relevant field.)
How convenient for him.
And Adams himself says the “smart money” is on Silver’s prediction! I think Adams’s prediction is more performative than prognostic, even allowing for ordinary unconsciously bad calibration.
Forgetting what I know (or think I know) about Scott Adams, Donald Trump, Nate Silver, Jeb Bush, whoever, and going straight to the generic reference class forecast — I’m very sceptical someone could predict US presidential elections with 98% accuracy 14 months in advance.
Actuarial tables give him a roughly 2% chance of dying before the election.
Well, he’s very likely substantially healthier than the average 69-year-old American man, so I’d be willing to bet at 1⁄50 odds that he will survive to the election.
Did Adams praise Obama for skillful use of vagueness? “Hope” seems to be in the same category as “take your country back”.
I think Scott Adams wildly overestimates the power of conversational hypnosis.
First of all, yes, there have been prominent public figures who are well versed in the art. But that’s no argument at all: how many people are trained in conversational hypnosis (or NLP, or what have you), and how many of those are hyper-successful? And how many hyper-successful people are not trained in Ericksonian hypnosis? You could even make the point that Steve Jobs and Bill Clinton were successful despite being trained in that art.
There’s also something to be said about whether returns on persuasion are linear. If you are 2X more persuasive than your opponent, would you gain twice the supporters? I’m not very confident in this hypothesis either.
There might be a network externality effect with persuasion, where the more people I persuade the more persuasive I become because of social proof issues. In this situation, the returns to persuasion are exponential.
I think Adams is right that Trump has played the media exceedingly well and he has clearly surprised a lot of people. Some Republican pollsters have focus-grouped Trump supporters and found an extreme level of antipathy among them toward “establishment” Republicans. So it is unlikely his current supporters will abandon him in a sudden collapse, which is the failure mode a lot of Trump-skeptics have been describing. That means Trump will likely stay in the race for a long time—unless he gets bored and drops out. I doubt Trump will actually drop out though, he seems to enjoy the fray and clearly hates many establishment conservatives enough to stay in just to have a platform to keep attacking them.
Most likely Trump will split the anti-establishment vote with Ben Carson and eventually most of the establishment candidates will drop out and throw their support to an establishment survivor, who will manage to beat Trump with solid but not huge majorities and take the nomination. If Trump does manage to win the nomination, it is unlikely he will win the white house—odds are less than even, maybe 2:1 against him. Overall I would estimate a ~10% chance Trump wins the presidency.
A summary of rather counterintuitive results of the effect of priming on raising people’s performance on various tests of cognitive abilities, and the ability to negate (or enhance) the effects of stereotype threat through priming:
“Picture yourself as a stereotypical male”
(It’s not all about gender, either. Some of it is about race! How exciting!)
http://slatestarscratchpad.tumblr.com/post/128364907116/gruntledandhinged-drethelin-shlevy
Yes, effects that raise performance are good because they rule out a number of problematic mechanisms. However, this experiment has no control group and thus it does not have this benefit.
In view of this http://essay.utwente.nl/66307/1/Bolle%20Colin%20-s%201246933%20scriptie.pdf did the smartphone makers anticipate addiction, as did the tobacco companies in the U.S.?
Certainly both are profiting from it.
To me it seems like some version of the Tulip Mania.
I’ve never heard of this book or author before, anyone read it? How does it compare to eg “Smarter Than Us” or “Our Final Invention”?
Calum Chace, “Surviving AI”
Do western civilizations owe something to those civilizations that were disadvantaged as a result of imperialism? A common reaction of national conservatives to this idea is that what happened during imperialism is time-barred and each country is responsible for its own citizens.
How much does Mongolia owe Russia? How much do North African countries owe Europe for the millions of Europeans kidnapped and sold into the Arab slave trade in north Africa? The notion is itself ridiculous.
It is relatively easy to understand the situation when one person owes money to another person, having borrowed it before. It is also not much more difficult to understand the situation when one person owes another person a compensation for damages after being ordered by court to pay it. Somewhat more vague is a situation when there is no court involved, but the second person expects the first one to pay for damages (e.g. breaking a window), because it is customary to do so. All these situations involve one person owing a concrete thing, and the meaning of the word “owes” is (disregarding edge cases) relatively clear.
Problems arise when one tries to go from singular to plural while still wanting to use intuition from the usage of the singular verb. Quite often, there are many ways to extend the meaning of a singular verb to a plural verb in a way that is still compatible with the meaning of the former. For example, one can extend the singular verb “decides” to many different group decision-making procedures (voting, lottery, one person deciding for everyone, etc.); saying “a group decides” simply obscures this fact.
Concerning the word “owe”, even when we have a well defined group of people, we usually prefer to either deal with them separately (e.g. customers may owe money for services) or create a juridical person which helps to abstract a group of people as one person and this allows us to use the word “owe” in its singular verb meaning. There are more ways to extend the meaning of the word “owe” from singular to plural, but they are quite often contentious.
“Western civilizations” is a very abstract group of people. It is not a well defined group of people. It is not a juridical person. It is not a country. It is not a clan. The singular verb “owes” is clearly inapplicable here, and if one wants to use it here, one must extend its meaning from singular to plural. But there seem to be a lot of possible extensions. Therefore one has to resort to other kinds of arguments (e.g. consequentialist arguments, arguments about incentives, etc.) to decide which meaning one prefers. But if that is the case, one can bypass the word “owe” entirely and go to those arguments instead, since that is essentially what one is doing anyway; words whose meanings one knows only very vaguely probably do not do much in actually shaping the overall argument.
In addition, “being disadvantaged as a result of imperialism” is very dissimilar from “having a window broken by a neighbour”; it is not a concrete thing. The central example of “owing something” is “owing a concrete and well defined thing”. Whenever we have a definition that works well for a central example and we want to use it for a noncentral one, we again must extend it, and there is often more than one way to extend it (Schelling points sometimes help to choose between all possible extensions, but often there is more than one of them and the choice of extension becomes a subject of debate).
In general, I would guess that if someone argues that an entity as abstract as “western civilizations” owes something to someone, most likely they are either unknowingly rationalizing the conclusion they came to by other means or simply sloppily using an intuition from the usage of the singular verb “owes”. I think that the meaning of the word can be extended in many ways, many of which would still be compatible with the meaning of the singular word and some of them would imply “new generations are not responsible for the sins of the past ones”, while some of them wouldn’t, therefore it is probably better to bypass them altogether and attempt to solve a better defined problem.
Other words where trying to go from singular to plural often causes problems are: “owns”, “chooses”, “decides”, “prefers” (problem of aggregation of ordinal utilities), etc.
Is anywhere on Earth inhabited by the descendants of the humans who first moved in?
Off the top of my head Iceland for sure, Māori-inhabited areas, and possibly the Basque Country. But yes, that’s pretty much the exception.
I’m not sure about “first moved in” but there are families in England who have been there for a very long time.
If you focus on utilitarianism the question doesn’t come up. The important thing isn’t who “owes” but how we can produce utility. If that means the best way is to give bednets to Africans, then that’s the thing to do, regardless of the concept of “owing”.
How can I convince a national conservative of utilitarianism?
Why do you ask?
In general that question sounds like you are not focused on understanding but on persuasion.
The same way that they would convince you of their own views.
By giving me a persuasive reason to care about the subjective utility of people I can’t ethnically identify with.
I would only count debts toward the specific peoples directly affected; e.g. the Spanish Empire lived off Bolivian silver, the Belgians worked the Congolese to death, and the United States is literally built on stolen Native land. Those examples and many others allow for a case in favor of reparations.
However, the passage of time sometimes blurs the effects of exploitation and aggression. Should the UK sue Denmark for the Norman Conquest? Should Italy sue Germany because Germanic tribes destroyed the Roman Empire? Should Hungary sue Mongolia for what the Golden Horde did to them? I admit I don’t know how to answer that in a way that is consistent with my first paragraph.
Related: A British answer.
I think that framing “Imperialism” as belonging to the past is inaccurate.
Many of the problematic behaviours grouped together under the term “Imperialism” have not actually stopped. There are Western developed countries that are doing horrible things to non-Western developing countries right now, and doing horrible things to their own people too.
I think a good first step would be to stop doing the horrible stuff now. If the problematic behaviour stopped, the topic of redress for past wrongs could be considered from a better vantage point. “I’m sorry I killed your ancestors and stole their stuff 100 years ago” tastes like ashes when coming from someone who is killing your family and stealing your things now, or who is doing something more subtle but equally awful.
“Disadvantaged” is a word that glosses over the damage done. Also, the whole question could benefit from being more specific and defining terms better.
No.
Could you explain why you see it this way? Our wealth is partly based on exploitation. Wouldn’t it be fair to fix the damage we’ve done to exploited people? This could perhaps be also justified in terms of utilitarianism, as fairness might bring people closer together which prevents wars.
Not to any significant extent. Most colonized places were net money-losers for the colonizer for most of their history. In addition, I doubt most western-colonized countries were made substantially worse off compared to non-colonized countries, since the Europeans introduced some level of infrastructure, medicine, etc.
First of all, who is this “we” you speak of? More importantly, there are a few “control-group” countries which were not colonized while their neighbors were, like Siam (modern Thailand) and Ethiopia, and they don’t seem better off than their neighbors. Unlike most African countries, which abolished slavery when the Europeans took control, Ethiopia banned slavery only in 1942--under pressure from the British, who were a bit embarrassed to be allied with a slave state.
But then why did people keep conquering and colonizing new lands?
There is also Japan, which was better off than its neighbors. In 1905 Japan was strong enough to win a war against Russia.
Because the people directly responsible for the colonization profited, even if their nation as a whole did not. To go back further in history, the general of a Roman legion often came home from a campaign fabulously wealthy, while the people back home saw far less of the plunder. And asking modern Italians to pay Spain for what Caesar looted is kind of absurd.
Is that true? I can think of examples, like Cecil Rhodes arranging for the British Empire to pay for the Boer Wars for his personal enrichment, but is that typical? The East India Companies were profitable, but they paid their own military costs and used a light touch. I think the question at hand is the 19th century, when European states claimed vast swaths of land.
(I don’t like the comparison to Caesar. I believe that he paid to outfit his army, so the Romans as a whole made a profit, in contrast to knb’s claim about European colonialism, which I believe is correct.)
Yeah, the ‘light touch’ thing is just not true. For all the history Moldbug reads, nRxs seem pretty unaware of the nightmare true corporate governance was historically.
A light touch compared to 19th century state colonialism, which is the context.
https://en.wikipedia.org/wiki/Indian_Rebellion_of_1857
Light touch indeed. They fucked it up so badly, the Crown had to come in and take over directly.
Eh… the story preceding that rebellion argues, if anything, that the Company tried too hard to bend to local practices, and the British public was outraged that “Clemency Canning” didn’t want to come down like a hammer on the natives.
Look, explaining complex stuff that happened is basically what historians do. The fact of the matter is, the EIC policies led to an enormous rebellion that ultimately resulted in the Crown taking over in India, and the EIC ending its independent existence. The EIC policies were terrible and very heavy handed; here is one example:
https://en.wikipedia.org/wiki/History_of_the_British_salt_tax_in_India
(And it’s not like it was not known by this point that people hated salt taxes, they could have just asked the French about how the gabelle worked out for them.)
I am not sure in what sense it can be said that the EIC used a ‘light touch’ in India, unless that phrase can mean basically anything you want it to mean.
The Dutch EIC in Indonesia was much better (but then the Dutch were much better about free trade than the English. The Dutch idea was always to be super efficient about maritime trade and thereby drive others out of a market; the English idea was always to let things run and put tariffs on them. That sounds like a ‘light touch’ policy, but in fact this always got them into trouble, see also the Molasses Act.)
I suspect we should not use “fact of the matter” to describe counterfactual claims. You know how hard the problem of inferring causal knowledge from statistical data is, and specifically, how difficult it is to differentiate between different counterfactual hypotheses. (A says that a plan will fail because it is insufficiently yellow, B says that the plan will fail because it is insufficiently purple. When the plan fails, who do you update towards?)
And even this is highly suffused by interpretation—enormous rebellions are common against governments during this time period, and the implication is that the rebels won, because the EIC lost, which isn’t correct. The EIC forces were 80% Indian, and I can’t easily find numbers, but it seems likely that more Indians fought on the side of the EIC than on the side of the mutineers.
One example… where the British government continued to use a similar policy for 90 years? This is pretty terrible evidence for the EIC being worse than the British government, and the fact that you put this forward to support your claim suggests to me you might want to approach this a bit more carefully.
(If you want to argue that governance in general is terrible and heavy handed, we have a case, but to argue that the EIC is bad by the standards of Indian governance seems to me fairly mistaken.)
In this specific instance, I mean that they recruited from the highest caste of the natives and respected their superstitions, instead of recruiting soldiers who already shared their values or would be more pliable.
More broadly, I share Napier’s views on the EIC and Indian cultural practices.
I have grown less impressed by these sorts of comparisons since reading Albion’s Seed, because there’s pretty good evidence that people move to places where their strategies will work. American colonists varied widely in their approaches to the Indians, for example, but picked places where their preferred strategy would work. Those who wanted peaceful interaction with Indians settled near peaceful tribes (as determined by their relationships with other Indian tribes), and those who were not opposed to fighting Indians for land settled near aggressive tribes (again, as determined by their relationships with other Indian tribes). It seems highly likely that the Dutch sought out the lands where they expected their approach to work best, and likewise for the British.
Well, there are two competing claims here: EIC was a light touch government, or the EIC was a heavy-handed disaster. Now you can argue that the EIC was in fact a light touch government, and all the disasters in India that resulted in EIC terminating its existence were just due to confounders of the time and place. Maybe that’s true! But what exactly is the evidence for the original claim, just some priors on corps being better than governments in some Platonic sense?
I think the point of the argument is whether somehow corporate colonial governments were better than regular ones, so saying a regular government also continued a [bad policy] isn’t really evidence for this.
I define ‘light touch’ operationally—did it work as intended?
The Dutch were late to the game, and got what they could. They did not have a luxury of choosing. Even the British, who essentially were the premier power in a multipolar world, had to worry about other powers sniffing around.
Sense of history is notoriously hard to boil down to specific pieces of evidence, and it’s likely that Douglas_Knight would give a different answer than I would. But I would point primarily at the incentives (corporations are presumably weighting profit higher than glory, governments might be doing the reverse) and the number of boots on the ground; it seems to me that colonial corporations were more likely to use native power structures to suit their own ends, and colonial governments were more likely to replace native power structures. Whether or not this is a ‘light touch’ depends on what specifically you’re measuring. For example, the EIC never outlawed sati (though individual officers did in regions they had control over), and generally prevented Christian missionaries from operating in their lands, presumably because this would disrupt the creation of profit.
I agree with you that the salt tax isn’t relevant evidence, because both the EIC and the British government enforced that policy. The point I was making is that you introduced the salt tax as relevant evidence for comparing the EIC and the British government, and that suggests to me that you may want to be more cautious in reasoning about this area.
(I don’t think inertia has enough of an effect to make creating and continuing a policy significantly different, especially given the amount of time involved.)
From the East India Company wikipedia page:
The Dutch and British appear to have been operating at roughly the same time—the first British voyage to the area seems to have been a few years sooner, but the first significantly profitable voyage seems to have been Dutch.
I wouldn’t describe the Moluccas as “got what they could!”
History can be all things to all people, like the shape of a cloud it’s a canvas on which one can project nearly any narrative one fancies.
Compared to what?
That is a very good question on which books have been written. Some of this was about religion and prestige, and competition with others. Some of it was various sovereigns being convinced to fund dubious (in retrospect) ventures by good marketing.
We have our biases and our cultural zeitgeist, and folks in the past had theirs. After the Ottoman Turks conquered Constantinople and killed off the Roman empire for good, the Portuguese started looking for an alternative route to do spice trading (and also to look for Prester John, the mythical Christian king in the east). “We are looking for spices and Christians” was the motto.
The English had complicated reasons to start colonizing that were not all about money. A lot of the time it felt like colonial things happened for complex reasons (e.g. having to do with what was happening with Christianity at the time), and the Crown tried to find ways to make money off it.
It was the case that at some point the sugar trade became very valuable (e.g. to Napoleon the tiny sugar-producing possessions of France were worth much more than the entirety of Louisiana), but this happened much later—there wasn’t a “master imperialist plan” at all.
Because conquering new lands helps spread the meme that one should conquer as much as one can.
Money is not the only motivator. Power is another one.
I don’t see any basis for this claim. More explicitly, I don’t see any reasonable and consistent legal/moral theory which would justify such a claim. Note that I do not consider the popular “deep pockets” legal theory to be reasonable.
Do all other civilizations owe something to western civilization for the benefits they gained stemming from western science and technology?
Meh, companies clearly did get rich on exporting western technology (and they often didn’t export our ethical standards, to maximize profit).
Capturing only a tiny fraction of the value they created, and that’s just the for-profit companies, not to mention all the scientists and charitable organizations that gave out western science and technology for free.
I would love to see some statistics on that, but it’s probably too hard to measure; also, what percentage of the exported technology was charity?
This seems to be clearly an ethical question to me, and the field of ethics is far from scientific. What kind of answer are you looking for?
My system of ethics would suggest that developed nations are morally obligated to help poorer nations (at least in so far as significant human suffering is caused by limited resources), and that this is the only relevant factor. So help disadvantaged peoples yes, but the cause (imperialism or otherwise) is irrelevant in determining the need.
If you would like a different answer, I can surely construct an argument pointing in the direction you prefer.
But the cause is relevant to determining the incentives created by your help.
I get the feeling that “national conservatives” is the name of some specific political movement or affiliation in your own country. It is not a phrase I have heard before. What specifically does it refer to? The movement discussed in the Wiki article appears to be of significance mainly in the former-communist European countries, and even there consists mainly of minority parties. These countries are not the ones for which an argument is being made for post-imperial reparations.
I meant people from the right-wing nationalist, conservative spectrum, not a particular group with that name. It’s just that I’ve often read that argument expressed by people whom I’ve associated with this spectrum.
I think that people in a position to actually do something about it generally take a similar view, but not so loudly, preferring the idea to just go away, while avoiding the media storm that would result from saying straight out, “We’ve got ours, deal with it.” That is something that can only be said by those who are not in a position to do anything but talk.
The opposite view, “all of the developed world’s prosperity was extorted from the rest and should be restored in full” is of the same nature. No-one can say it and get into power to do it.
Something which may prove interesting to somebody here:
A tentative list of internal states (certainly incomplete), divided into emotions and mental states. I distinguish between emotions and mental states on the basis of something I can’t quite put my finger on, but I’m reasonably certain there -is- a difference, something like the difference between color photographs and black-and-white photographs. (It’s quite fuzzy in some places, though, so not everything neatly fits in one or the other. Suspicious/paranoid, for example, I quibble about the placement of.) I’ve done a few passes at combining emotions I suspect are identical except for context and intensity. You’ll notice emotions like “Happy” and “Angry” aren’t present—unless somebody can correct me, I think these aren’t distinct emotions in and of themselves, but simplifications of a broad range of more complex emotions. (A couple permutations of “Angry” show up under “Rage”). Some words show up multiple times, where the word appears to refer to more than one emotional state, with clarifications.
Out of the emotions listed, I experience somewhere around a third of them, which makes it hard to evaluate how distinct they actually are, and in other places leads me to incorrectly consider them separate internal states. Of the mental states, I experience most of them (which is why I think the sorting criterion isn’t -entirely- arbitrary). Of the uncertain ones—I have no idea whether those things are actually distinct feelings, or just ways people describe other people’s behavior, so it’s safe to say that, if they are experienceable, they’re among the things I don’t experience.
The list is largely drawn from entries in the following list: https://robbsdramaticlanguages.files.wordpress.com/2014/07/vocabulary-expand.jpg.
Some I’ve omitted as being, as far as I can tell, embellishments. I’ve added others, as well.
Emotions:
Abandoned/Alienated/Rejected/Discarded/Deserted (Distinct?)
Abused/Put-Upon/Exploited/Used (Distinct?)
Acceptance
Appreciated
Appreciative
Appalled/Disturbed/Horrified (Distinct?)
Amorous/Horny
Amusement
Anxious/Tense
Ascendant/Transcendent
Ashamed/Shameful (Distinct?)
Assured/Reassured (Distinct?)
Awkward
Bittersweet
Burdened
Cheated/Deceived/Betrayed
Cheery
Compassionate
Condemned/Doomed
Confident/Self-Certain
Controlled/Constrained/Trapped/Smothered/Stifled/Coerced/Dominated (Distinct?)
Craving/Attraction/Desire (Generalized)
Crushed/Defeated
Delight/Joy (Distinct?)
Degraded/Defiled
Demoralized
Depressed/Dejected/Dispirited (All-encompassing negativity)
Desperate
Despised/Hated (Distinct?)
Determined
Disappointed
Disenchanted
Disgusted/Repulsed (Distinct?)
Disgraced
Disheartened/Discouraged (Distinct?)
Divinity/Inspiration
Doubtful
Dread
Elation
Embarrassed
Empty
Enchanted
Encouraged
Ennui/Lacking direction (Distinct?)
Enthusiastic
Envy
Fear/Fearful/Averse (Distinct?)
Fortunate/Lucky (Distinct?)
Frustrated (Limited)
Frustrated (Exasperated)
Fulfilled
Grateful
Grief/Mourning
Harassed
Helpless
Hopeful
Hopeless
Humbled (Awed)
Humbled (Intimidated)
Humbled (Status drop-ish)
Humbled/Insecure (Unworthy)
Humiliated
Hurt/Wounded (Distinct?)
Ill-will
Inadequate
Indignant
Indulged/Gratified/Satisfied
Irritated/Annoyed/Provoked (Distinct?)
Isolated/Lonely
Jealous
Lost
Loved
Love (Towards others)
Love (Towards self)
Misunderstood
Neglected/Uncared for/Unappreciated (Distinct?)
Nervous/Tense/Panicked (Distinct?)
Offended
Optimistic
Peaceful (At peace)
Perplexed/Confused/Puzzled (Distinct?)
Pessimistic
Pitiful (Others)
Pitiful/Litost (Self)
Protective
Proud
Rage (Righteous/Outrage)
Rage (Seething)
Rage (Vengeful)
Reckless
Rebellious
Regretful
Relieved
Resentful
Resigned
Resolved
Respected/Admired
Respectful/Admiring
Restless
Revolationary/Inspired
Schadenfreude
Scheming
Sorry/Apologetic
Spiteful
Suspicious/Paranoid
Thrilled
Torn
Uncertain/Unsure/Undecided (Distinct?)
Undesired/Unwanted (Distinct?)
Unloved
Uncomfortable/Unsettled
Vulnerable/Threatened/Timid (Distinct?)
Worthless
Mental States:
Alarmed
Apprehensive
Ambivalent
Amused
Bewildered/Confused
Defensive/Guarded
Depressed (Low-emotion)
Distant
Distracted
Drained
Energetic
Excited
Exhausted
Equanimous
Flustered
Frantic
Manic (High-emotion)
Overwhelmed/Petrified/Stunned (Distinct?)
Reluctant
Shocked/Startled/Shaken (Distinct?)
Skeptical
Surprised/Startled
Uncertain:
Apathetic
Deprived
Dismayed
Disrespected/Slighted
Distressed
Exuberance
Flattered
Hesitant
Jubilant
Patronized
Patronizing
Pleased
Shy
Tolerant
Wasted
I would like to point out a concept that has recently entered into my life.
Sometimes these emotions are generated internally, and often the word for the emotion is one that describes an emotion that “pulls” you to feel that way. An example is “Appreciated”, where something else gives you the feeling of being appreciated. It’s not an emotion you can give to yourself (only recognise it), whereas distress, or hesitation, can come from yourself.
Not sure how that adds to the list exactly.
I made a spreadsheet of how often I think I experience each one (https://docs.google.com/spreadsheets/d/1lkOftycrnhjSdbC6cExawoiyX-Jbn9wuxg2GlCjGeh4/edit?usp=sharing), on a scale of 1-10; nothing is 9 or 10 because that would imply I experience it all the time.
Scheming! That emotion definitely belongs on the list. WRT Disappointment/Disheartened/Discouraged, which would you separate? (Or are all three distinct?)
There is a sense that some of these are… very self-inflicted. I suspect some people have a fine degree of control over that, and others have no control over the distress, or hesitation, they experience. (I don’t feel “Appreciated”, so I can’t comment on that example, but there are similar external emotions I do feel, such as annoyance, which is one I’m incapable of feeling towards myself, in pretty much exactly the same way I can’t tickle myself.)
Equanimity is… a bit broader than “cool and collected”, at least in my personal experience. Cool and collected is a good description for the outer-state of it—what is directly experienced in most situations. There’s an inner component to it, too—it’s… a capacity for dealing with emotions. It’s the capacity to remain cool and collected, whatever emotions are hurled at you. When my equanimity is low, I feel like I’m on the top of an immensely tall column that is swaying haphazardly, and will topple in the slightest emotional breeze. When my equanimity is high, there’s an inner stability, like a hurricane of emotion couldn’t budge it—I describe that state as “centered”.
I would separate Disappointment from Discouraged, as they are distinct things that don’t have to occur together. Disappointment also doesn’t have to be disheartening. Disheartened/Discouraged are similar and could probably be left close by.
Looking good. Not sure how to use it; but if it stays up—I will think about it...
Done!
No idea what any of those three are supposed to feel like. I imagine the inverse of relief?
Disheartened ~= “soulcrushing”
Discouraged ~= I am running a race against my peers and I don’t seem to be able to keep up. After a month of training, they seem to be getting faster and I seem to not be keeping up at the same grade. “All this effort for nothing”
Disappointed ~= I was expecting chocolate spread on my sandwich but it was only jam. (Slightly in the direction of “something I expected but did not quite estimate right”.)
This is useful. Do you have experience with Focusing? Part of the workflow is to sit with your emotional state and gently try to discern what label applies to it. This can be hard because sometimes the feeling is complex or unclear, but I expect part of the difficulty lies in a simple lack of vocabulary with which to label the feeling.
The biggest issue from my perspective is that the labels don’t immediately connect to any kind of easily-communicable qualia, so even if you know the correct label, you don’t necessarily have a good way of connecting the label to the feeling. (That said, the only emotion I required outside assistance to identify was a generalized anxiety, which didn’t feel at all like I expected it to. I expected anxiety to be definitively unpleasant, and it was merely ambiguously so.)
I’m looking for a good demonstration of Aumann’s Agreement Theorem that I could actually conduct between two people competent in Bayesian probability. Presumably this would have a structure where each player performs some randomizing action, then they exchange information in some formal way in rounds, and eventually reach agreement.
A trivial example: each player flips a coin in secret, then they repeatedly exchange their probability estimates for a statement like “both coin flips came up heads”. Unfortunately, for that case they both agree from round 2 onwards. Hal Finney has a version that seems to kinda work, but his reasoning at each step looks flawed. (As soon as I try to construct a method for generating the hints, I find that at each step when I update my estimate for my opponent’s hint quality, I no longer get a bounded uniform distribution.)
So, what I’d like: a version that (with at least moderate probability) continues for multiple rounds before agreement is reached; where the information communicated is some sort of simple summary of a current estimate, not the information used to get there; where the math at each step is simple enough that the game can be played by humans with pencil and paper at a reasonable speed.
Alternate mechanisms (like players alternate communication instead of communicating current states simultaneously) are also fine.
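A minimal sketch of the kind of protocol I mean, purely for concreteness (my own illustration, not Finney’s; the function names, the uniform-and-independent-signals assumption, and the simultaneous-announcement rule are all just choices I made):

```python
# Sketch of the "exchange exact posteriors until agreement" protocol
# (Geanakoplos & Polemarchakis style). Assumes each player's private signal
# is drawn uniformly and independently from a small finite set, and that both
# players announce exact posteriors simultaneously each round.
from fractions import Fraction


def posterior(event, my_signal, other_possible, i_am_a):
    # P(event | my signal, other player's signal uniform over other_possible)
    hits = sum(1 for s in other_possible
               if (event(my_signal, s) if i_am_a else event(s, my_signal)))
    return Fraction(hits, len(other_possible))


def run_protocol(signals_a, signals_b, event, true_a, true_b, max_rounds=20):
    possible_a, possible_b = set(signals_a), set(signals_b)  # common knowledge
    history = []
    for _ in range(max_rounds):
        pa, pb = set(possible_a), set(possible_b)  # state at start of round
        say_a = posterior(event, true_a, pb, i_am_a=True)
        say_b = posterior(event, true_b, pa, i_am_a=False)
        history.append((say_a, say_b))
        # Both announcements are now public, so everyone prunes the signals
        # that would have produced a different announcement:
        possible_a = {a for a in pa if posterior(event, a, pb, True) == say_a}
        possible_b = {b for b in pb if posterior(event, b, pa, False) == say_b}
        if possible_a == pa and possible_b == pb:
            break  # no new information revealed; announcements repeat forever
    return history


# The trivial one-coin game described above, both players flipping heads:
print(run_protocol(["H", "T"], ["H", "T"],
                   lambda a, b: a == "H" and b == "H",
                   true_a="H", true_b="H"))
# per-round (A, B) posteriors: 1/2, 1/2 in round 1, then 1, 1 in round 2
```

Running it on simple coin games makes the annoyance visible: the very first announcement already pins down each player’s private information, so agreement arrives on round 2.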
Bridge, the card game. Bidding is the process of two players exchanging information about the cards they hold via the very limited communications channel (bids). The play itself is also used to transfer more information about which cards remain in the hand.
I don’t know if that will work as a demonstration of Aumann’s theorem, though; bridge gets very complicated very fast :-/
That’s an excellent practical example, though it doesn’t really have the explicit probability math I was hoping for.
In particular, I like that you’ll see things like which player thinks the partnership has the better contract flip back and forth, especially around auctions involving controls, stops, or other specific invitational questions. The concept of evaluating your hand within a window (“My hand is now very weak, given that I opened”) is also explicitly reasoning about what your partner infers based on what you told them.
I think the most important thing here might be that bridge requires multiple rounds because bidding is limited bandwidth, whereas giving a full-precision probability estimate is not.
If you want explicit probability math, you might be able to construct some kind of cooperative poker (for example, allow two partners to exchange one card from their hands following some very restricted negotiations). The probabilities in poker are much more straightforward and amenable to calculation.
The two-coins example might be useful as a first step, even if you then present a more difficult one.
How about some variation on Bulls and Cows?
That seems like fertile ground for exploration, but no probability / agreement variation immediately springs to mind. Did you have something specific in mind?
Have several people try to guess the same number, with everyone able to see everyone’s guesses and results.
But then everyone has the exact same information, right? I’m specifically looking for something that’s like Hal Finney’s game, in that the different players have different information, and communicate some different set of information (some sort of knowledge about the state of the world, like their posteriors on the joint data).
Based on the simple coin flip, other games:
Several coins;
scissors paper rock (and then iterated)
I am sure there are more small games that have a similar “known” problem space.
What change would you make that results in multiple rounds being required?
For example, if each player flips multiple coins, and then we share probability estimates for “all coins heads” or “majority of coins heads” or expectations for number of heads, in each case the first time I share my summary, I am sharing info that exactly tells the other player what information I have (and vice versa). So we will agree exactly from the second round onwards.
The example I was thinking of:
Each player flips 3 (or 10) coins of their own (giving them various possibilities for what they think the whole coin-space looks like). They present their 90% and 99% confidence intervals on there being more than 4 (or 9) heads. Then repeat for round 2. (Also make statements based on what they think the state of play is, and try to get to the answer before the other person—so maybe make statements that can be misleading?)
Not sure how easy it is to tease out that information for a human. Maybe a computer could solve it, but not so much a human...
“I flipped 10 coins; My 90% confidence that there are at least 7 of each heads and tails is 90%. 99% confidence is 60%.”
confidence for “at least 10 heads and 6 tails” etc.
Here’s how that goes. I flip 3 coins. Say I get 2 heads. My probability estimate for “there are 4+ heads total” is now 4⁄8 (the probability that 2 or 3 of your coins are heads). For the full set of outcomes I can have, the options are: (0H, 0⁄8) (1H, 1⁄8) (2H, 4⁄8) (3H, 7⁄8). You perform the same reasoning. Then we each share our probability estimates with the other. Say that on the first round, we each share estimates of 50%. Then we can each deduce that the other saw exactly two heads, and on the second round (and forever after) both our estimates become 100%. For all possible outcomes, my first round probability tells you exactly how many heads I flipped, and vice versa; as soon as we share probabilities once, we both know the answer and agree.
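For what it’s worth, a quick brute-force check of those numbers (my own sketch; nothing here beyond the 3-coins-each setup above):

```python
# Enumerate my possible observations (0-3 heads among my 3 coins) and the
# posterior that the 6-coin total is at least 4 heads, i.e. that you flipped
# at least the remaining number of heads among your 3 coins.
from fractions import Fraction
from math import comb

for my_heads in range(4):
    needed_from_you = 4 - my_heads
    favourable = sum(comb(3, k) for k in range(max(needed_from_you, 0), 4))
    print(my_heads, Fraction(favourable, 8))
# prints: 0 0, 1 1/8, 2 1/2, 3 7/8  (the 1/2 is the 4/8 above)
```

Since the four values are all distinct, a single announcement reveals the announcer’s head count exactly, which is why the game is over after one exchange.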
(Also, you’re not using “confidence interval” in the correct manner. A confidence interval is a range for an estimated quantity, not a single posterior probability for an event.)
I still don’t see any version of this that’s simpler than Finney’s that actually makes use of multiple rounds, and when I fix the math on Finney’s version it’s decidedly not simple.
My version of making this work would be choosing to only share limited information.
I.e. estimates like “33% heads”, or estimates like “>10% heads and >80% tails”, where they don’t sum to 100% and it’s harder to work out the “unknown space” in the middle—limiting the predictions to partial information. Playing with multiple people should also make it more complicated, as would an optional number of coin flips (chosen by the person flipping and unknown to the others, within parameters).
Are there any advocacy groups for sex buyers or ‘johns’? They’re an affluent bunch, their interests include easily influenced poor settings, and they’re not necessarily constrained by the scrupulosity that advocates for, say, sex workers’ rights may have. It surprises me that they don’t exist, when advocacy groups for smokers and other vices exist and only advocacy groups for the suppliers and workers in the sex trade seem to exist.
Being a sex buyer is low status. Being in an oppressed group such as sex workers is high status in many political contexts.
That depends. Being a john is low-status. Inviting girls over to your yacht for champagne and caviar is high-status.
That really depends. A whore is not a high-status profession.
That’s not being a “sex buyer” within the context of needing advocacy for sex buying.
Thus, “in many political contexts”.
No true Scotsman would splurge on champagne and caviar, I see...
Name three (different ones).
The advocacy needed for someone who uses champagne and caviar to woo a woman into having sex with him is different from the type of advocacy needed for someone who patronizes prostitutes. While both involve sex and things of value, the social and legal challenges faced by their practitioners, that they would need advocacy to affect, are different.
I am not quite sure of the whole “need for advocacy” business. I am sympathetic to legalizing prostitution (and, generally speaking, all kinds of interactions between consenting adults), but formulating that as “johns need advocacy” is problematic.
In many Scandinavian countries selling sex is legal, but buying it isn’t.
There are two situations. One is legal. One is not, in most places.
Clearly there are two categories, regardless of how arbitrary the distinction between them may seem.
The original issue was whether buying sex marks a man as low-status. I continue to think that it depends: in some contexts it does and in some contexts it does not.
An example where it does not: a high-roller in a Vegas casino orders half a dozen hookers to his room.
How many of the people who engage in that behavior do you think would like it if it would be public knowledge? In many cases I don’t think that those high-rollers want publicity for the action which suggests that their general status doesn’t benefit from it.
If a dude’s ordering half a dozen hookers to his room, he doesn’t mind publicity for it.
Calibration error. It’s still a low-status signal, there are just other high-status signals embedded in the sentence that make up for it for low-status individuals who couldn’t afford that.
Consider, by comparison, the status signal given by a high-roller in a Vegas casino being accompanied by half a dozen women to his room without having to pay for their services (directly). Or a high-roller in a Vegas casino buying a night of drinks for a bar (or equivalent conspicuous consumption purchase to six hookers).
No, I don’t think so. The thing is, many status signals flip the sign depending on whether you’re low-signaling-medium or high-signaling-veryhigh. A usual (though maybe a bit outdated) example is a Blackberry: if you’re a low-level office drone, possession of a Blackberry signifies high(er) status. But if you’re a captain of industry, you won’t carry a Blackberry because you have minions for that.
With buying sex it’s a bit more complicated because you have culture/religion messing up the status messages. But let’s look at the margins: does a high-roller in Vegas lower his status by ordering girls to his room? I don’t think so. And, of course, culture/context matters a lot: what’s fine for a hip-hop mogul would be unthinkable for a Boston brahmin.
Is it really a matter of sign-flipping, or is it just that the same status level can seem low or high depending on what you’re comparing with? If a Blackberry signifies (or signified) middle-to-senior-manager status, then it’s a high-status signal for a minion and a low-status signal for the big-company CEO. If inviting six hookers up to your room in Las Vegas signifies not-very-classy-high-roller status, then again it’s a high-status or low-status signal depending on the starting point. Nothing needs to flip; it’s the same status in either case; but the reference point (set by other characteristics of the person or situation) can be lower or higher.
Flip the sign :-) What status does the lack of a Blackberry signify?
It signifies a set of possible statuses; more precisely, either the presence or the absence of a Blackberry actually signifies something more like a probability distribution over statuses. (More precisely still, they’re likelihoods rather than probabilities, and needn’t sum to 1.) The absence-of-Blackberry distribution is like a notch filter; learning that someone doesn’t have one makes it (or did, a few years back) much less likely that someone occupies the middle-to-senior-manager niche.
This (I take it this is your point) can produce more counterintuitive updates than a more “unimodal” signal like the presence of a Blackberry. Learning that someone has no Blackberry will tend to make your assessment of their (corporate) status more “extreme”. You can call that a sign flip if you like; I’m not convinced that’s a helpful way to look at it.
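A toy numerical version of that point, with entirely made-up numbers (the three status levels and the Blackberry likelihoods below are mine, purely for illustration):

```python
# Illustration of the "notch filter": likelihoods need not sum to 1 across
# hypotheses, and the *absence* of a mid-status marker pushes posterior mass
# away from the middle, relative to the prior.
statuses = ["low", "middle", "high"]
prior = {"low": 0.6, "middle": 0.3, "high": 0.1}
p_blackberry = {"low": 0.05, "middle": 0.6, "high": 0.1}  # made-up likelihoods

def posterior(has_blackberry):
    like = {s: (p_blackberry[s] if has_blackberry else 1 - p_blackberry[s])
            for s in statuses}
    unnorm = {s: prior[s] * like[s] for s in statuses}
    z = sum(unnorm.values())
    return {s: unnorm[s] / z for s in statuses}

print(posterior(True))   # mass concentrates on "middle"
print(posterior(False))  # "middle" loses mass; "low" and "high" gain relative to it
```

The presence-likelihoods don’t sum to 1 across the three hypotheses, and they don’t need to; only their products with the prior matter.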
You’re introducing too many variables to consider. Take this out of rationality, and into intuition, because that’s where status evaluations are made anyways. Pause for a second and picture the person who is hiring six prostitutes.
Does he -look- high status? Is he short, or tall? Is he wearing nice well-tailored clothes, or are they alcohol-stained and rumply? Is he lean, or overweight? Is his arm over the shoulder of one or two of the prostitutes, or at their waists? Are his teeth clean and white, or plaque-colored? How obviously drunk is he?
Now picture the guy who is engaging in another form of conspicuous consumption, buying a night of drinks for a bar. What does he look like?
More importantly: What images do you think -other- people would conjure, when they imagine these two people?
Given only the information that he’s rich and hired six prostitutes, most people aren’t going to picture a well-groomed businessman. You leap to a lower-status class of rich—hip-hop mogul—where hiring six prostitutes might be acceptable, without apparently realizing you’re shifting to a lower-status class of rich. (And even among hip-hop moguls, buying six women suggests you can’t seduce them.) I’d leap to rock star, another form of lower-status rich.
I don’t know and neither do you. I think different people would conjure different images.
And status is (at least) a two-variable function: you think that a hip-hop mogul is a “lower-status class of rich”, presumably lower than a Boston brahmin—or, more generically, a rich New England WASP with lineage stretching to the Mayflower or thereabouts—but that’s not a universal. In some sub-cultures it is lower, in some sub-cultures it is higher.
Not only do I know, the vast majority of people know; it is this shared knowledge which makes status signaling possible in the first place.
And sure. And the rich New England WASP and hip-hop mogul are both lower-status than almost anybody at a convention of physicists. And at an imaginary convention of johns, the guy who buys thirty is the highest status. That’s not the context which matters for the purpose of law and advocacy, however.
You are confused between being sufficiently socially clueful to understand status signaling and having the same mental imagery in response to a short description.
But I’m not quite sure what are we arguing about :-) Is there any falsifiable notion in play?
I wonder what is the lesson here.
“If you want to buy sex for money, you better have a lot of money, or it will reflect poorly on you.”
Or perhaps:
“Doing things in a way which demonstrates that you have a lot of money can make almost anything high-status.”
Or: be classy, not crass. Form and style matter.
It is, of course, easier to be classy when you have a yacht stocked with champagne and caviar on hand… X-/
Counter-example: Donald Trump. A dictionary counter-example: nouveau riche :-)
From memory: Amnesty International has come out in favor of legalizing prostitution. They were grudging about admitting that, while they aren’t going to call it human rights, they have to support something like human rights for prostitutes’ customers and agents.
I read the Amnesty paper and it didn’t say anything about rights for customers or agents.
Hence the term “status whore.”
Cigarette companies manage to fund advocacy groups for smokers. The mafia that runs brothels, on the other hand, doesn’t fund advocacy groups.
I hypothesise that there are several topics for which you can reliably expect upvotes or downvotes depending on your position, regardless of your content:
cryonics
effective altruism
synthetic biology
libertarianism
meta
regular threads (e.g. open thread)
posts by top posters
posts referring to rationality blogs
conceding a mistake
Anyone ever try modeling internal monologue as political parties? I suppose it’s not so different from the House voices in HPMOR, but I’m curious if there’s RL experience.
Why would you want to dumb yourself down? X-/
I’ve been thinking about different ways to model the adaptive system of thought and ideas in my mind. Governments don’t seem like a helpful model because parts of my mind aren’t as autonomous as people, nor do I have clearly defined interests groups or political party proxies. Also keen to hear ways of modelling that system for internal usage.
The abstraction is that each party gets one voice, without worrying too hard about who exactly is speaking for it, and the voting public represents the support for each voice.
I find parties better capture the fact that some voices are more supported than others. If I thought of all the voices in my head as people in a room together, I’m afraid I’d end up thinking the voices I most endorse are jerks pushing everyone else around.
I’ve tried to model it as it was shown on Herman’s Head. It helps me remember that I don’t have to listen only to my inner wimp.
Political parties, no. I just don’t care that much about the topic to have a solid identity for any party which I could usefully use to apply to myself.
I do have an internal dialog, though. It’s just more fluid about identity of participants. I generally think of it as different-timeline future-selves arguing about which of them has it better based on the decisions I’m about to make.
Are most of the hard choices you face ones with known factual outcomes? The future-self approach seems to rely on that.
Nope. Hard choices will have outcomes, but I don’t know them in advance, and can’t always be sure of them even in retrospect. That doesn’t keep me from imagining how I’ll feel about the decision if I find myself in each cell of the matrix of options and outcomes.
In the US at least, where the system is set up such that there can only be two parties that matter, I think the parties are too much of a “big tent” hodgepodge for that to work. Perhaps it would work if it were based on the parties in a country where they have more of an incentive to be organized around a consistent world view.
Any Germans want to weigh in?
Regarding prediction markets and regulation, does anyone know whether a betting market wherein the payout for the betting contract goes to the winner’s choice in charities (as opposed to going to the winner) would avoid most or all of the legal issues involved?
So, Long Bets? Betting for charity has always been legal AFAIK.
Ask a lawyer, it probably depends on the exact wording of anti-gambling laws. The answer also is likely to depend on whether the betting market collects any fees in process.
Killer robots about to be released into the world’s oceans!!eleven!
So says Auntie Beeb.
http://dilbert.com/strip/2015-09-02
I bought a $200 prepaid debit card to precommit to getting a Beeminder account that won’t fuck up my bank balance. I plan to use it to give up pornography and excessive masturbation (once a week or less is my goal). However, $200 doesn’t have a lot of marginal value to me. I’m thinking of exploiting my irrationality and warm fuzzies by precommitting to donate it to a warm-fuzzies charity, or maybe I’ll put the money towards potential dates so I can get a girlfriend as a substitute if I succeed at not fapping or watching porn. Ideally there would be a system whereby I could donate, at the end if I pass the Beeminder test, to people who would be incentivised to help me stay on the yellow brick road. I hope Beeminder will let me do that. Any tips or comments? I’ve never used Beeminder before.
I am curious about your terminal goal here.
accidental post
I’m confused. Do you want to use the $200 to pay people, charities, or a dating fund when you derail? Beeminder does not allow that directly, but you are free to do additional things when you derail if you want. However, Beeminder does have “supporters”, who will get an email when you derail, and you could use this to do something similar (like get them to bug you to pay them).
Yes, either, and, or.
I have yet to see a treatise, for strategic managers or from academics of any domain, on the game-theoretic implications of data science and data-driven firm behaviour in general.
I for one would expect data-driven organisations to act more rationally and therefore more predictably, meaning that game-theoretically optimal strategic behaviour (or rather an approximation of it, since many data-driven organisations will be stupid, like the many poker players who together approximate a Nash equilibrium) would maximise expected utility. However, I don’t see how machine learning by itself provides an avenue for firms to inform their strategic multi-agent decisions. They instead need to consider artificial intelligence techniques more broadly, and to be able to frame machine learning in that context. This, I suspect, will lead to the goldrush for AGI development. As soon as the potential for this becomes common knowledge, LinkedIn losers will start hailing ‘AI expert’ as the sexiest job of the 21st century. MIRI, take heed of my warning: if you are not more transparent with your research agenda (which, for those who don’t know, is still secret in part), you may find yourself developing FAI solutions far too slowly.
Release your agenda and let others work on your problems cooperatively. Maybe you’ll even get a more heterogeneous audience at the Intelligent Agents Forum. Maybe mainstream researchers can craft work on the mathematical foundations of AI or UAI that you can actually use. I suspect the reason that this community blog, albeit devoted to human rationality and not machine rationality, devolves into topics like ‘polygamy’ is that we don’t have shared problems to solve.
Human rationality is a very, very awkward construct, and the problem space is unclear and tangential, albeit related to MIRI’s work, which, let’s admit, is the very reason this place exists. Let us run wild, and perhaps LessWrongers will start alternative agendas like developing criminal networks and intelligence networks so potential hostile AI could be detected in advance and stopped coercively. I’m just giving the first example I could think of.
My point is, you don’t have any significant proprietary hard assets, so why shouldn’t I or any other particular funder instead create a prize or award for a more transparent FAI research organisation to pivot off your incredible work? I’m not in a position to judge whether or not your ongoing contributions are essential, but this could also be a good opportunity for the community to discuss what will happen if or when you die or become incapable of contributing to the community. The same goes for other critical members of the community. Are there intellectual succession processes in place?
Does anyone else have trouble with people who openly display their intelligence or attempt to be smart about something? High school and the media have somehow ingrained a hostility towards that, and I find it surprisingly hard to overcome. I think it is some sort of empathy response, similar to vicarious embarrassment.
It’s worth distinguishing a number of things.
1. Actually and visibly being really smart, and pretty much always right in their domain of expertise.
2. Trying to look really smart and right, over and above merely being so.
3. Arrogance in dealing with people who are wrong.
4. Arrogance in dealing with people disagreeing with oneself.
(1) is a great virtue, (2) and (4) are mortal sins of rationality, and (3) merely a venial one. I will overlook a lot of arrogance in someone who is actually pretty much always right, especially if it isn’t me they’re being arrogant at.
People who are insecure around smart people often read actually being right and knowing it (1 and 3) as pretending to be right and intimidating others (2 and 4).
seconded. nothing to add.
That’s what the little thumbs-up button is for.
I don’t think we have a problem on LW with too many people writing messages saying that they agree with other people.
I find it good to be explicit, both to add support for the original idea and to tell the person they have agreement, not just “that was a thing that I felt like +1-ing”.
but I could have been more lazy...
I openly display my intelligence all the time. Nobody would -describe- it as that, however. They’d describe me as giving advice, suggesting solutions, or similar -specific- activities, and only in appropriate situations. (If you don’t know when advice is desired—which is, critically, not whenever somebody mentions a problem they have—don’t give it unless asked.)
“Openly displaying your intelligence”, as an activity in itself, is merely -bragging-, and is just as annoying, and for precisely the same reason, as the guy who will tell anyone who will listen about how he’s a motorcycle racer who could easily win any race he ever entered, but he just enjoys riding his motorcycle for the fun of it.
For me the most annoying aspect of “displaying intelligence openly” is the following:
Imagine that you have an average person A, an intelligent person B, and a super-intelligent person C. More precisely, imagine that there are 100 As, 10 Bs, and 1 C, because most people are at the center of the bell curve.
From A’s point of view, both B and C are smarter than him, and he cannot really compare them. All he can say is that he kinda understands what B says, but a lot of what C says is incomprehensible.
The experience of B is that most people are either A or B. Add some political or other mindkilling, and B may quickly develop a heuristic “everyone who agrees with me is a B, and everyone who disagrees is A and a huge waste of time”.
Now once in a while B and C meet and disagree about something. B, using their long-practiced heuristics, says “lol, you’re an idiot”.
An observer A looks at their interaction and thinks “B is probably right, since I know B to be a smart person; and C also seems kinda smart, but not as smart as B, and B says he is wrong, so he probably is”.
From my point of view, B is “cheating” in this process, using both his intelligence and his lack of even higher intelligence to create an advantage over C. Thus I applaud the norms which prevent this, even if they were created for other reasons.
“attempt to” is a key phrase in your question. I don’t see much trouble with openly displayed intelligence, as long as it’s actually intelligent (correct, and directed to an agreed shared goal). Nobody much cares for show-offs or useless knowledge.
I do see a bit of resistance to “weird”, which often comes with analysis. Much of the time, but not always, that’s because the supposed-intelligent participant has done only a superficial analysis and not really attempted to understand the equilibrium that is the status quo.
High-school is … unrelated to the real world, for which I am grateful. Don’t extrapolate from what is effectively a Robbers Cave experiment that kids impose on each other in the absence of any meaningful effort/skill rewards.
I think the ingrained hostility doesn’t come from high school and the media, but from human nature, which doesn’t like it when people try to raise their status relative to yours.
But anyway, the motive of speaking the truth is different from the motive of displaying intelligence, so to the degree that someone has the second motive that is likely enough to hinder the first. So if someone has the second motive, that isn’t a good reason to be hostile, but it is a good reason to take what they say with a grain of salt.
Where are neglected causes found?
Recently a friend told me that values are important to relationship success. I met a business person the same day who claimed knowing his values got him to where he is as a social entrepreneur. Long ago, my psychiatrist asked me about my values, and I didn’t know. Psychologists have tried to help me know my values on several occasions, but I forget.
Just now I looked up how to find my values and found this article
Since their technique isn’t any good for me (my memory is shoddy), I just selected from the list of values. I hope I haven’t unconsciously chosen socially desirable options.
Grouped by theme after choosing, I chose:
leadership:
boldness
legacy
uniqueness
vision
empathy (and the lack thereof, ‘ruthlessness’)
rationality:
accuracy
prudence
intelligence
efficiency
truth-seeking
temperance
It is the year 2050 and much of the world’s soils have only five more fertile harvests remaining
Are there any good reasons to use hotmail (outlook.com online) instead of gmail apart from switching costs if you already use outlook? Outlook is associated with business and therefore carries higher status and formality, perhaps?
Why aren’t more LWers public intellectuals in the conventional sense—making appearances on radio or television news bulletins? The benefits seem obvious, if you’re okay with fame. It’s a position of influence, and it seems relatively easy to contact news organisations to say you have original research from a reputable organisation. Many of us are academics, so that’s probably true. Perhaps there is even an easier way to contact many news distributors at once to get your name out there and get offers coming to you. Something easier than manually sending out press releases, for instance. Those are probably paid PR services, but there’s probably a free service somewhere too.
The only existing ways I know are to get listed in expert databases like this one for Australia or this one for the world. I vaguely remember one run by an institute in Australia that requires experts to have completed meta-analyses or systematic reviews in their area, but it’s for consulting work, not journalists, and the institute gets a cut (but they are prestigious, so it’s good affiliation). Their name starts with K if I remember correctly. Don’t know why I tend to remember the first names of things, but I tend to be pretty accurate with it. There’s probably a mnemotechnical explanation out there that some cogpsy LWer will inform me about.
In general, having weird beliefs and politics makes it very dangerous to speak on live TV. Interviewers and editors are incentivized to make you seem crazy, scary, or ridiculous depending on where you appear. Eliezer is especially leery (justifiably so given his experiences with journalists) of this sort of thing, and he’s the most prominent LW public intellectual.
There’s also a question of need: Given that Elon Musk knows about and has given millions of dollars to the cause of AI risk, does MIRI really need to do TV publicity?
CBT is becoming less effective and (by the article author’s insinuation) is creating disability
For SSC fans, here’s an article that’s probably about the same thing, but I can’t bring myself to read the inane story at the start.
According to the first article’s author, the declining-efficacy effect is seen in psychiatry in particular, but also in medicine more generally. Interesting.
How does one deliver Interpersonal psychotherapy? It’s just as effective as CBT without the psychobabble. I can’t find information on what is actually done, however.
If you can’t find information on what’s done, why do you think there’s less psychobabble than in CBT?
What’s in the way of a large-scale prospective placebo-controlled trial of pre-exposure HIV prophylaxis?
The Talos Principle has an AI singularity plot. In it, the final test to pass is an anti-friendliness test. However, upon experiencing the story this doesn’t seem especially repugnant. Is friendliness in conflict with moral autonomy?
SNPs are not independent—a tag SNP represents a region of highly correlated SNPs.
So, can the correlations be used to correct the reported risks in Promethease to identify overall risk for a particular thing?
Does 23andMe test for highly correlated SNPs, or does it exclude them as unnecessary?
https://en.wikipedia.org/wiki/Linkage_disequilibrium
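To make the worry concrete, here is a toy sketch (mine; the rsIDs, odds ratios and the r² threshold are made up, and this is not how Promethease or 23andMe actually compute anything) of why naively combining per-SNP risks overcounts when the SNPs are in strong LD:

```python
import math

# Made-up per-SNP odds ratios for some condition, for two SNPs in one LD block
odds_ratios = {"rs0000001": 1.3, "rs0000002": 1.3}
r2 = 1.0  # made-up pairwise LD: perfectly correlated, i.e. really one signal

# Naive combination: treat the SNPs as independent evidence, add log odds ratios
naive_log_or = sum(math.log(o) for o in odds_ratios.values())

# One crude correction: keep a single "tag" SNP per group whose pairwise r^2
# exceeds some threshold, so each LD block is counted once
tag_threshold = 0.8
if r2 > tag_threshold:
    corrected_log_or = math.log(odds_ratios["rs0000001"])
else:
    corrected_log_or = naive_log_or

print(round(math.exp(naive_log_or), 2))      # 1.69 -- double-counts the block
print(round(math.exp(corrected_log_or), 2))  # 1.3  -- counts the block once
```

The real question is whether the report’s underlying studies already used tag SNPs (in which case the correlated hits are largely redundant) or not, which I don’t know.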
NZ is a libertarian paradise. Additional points missed are that there are no agricultural subsidies, and there are some other things mentioned in the comments.
Disabled people can benefit from sex. Presumably, some disabled people cannot access sex without paying for it (including the neurodevelopmentally disabled, mentally ill, etc.). There are barriers to sex workers providing for disabled clients. Unfortunately, there are compelling misconceptions that criminalizing the buying of sex is helpful to society when the evidence appears overwhelmingly on the other side, not to mention the stigma around, and limited access to information about, the rewards of sexual experience for sex workers’ clients. Further, existing advocacy for sex workers’ and their clients’ rights outside of Europe is overly gentle, rarely attacking the other side. I hypothesise that it’s because an extremely small minority of people have the prerequisite compassion, steadfastness against stigma and endurance against low status to do something that is good but won’t ‘look’ good.
It’s not a straightforward subject. Legalized prostitution in Germany has resulted in a situation where it would likely be good if the majority of brothels didn’t exist, because they are abusive to the women in them: http://www.spiegel.de/international/germany/human-trafficking-persists-despite-legality-of-prostitution-in-germany-a-902533.html
On the other hand there are people doing body work for whom the lines between sexual and nonsexual are pretty fluid. For those people a law forbidding sexual contact makes little sense.
I don’t know the details, but from reading the article it seems to me that “legalization” in this case simply meant saying “okay, it is no longer illegal”, instead of treating it as any other employment.
For example the article mentions prostitutes under 14. Did they have an employment contract? If no, then the whole situation was illegal, even if prostitution per se is legal. Keeping prostitutes locked in the basement; again, would the same situation be legal if the locked “employees” would be e.g. programmers? Etc.
Legalizing prostitution should mean treating the prostitutes as standard employees with standard employee rights (and duties: taxes, insurance), not just ignoring the whole business. The employees should be able to sue their employers, if necessary, and get legal assistance.
Simply, the whole situation should be treated exactly the same way as if some organizations decided that it is cheaper to kidnap programmers and keep them locked in a basement, making them write Java code for food, and torturing them if they refuse. We would not have a debate about whether we should make programming illegal, or merely buying Java applications illegal, or any similar frequently proposed “solution”.
Treating every situation the same way basically means that you want to ignore the empirical reality of how different situations differ from each other. It means not optimizing for the ways in which they differ.
The problem is that you have women in the brothels who are under the threat of force and therefore won’t tell the police that their rights are violated.
Of course they can do that, the legal framework gives them the possibility. A person who’s physically abused and afraid to speak out still doesn’t do this in practice.
Programmers can’t work if you drug them in a way that prevents them from thinking clearly. As a result, you simply don’t have situations where organizations put programmers in a similar state.
But they can work if they are kidnapped, imprisoned, threatened with force, afraid of police, disapproved of by much of the population, etc.; almost none of what’s special about (many) prostitutes’ situation couldn’t happen to software developers. So Viliam’s thought experiment is a good one: what would and should we do if it did?
I’m not sure it’s so obvious that people wouldn’t be calling for criminalization if, say, half of all software was made by imprisoned, blackmailed, kidnapped slaves. (Note: I have no idea what fraction of prostitutes are actually in such a situation, and wouldn’t be surprised if anti-prostitution campaigners exaggerated it on account of disapproval with a different real source.) So I don’t find Viliam’s thought experiment conclusive.
Programming involves a lot of judgment. Enslaving programmers would lead to the programmers programming very inefficiently on purpose, and there would be no way for the slaveowner to punish only the programmers guilty of slacking off. The slaveowner could try to punish all programmers who don’t produce much, but the slaveowner can’t tell the difference between a slacking programmer and a programmer just given a difficult or inappropriate job, so in the long run that wouldn’t work.
Also, the mental activity involved in programming doesn’t work well if the programmer is psychologically stressed by other things, and enslavement and blackmail tends to cause such stress.
I agree that enslaved programmers would probably make worse software, and make it slower, than not-enslaved programmers. Perhaps this is one reason why programmers are not commonly kidnapped and enslaved, or why people who have been kidnapped and enslaved are not usually then compelled to write software. (I can think of others.)
But I’m not sure how this is relevant. We already know that the world of Viliam’s thought experiment is not the real world, and it shouldn’t be a surprise that there are reasons why it isn’t. We can still ask “what would and should happen if somehow it were?”.
If you’re suggesting that Viliam’s hypothetical world is so ridiculous—because obviously slaves would make rotten programmers—that there’s no point asking that question, though, I can’t agree. I don’t think it’s any more obvious that slaves would make rotten programmers than that slaves would make rotten prostitutes, and for quite similar reasons. Sex, like programming, doesn’t work best under conditions of extreme stress.
Yes, slaves would make rotten programmers, barring some kind of society-wide slave system like the Romans had where certain types of slaves could benefit from their skills and even buy themselves out of slavery.
While it doesn’t work best, the fact that it is a physical activity sharply limits how much worse it becomes.
Yes, Viliam made an extremely poor example. No, this doesn’t affect his main point, because he could have made a better example instead. Sweatshops do exist and yet AFAIK nobody’s ever proposed to ban selling clothes for money.
Prostitution has an unusual feature: for a given level of need for money, the ratio of “how much would most people who have X have to get paid in order to be willing to sell X” to “how much money would X get if sold on the market” is extremely large, compared to a similar ratio for, say, selling one’s labor as a janitor. The dynamics of things with large ratios of this type lead to slavery and mistreatment much more often than the dynamics of things with smaller ratios of this type.
That doesn’t mean that people in other jobs can’t be mistreated; obviously, sweatshops do exist. But it does mean that mistreatment is less central for those other jobs, and is less relevant to banning them.
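A toy way to see the ratio I mean, with entirely invented numbers (the reservation prices and market rates below are mine, purely to make the claim concrete):

```python
# Compare the median price at which people would be willing to sell X against
# the going market price of X, for two kinds of work (all numbers made up).
import statistics

# Hypothetical reservation prices ($/hour): the minimum each person would accept
janitorial_reservation = [10, 12, 15, 15, 18, 20, 25]
sex_work_reservation = [100, 300, 500, 1000, 5000, 10**6, 10**9]

market_price = {"janitorial": 15, "sex work": 150}  # made-up going rates

for name, reservations in [("janitorial", janitorial_reservation),
                           ("sex work", sex_work_reservation)]:
    ratio = statistics.median(reservations) / market_price[name]
    sellers = sum(r <= market_price[name] for r in reservations) / len(reservations)
    print(name,
          "median-reservation / market-price =", round(ratio, 2),
          "| fraction willing to sell at market price =", round(sellers, 2))
```

With numbers like these the market still clears in the technical sense, but largely by non-participation: at the going rate the large-ratio good finds far fewer willing sellers.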
I don’t see why this is so.
Note that in your setup there is a market and that market, presumably, clears. This means that at the prevailing price point the supply and the demand are balanced. The observation that there could be a lot more supply at a much higher price seems irrelevant to me.
In my setup the market “clears” by there being no sales by most of the people who have X, because they are not willing to sell X at its market price. As the need for money increases, the price at which people are willing to sell X goes down, but on the average, janitorial work (for instance) reaches the point where sales happen long before prostitution does.
Any particular reason for the quotes around “clears”? The market does clear, it’s not a metaphor or anything.
Besides, consider e.g. long-range truck drivers. Most people can be one (there is no high barrier to entry) and yet very few people actually want to be one and/or work as one.
In economics terms you are talking about supply elasticity and pointing out that the supply of sex in exchange for money is locally inelastic, that is, the supply does not increase much in response to non-huge changes in price. Yeah, sure, so what? I still don’t see how you get from here to enslavement and mistreatment.
Yes, a market clearing by there not being any sales is a very non-central example of clearing.
Most people’s loathing of being a truck driver is much less than their loathing of being a prostitute.
Except that there are sales. Are you saying prostitution does not exist??
I’m not sure how this is relevant to your argument.
There are no sales for most of the people who have it.
The ratio I described is a way of formalizing “people loathe selling X, compared to Y”. If, at a given level of need for money, the ratio between the asking price for X and the market price is large for X compared to Y, then people loathe selling X compared to Y.
This is getting stupid.
Tap.
This is true for most things most of the time, and in itself hardly seems reason for the scare-quotes around “clears”.
Not all mosts are the same. “Most of the people” won’t sell sex is a much stronger “most” than “most of the people won’t sell janitorial work”, for the reason I stated.
Isn’t there some “Uber for escorts” app in Germany that mostly solves that problem?
Why should an Uber-like app solve the problem? When women get drugged and beaten if they don’t engage in prostitution, having an app to connect them to buyers doesn’t solve much.
By eliminating the necessity of the brothels as an intermediary/broker for sex
That doesn’t change the fact that the majority of women who work as prostitutes do so because they are forced. The availability of an app that allows a woman to sell her body simply doesn’t encourage many ordinary women to sell themselves as prostitutes.
Source?
The article I linked to contains the passage:
In addition to reading about the subject, I also talked to someone who years ago wanted to start a brothel in Germany and who did background research into how the industry operates. That conversation shifted my own views on the subject, because he’s not simply a feminist with a political agenda whom I don’t trust to accurately represent reality; his views are the product of contact with the ground reality.
I wonder what you would find if you surveyed ordinary workers and asked how many would stop working immediately if they could?
“Would get out immediately if they could” might mean that they’re being kept prisoner at gunpoint. Or that they are addicted to a drug that they can only get from the pimp who’s insisting that they keep working as prostitutes. Or that they don’t have any other way to earn as much money as they need (or want). Or just that like many other people they don’t like their job much.
I suspect there’s quite a lot of the third of those; in such cases I suggest that the underlying problem is poverty more than it’s prostitution, and maybe legalizing and destigmatizing prostitution makes those people’s lives better by giving them one more viable way to earn a living. (Only maybe: it could be, e.g., that prostitution is in almost all cases a much worse way to earn a living than it seems from outside, in which case making it an easier option could be doing them a big disservice.)
If you think you have been infected or potentially infected with HIV, IMMEDIATELY go to an emergency department and explain your situation. You can get a treatment that can stop you getting HIV! Here’s more information relevant to Australians. Yes, science has come this far!
Also, if you are engaging in risky sexual behaviour like having sex without a condom, guys get some of your foreskin chopped off. It reduces your HIV risk. Women note, it doesn’t reduce your risk of getting infected from an infected male.
My car seems to take a bit longer to brake, and need more effort on the brake pedal, than other cars around me and others I’ve tried. My local auto place has a brochure saying to go to them if the brake pedal feels spongy, so I told them to check it out, even though they had done a full check-up the week before. The brake pedal became stiffer than ever before, and even though it was more responsive at first, it feels like whatever improvement there was has since gone and it’s just as ‘spongy’ as before. What do I do?
How about taking the question to a car mechanics forum?
And I’m wondering if there are engineering whistleblowers on this site that I can chat with.
Spongy means air in the lines which means the lines need to be “bled.” A hard pedal with decreased effectiveness sounds like your brake power-assist is failing. With the engine off you should feel this effect after pumping the pedal two or three times. A hard spongy pedal seems contradictory.
Very few states give pedal force in lbs vs. pedal deflection standards, which I find rather unscientific. IMHO, the middle 1⁄3 of pedal travel should go from no braking to max braking, for a 100 lb. driver.
With a clutch, the middle 1⁄3 should go from no engagement to full engagement.
You’re welcome! :)
That’s a good thing :-P
Go to a competent mechanic. Spongy brakes commonly result from two causes—either your braking system has air in it (usually because there is a leak), or your brake pads and/or rotors are worn out.
Paraguayan prostitutes are free.
According to Wikisexguide you can get 20 minutes with a prostitute in Paraguay for US$0.0095 (‘50g’ in the local currency). However, in other places it’s up to $100, which is pretty normal in the Western world.
Let me say that again: 20 minutes with a Paraguayan prostitute for US$0.01.