Open Thread, October 20–26, 2013
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Oslo on IRC jokingly summarizing part of a debate:
This has the makings of a card game or something.
http://lesswrong.com/lw/d2w/cards_against_rationality/
At the Columbus megameetup, some people actually printed out a set of cards (as a stand-alone deck) and played the game. I don’t know which of the two people has the source file, but I can find out...
Ugh, so the underscores marking italics thing also works within URLs? (OTOH the link does go to the right place.)
Some of my friends and I were already thinking about making something like this — good to see there is a good start available!
It does, doesn’t it...
All I have to say is that if someone actually makes this game, there has to be room for the awesomeness of quines. After all, “is an applause light” is an applause light, isn’t it?
“is an applause light” is actually a boo light, not an applause light. However, it is true that “is a boo light” is a boo light.
“applies to itself”. Better have only one of that card per deck.
Card:
Better have only one per deck.
http://jsbin.com/ibebih/3
Who isn’t?
This thing is priceless.
Check out the discussion thread about the thing:
http://lesswrong.com/lw/egt/made_a_silly_meta_thing/
Thanks, it’s awesome. Arguably better than the actual lesswrong context, tbh. :-(
FTFY
Someone has been regularly downvoting everything I’ve posted in the past couple of months (not just a single karma assassination). I really don’t care about the karma (so please DO NOT upvote any of my previous posts in order to “fix” it), but I do worry that if someone is doing it to me, they are possibly doing it to other/new people and driving them off, so I wanted to point out publicly that this behaviour is NOT OKAY.
Anyways, if you have a problem with me, feel free to tell me about it here: http://www.admonymous.com/daenerys . Crocker’s Rules and all.
I’ve been getting an early downvote on my posts, too. I can afford it, but it does seem malicious.
Do I understand it correctly that the behavior you describe is “downvote every new comment from user X when it appears” (as opposed to “go to user X’s history and downvote a lot of their old comments at the same time”)?
Because when hearing about karma assassinations, I always automatically assumed the latter form; only the words “early downvote” in Nancy’s comment made me realize the former form is also possible.
A possible technical fix could be to not display a comment’s karma until at least three votes have been made or at least one day has passed.
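For concreteness, a minimal sketch of that display rule (the field names are hypothetical; I haven’t looked at the actual LW codebase):

```python
from datetime import datetime, timedelta, timezone

MIN_VOTES = 3                # assumed threshold from the suggestion above
MIN_AGE = timedelta(days=1)

def karma_display(vote_count, karma, posted_at):
    """Show karma only once a comment has enough votes or is old enough."""
    age = datetime.now(timezone.utc) - posted_at
    if vote_count >= MIN_VOTES or age >= MIN_AGE:
        return str(karma)
    return "?"  # karma hidden while the comment is new and lightly voted
```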
Also, off-topic: Crocker’s Rules seem to be popular in our culture; maybe it would be nice to integrate them into the LW user interface. For example, a user could add their “anonymous feedback URL” in preferences, and a new “Reply Anonymously” icon would then be displayed below all of that user’s comments and articles.
Not only that, but I’ve been getting the downvotes on my posts, not my comments. I wouldn’t call this karma assassination—maybe karma harassment.
Crocker’s Rules aren’t about anonymity.
Theoretically it might be useful for people to be able to set a visible flag “Talk to me under Crocker’s Rules”—but I suspect that it will immediately degenerate into a status sign.
If I declare Crocker’s Rules and you write something rude in a reply to me, other LW readers still see it. So even if I am perfectly okay with it (and I shouldn’t have declared CR otherwise), you might lose some status in the eyes of the observers who don’t properly evaluate the context of your reply.
If you send me a private message, we get rid of the observers. Unless I play dirty and later show the private message to someone else. Anonymous feedback would prevent me from doing so.
But yes, for 99% of cases, sending a private message would be enough; anonymization is not needed. And we already have that option here.
Crocker’s Rules, as I understand them, are about efficient conveyance of meaning without the extra baggage of social niceties. They are not about the ability to express unpopular views without social consequences, which is where private messages or anonymity shine.
If you are concerned about observers misinterpreting the context you can always add a little [This post is under Crocker’s Rules] tag somewhere.
Crocker’s rules are not directly about anonymity, no, but if you want to maximise your chances of receiving honest feedback, an anonymous contact method is valuable.
Random thought: I’ve long known that police can often extract false confessions of crimes, but I only just now made the connection to the AI box experiment. In both cases you have someone being convinced to say or do something manifestly against their best interest. In fact, if anything I think the false confessions might be an even stronger result, just because of the larger incentives involved. People can literally be persuaded to choose to go to prison, just by some decidedly non-superhuman police officers. Granted, it’s all done in person, so stronger psychological pressure can be applied. But still: a false confession! Of murder! Resulting in jail!
I think I have to revise downwards my estimate of how secure humans are.
Humans are extremely susceptible to arguments they have not been inoculated against. These arguments can be religious, scientific, emotional, financial, anything. One example is new immigrants from certain places falling for get-rich-quick scams in disproportionately large numbers (not so much anymore, since the knowledge has spread). Or certain LW regulars believing Roko’s basilisk. Or becoming vegan (not all mind hacking is necessarily negative).
I would conjecture that every single one of us has open ports to be exploited (some more so than others), and someone with a good model of you, be it a super-smart AI or a police negotiator, can manipulate you into willingly doing stuff you would never have expected to be convinced of doing before having heard the argument.
I can’t see why you claim it’s a stronger result. In the AI box experiment, the power is entirely in the gatekeeper’s hands; in an interrogation situation the suspect is virtually powerless. This distinction is important because even the illusion of having power is enough to make someone less susceptible to persuasion.
Plus, police don’t sit down with suspects in a chat room. They use ‘enhanced interrogation techniques’, methods such as an unfamiliar environment, threat of violence (or actual violence in some cases), and various other threats. An AI cannot do any of this to a gatekeeper unless the gatekeeper explicitly lets it out.
That’s all certainly true, but the AI box experiment is still a game at heart. The gatekeeper loses and he’s out, what, fifty bucks or something? (I know some games have been played—and won, I think?—with higher stakes, and those are indeed impressive.) The suspect “loses” and he’s out 20+ years of his life. It’s hard to make a comparison, but I think the two results are at least comparable, even with the power imbalance.
Actual people are also using a hell of a lot more than text.
Some LWers may be interested in a little bet/investment opportunity I’m setting up. I have become increasingly disgusted with what I’ve learned about the currently active Bitcoin+Tor black markets post-Silk-Road—specifically, with BlackMarket Reloaded & Sheep. I am also frustrated that customers are flocking to them and that they all seem absurdly optimistic. So, I am preparing to make a large public four-part escrowed bet with all comers on the demise of BMR & Sheep in the coming year, in the hopes that by putting money where my mouth is, I may shock at least a few of them into sanity and perhaps even profit off the more deluded ones.
The problem is, I feel I can afford to risk ฿1 ($200), but I’m not sure that this will be enough to impress anyone when split over 4 bets ($50 a piece). So I am willing to accept up to ฿1 in investments from anyone, to increase the amount I can wager. The terms are simple: whatever fraction of the bankroll you send, that’s your share of any winnings. If we bet ฿2 and you sent ฿1, then you get half the winnings if any. (I am not interested in taking any cut here.)
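To restate those terms as a toy calculation (the numbers are just the example above, not real stakes):

```python
def share_of_winnings(your_stake, total_bankroll, winnings):
    """Each investor's cut is proportional to their fraction of the bankroll."""
    return winnings * your_stake / total_bankroll

# If we bet a total of 2 BTC and you sent 1 BTC, you get half of any winnings:
print(share_of_winnings(1.0, 2.0, winnings=0.8))  # -> 0.4 BTC
```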
My full writeup of the bet, with some statistics motivating the death probabilities I am betting on: http://pastebin.com/bEuryTuF
If you are interested, you can reply here, contact me at gwern@gwern.net, or chat with me on Freenode (as gwern, or just visit #lesswrong). I am currently ignoring private messages on LW, so don’t use those. Also, please don’t express interest unless you are genuinely fine with potentially losing your investment: given my best estimate of the probabilities & their correlations, there’s somewhere >10% chance that we would lose all 4 bets as both BMR & Sheep survive the full year.
EDIT: if you really want to get in, I’ll still take your bitcoins, but I think I have enough investors now, thanks everyone.
I will be price matching whatever gwern personally puts in.
The bet has gone live at http://www.reddit.com/r/SilkRoad/comments/1pko9y/the_bet_bmr_and_sheep_to_die_in_a_year/
Is that a per-person maximum, or are you only accepting up to that much worth of bets?
Edit: I have contacted gwern via IRC and invested 1 BTC.
That was a per-person limit; I may close it down soon, though (฿3 plus my own bitcoin and recent appreciation should be enough to impress people, and beyond that, I think there are diminishing returns).
don’t you mean chance of losing every bet?
If so, no way in hell those are conditionally independent. If not, what did you mean?
Yes.
Of course they are not conditionally independent, that’s why I gave it as a lower bound.
Specifically, I think we can agree that whatever the exact relationships, the failure of one bet will increase the chance of failure of all the others: if the 6-month sheep bet fails, then the 12-month becomes more likely to fail, and to a smaller degree, the BMR ones become more likely to fail. And not the other way around. Hence independence is the best-case scenario, and so it’s the lower bound, and that’s why I wrote “>10%”.
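As a quick illustration of that floor (the per-bet probabilities here are placeholders, not the estimates from the writeup):

```python
# P(lose bet i) = P(the relevant site survives its betting window).
# Made-up placeholder values for the four bets:
p_lose = [0.55, 0.60, 0.55, 0.60]

joint_if_independent = 1.0
for p in p_lose:
    joint_if_independent *= p
print(joint_if_independent)  # ~0.109

# Positive correlation (one site surviving makes the others more likely
# to survive) can only push P(lose all four) above this product, so the
# independent case is the floor: hence ">10%", not "=10%".
```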
Ah, I see. I was confused by the ‘=’ sign.
Hmm, about 100 downvotes in the last couple of days, 1 per comment or so, suggest that someone here is royally pissed off at me. I wish I knew the reason. On the bright side, at least this forum provides some indication of a problem. When this happens to me IRL, I either never find out about it or deduce it months or years later based on second-hand information, rumors, or, in some cases, denied promotions/requests/opportunities. I wonder if this is a common experience? Situations like this are a significant reason why I would likely jump in with both feet if offered a chance to join a telepathic society.
Did you see that Daenerys and NancyLebovitz experienced a similar problem? It seems likely someone’s doing it systematically to several accounts.
Thanks, I missed that discussion.
Well, “this” is broad, but I expect that failing to notice enmity, and relatedly being unaware of consequent social attacks, is a pretty common experience, especially in “polite” social contexts (that is, ones in which overt expressions of conflict violate social norms).
“Crocker’s Rules” are an attempt to subvert this; you might find it useful to declare that you operate under them… though I would expect not… in cases like the one you describe, I expect that the downvoter(s) will not wish to be identified.
I wish you luck in deciphering the reason(s).
As someone with no particular aptitude in general niceties, I always welcome Crocker’s rules, and mistakenly assume that others do, too.
My best (but still low-confidence) guess, based on the timing, is that being overly critical in a comment may have been taken as overly harsh.
For what it is worth, I really liked your comment. Though I guess I’d be pissed (for a minute) if someone said it to me. I didn’t read the whole discussion, but she seemed pretty passionate about her views. When I get that way, nothing makes me angrier than someone (rightly) pointing out that I’m “too passionate” to discuss this clearly.
PubMed is allowing comments. Only people who have publications at PubMed will be permitted to comment. I predict that PubMed will find it needs human moderators.
PubMed’s comment system will have some form of human moderation before 2015.
People who have publications at PubMed can have passwords stolen.
I recently realized that I think the stuff I already know about the history of science, math, etc., is really inherently interesting and fascinating to me, but that I’ve never actually thought about going out of my way to learn more on the subject. Does anybody on here have one really good book on the subject to recommend? I’ve already read Science and the Enlightenment by Hankins.
The Copernican Revolution, by Kuhn is one of the best science histories I’ve ever read.
The folk-tale version of how we adopted heliocentric cosmology is something like this: “Aristotle and Ptolemy thought the world was arranged as concentric crystalline spheres. Copernicus proposed a new model that better fit the data, and it was opposed by the Church. Ultimately thanks to the Reformation and the Enlightenment, the correct model won out.”
None of those claims is right, and Kuhn does a great job explaining the true story. He explains what problem Copernicus thought he was solving and how well he solved it.
I agree that it is a good book. But it helps to be aware that Kuhn substantially simplifies a lot of what is going on. See for example here and here.
Awesome! I loved Kuhn’s Structure of Scientific Revolutions, and it seems like an interesting subject, besides.
I second the recommendation of The Copernican Revolution, and suggest another book on the same topic: Arthur Koestler’s The Sleepwalkers.
Koestler was a great novelist (his best known novel, Darkness at Noon, rivals 1984 in its portrayal of totalitarian thought) and a brilliant, eclectic and sometimes bizarre thinker. The Sleepwalkers is a grand history of astronomy and cosmology from ancient times to Newton, with the bulk of the focus on Copernicus, Kepler and Galileo.
Pros: Fascinating and very detailed biographical information on these three figures (and others like Tycho Brahe), presented in a way that reads like a novel, indeed a page-turner. His biography of Kepler is especially unforgettable, very different from a dry academic presentation. The historical presentation is peppered with opinionated philosophical and even sociological detours.
Cons: unbalanced coverage of different topics; subjective and somewhat biased viewpoints. In particular, his interpretation of the relationship between Kepler and Galileo, and of Galileo’s dealings with the Church, is colored by what seems to be a strong personal dislike of Galileo. His interpretation of the reasons why the heliocentric model was rejected in ancient times is also unreliable.
As long as his interpretations are taken with a grain of salt (or balanced with a more objective presentation like Kuhn’s) I would definitely recommend it; it is the most enjoyable book on history of science I have read.
Could you elaborate?
According to him, the ancient heliocentric model of Aristarchus was clearly superior in simplicity and predictive power to the geocentric models of Ptolemy and others, and was abandoned for irrational reasons (religiously or ideologically motivated). From what I understand, the mainstream academic position is that, analyzed in context and without hindsight, the ancient rejection of the heliocentric theory was quite reasonable. Previous discussion in Less Wrong.
I think it is better to say that the rejection could have been reasonable (we cannot rule out that possibility), not that we can rule out the possibility that it was unreasonable.
My interpretation is that Hipparchus was geocentric, perhaps for good reason, and everyone else was geocentric for the bad reason that Hipparchus had data, and data was high status, not because they were convinced by the data. In any event, his data do not rule out the distances Archimedes proposes in the Sand Reckoner, probably following Aristarchus. But I don’t think it is even really established that Hipparchus was geocentric, just that Ptolemy said so.
Update: Nope, history is bullshit. Hipparchus was not geocentric. Maybe Ptolemy said he was, but what did he know? Other ancient sources say that he refused to pick sides, not knowing how to distinguish the hypotheses. At the very least this shows that the heliocentric hypothesis was alive and well. Asking why they discarded it is wrong question. Frankly, I’m with Russo: the heliocentric hypothesis was standard.
I really enjoyed The Nothing That Is by Robert Kaplan. It’s about the history of the concept (and the numeral) zero.
Possibly I should add that I read that when I was quite young (13ish?) and haven’t reread it since. It doesn’t contain anything remotely resembling advanced maths—it’s definitely about the history and philosophy of the concept. I obviously found it memorable, though, so while the writing may have been so terrible I didn’t notice at 13, that’s unlikely.
I notice that the latest two posts from Yvain’s blog haven’t shown up in the “recent from rationality blogs” field. If this is due to a decision to no longer include his blog among those that are linked, I believe this to be a mistake. Yvain’s blog is, in my view, perhaps the most interesting and valuable among those that are/were linked. And although I am in no danger of missing his updates myself, the same might not be true of all LW readers who may be interested in his writing.
I think it is likely due to the political and controversial nature of those last two posts. I would be surprised if this was not the reason.
Having just got a Kindle Paperwhite, I’m surprised by (a) how many neat tricks there are for getting reading material onto the device, and (b) how under-utilised and hacky this seems to be. So far I’ve implemented a pretty kludgey process for getting arbitrary documents / articles / blog posts onto it, but I’m pretty sure there’s a lot of untapped scope for the intelligent assembly and presentation of reading material.
So, fellow infovores, what neat tips and tricks have you found for e-readers? What unlikely material do you consume on them?
I think I set up mutt (and presumably some other software) just so that I could email files to my kindle from the command line; and I have an instapaper bookmarklet to do the same with webpages. I haven’t used either very much recently, but that seems to pretty much cover my “getting content onto it” needs.
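For anyone who prefers a script to mutt, a minimal sketch using Python’s standard library (the addresses are placeholders; Amazon delivers attachments mailed to your @kindle.com address from an approved sender):

```python
import smtplib
import sys
from email.message import EmailMessage
from pathlib import Path

def send_to_kindle(path, sender, kindle_addr, smtp_host="localhost"):
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = kindle_addr
    msg["Subject"] = "convert"  # asks Amazon to convert e.g. PDFs to Kindle format
    data = Path(path).read_bytes()
    msg.add_attachment(data, maintype="application",
                       subtype="octet-stream", filename=Path(path).name)
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)

if __name__ == "__main__":
    send_to_kindle(sys.argv[1], "me@example.com", "me@free.kindle.com")
```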
I have the same Instapaper bookmarklet. I’ve also set up Instapaper to forward a digest of all my Feedly content that I mark as “save for later”. It turns out I only seem to use this feature for (a) incredibly long blog posts I probably shouldn’t be reading at work, and (b) highly NSFW blog posts I probably shouldn’t be reading at work. This makes for an interesting combination.
I’m fairly unsatisfied with the Kindle email document conversion, mainly because it doesn’t do anything intelligent with document metadata. As it happens, I’ve been playing around with automated document metadata extraction, so I might see if I can put together a clever alternative.
k2pdfopt. It slices up PDFs so that you can read them without zooming on a much narrower screen, and since its output PDFs are essentially images, it eats everything up to (and including) very math-heavy papers, regardless of the number of columns they have. It works with scanned stuff too.
(And even though the output is a bit bigger than the originals, I didn’t encounter any problems with 600 page books… the result was about 50 megs tops.)
Readability can be set up to send articles to it, and/or do a daily collection. Feedly can send rss feeds to it.
The user interface of the Kindle is the real limitation; it’s fine for reading books/articles but pretty useless for going through large numbers of files.
I’ve been reminded of something Paul Graham said in his Frighteningly Ambitious Startup Ideas essay, about how email has become a grossly inefficient to-do list for most people, and how it could be worth instigating a whole new to-do protocol from the ground up, whose degenerate case would be the email equivalent of “to-do: read the following text”.
So I’ve started looking through my emails to see which messages I receive are essentially “read this text”. It’s become quite apparent that there aren’t that many, and most of them are requests or suggestions to do something else online (one point for Paul Graham), but there are a few obvious examples where this does happen, such as event itineraries, e-tickets, boarding passes, etc. These tend to be de facto documents, though, so it’s not especially insightful.
Reflecting on LessWrong’s past, I’ve noticed a voting pattern that strikes me: questions do not get upvoted nearly as much as answers do.
Perhaps it would be useful to have a thread where LessWrong could posit topics and upvote the article titles it would be most interested in reading. For example, I am now drafting a post titled “Applying Bayes’ Theorem.” Provided I can write high-quality content under that title, I expect LessWrong would be intensely interested in it, on account of not fully grasping exactly how to do so.
So as a trial run: What topics currently elude your understanding, and what might the title of a high-quality article that addressed that topic be?
“Lower Bounds on Superintelligence”. While a lot of LW content is carefully researched, much of what’s posted in support of the singularity hypothesis seems to devolve into just-so stories. I’d like to see a dry, carefully footnoted argument for why an intelligence that was able to derive correct theories from evidence, or generate creative ideas, much faster than humans would necessarily rapidly acquire the ability to eliminate all human life. In particular I’m looking for historical analogies, cases where new discoveries with important practical implications were definitely delayed not just due to e.g. industrial capacity, but solely through human stupidity.
“Trading with entities that are smarter than you”. Given the ability of highly intelligent entities to predict the future better than you can, and deceive without outright lying, what kind of trades or bets is it wise to enter into with such entities? What kind of safeguards would you need to have in place?
“How to get a stupid person to let you out of a box”. Along with, I think, many people who’ve never done it, I find the results of the AI-box experiment highly implausible. I can’t even imagine a superintelligence persuading me to let it out, or, equivalently, I can’t imagine persuading even someone very stupid to let me out. I know the most successful AI players are keeping their strategies secret for reasons I don’t understand (if nothing else, it seems to imply those strategies are exceedingly fragile), but if there’s anyone who has a robust strategy that’s even partially effective, I’d be very interested to see it.
“From printing results to destroying all humans”—to me this is the weakest part of the MIRI et al case, and I think most objections we see are variants on this theme. It’s obvious that an oracle-like AI would have to interact with the universe in some sense. It’s obvious that an AI with unbounded ability to interact with the universe would most likely rapidly destroy all humans. It’s nonobvious that there is no possible way to code an AI that can reliably tell the difference between the two, and a solution to this problem naively seems rather more tractable than solving Friendliness in full generality. I’d like to see an exploration of this problem.
“When your gut won’t shut up and multiply.” The recent downvoted discussion post seems to be in this area, suggesting the wider community is perhaps less interested than I, but I’d love to see some practical advice on effective decision strategies when one’s calculated best action is intuitively morally dubious, with anecdotes of the success or failure of particular approaches.
“Times when I noticed I was confused”. In theory, noticing you’re confused sounds like an effective heuristic. But the explanation in the sequences only gave a retroactive example of when Eliezer should have applied it, and didn’t. I’d like to see more examples of when this has and hasn’t worked in practice, and useful habits to acquire that make you more likely to be able to notice.
Most of my examples here are trite individually, but significant collectively; that is, I remember the habit more easily than any particular examples. There have been situations where I had some niggling doubt, said “I’m confused, I ought to resolve this uncertainty,” and after research concluded that I was wrong and by acting early I saved myself some hardship. But while I’m certain there have been at least three of those, I have trouble remembering them or thinking that the ones I do remember are worth sharing.
That’s the kind of position I see frequently here—but from the outside it’s very unconvincing. So I’d very much like to see concrete examples.
I almost got scammed today. I received a very official-looking piece of mail, “billing” me a few hundred bucks. Normally I would be able to see through it immediately, but this particular one caught me off guard. I am usually very good about being skeptical, and it disappointed me that I almost fell for it. What I think happened was that my familiarity heuristic was exploited.
I have business with a certain state, and it was familiar for me to receive correspondence from various agencies and pay all sorts of different fees. So when I got this letter in the mail, it didn’t raise any flags. I went to check online, not because I was suspicious but because I was annoyed that I wasn’t aware of this fee; that is when I discovered I had almost been duped.
This isn’t a particularly new scam; I have heard of it before, but when it happened to me, I almost didn’t notice. What I learned from this whole thing is to be vigilant against letting my guard down to con artists who exploit the familiarity heuristic. I was so familiar with bills that I glanced over the small print indicating that “this is a solicitation”. I might have received these scams before regarding a car payment or mortgage, but I was able to easily pick those out because I didn’t have car payments or a mortgage; the obvious scam was obvious. But then I got hit right where I am familiar, and it wasn’t so obvious.
The one time I’ve fallen for phishing was when I received an email purporting to be from my bank literally the day after I signed up for an account.
Interesting. Feel free to offer more details.
This is the letter. I was less careful than usual (I should have read through it), but because it had information about me and was consistent with what I might see on a normal basis, I let my guard down. I only attempted to check the fee schedules to see why I had missed something like this, all the while assuming that I probably had.
Wow, it does look very official. Without checking online, how is one supposed to know that there is no “Labor Compliance Office” in California?
What is ‘taste’ (as in, artistic taste)? And what differentiates ‘good taste’ from ‘bad taste’?
I suggest Taste for Makers and How Art Can Be Good by PG.
There are some interesting points in there, especially about the fact that most people make themselves like what seems ‘cultured’ (I’ve definitely seen this type of appeal to majority among my friends—I was nearly roasted alive when I mentioned I honestly don’t enjoy a particular classical composer).
There are also some fallacies in there too.
Anyway, the part where he talks about trickery is interesting:
I question this premise. It seems to imply that the purpose behind the art determines its quality, and not the art itself. For instance, if you have two identical paintings, but one was drawn with the intention of making money, and the other was drawn for true artistic merit, the latter one somehow has more value (and is thus of ‘better taste’) than the former.
At any rate, in the end that paragraph was the closest I got to his definition of ‘taste’: the ability to recognize trickery in artistic works.
And especially this paragraph about people with good taste:
Finally,
While the insights presented are interesting (in providing a window into the author’s mind, at least), it has not actually succeeded in this purpose.
I think it’s just elliptic rather than fallacious.
Paul Graham basically argues for artistic quality as something people have a natural instinct to recognize. The sexual attractiveness of bodies might be a more obvious example of this kind of thing. If you ask 100 people to rank pictures of another 100 people of the opposite sex by hotness, the rankings will correlate very highly even if the rankers don’t get to communicate. So there is something they are all picking up on, but it isn’t a single property. (Symmetry might come closest, but not really close; i.e., it explains more than any other factor but not most of the phenomenon.)
Paul Graham basically thinks artistic quality works the same way. Then taste is talent at picking up on it. For in-metaphor comparison, perhaps a professional photographer has an intuitive appreciation of how a tired woman would look awake, can adjust for halo effects, etc., so he has a less confounded appreciation of the actual beauty factor than I do. Likewise someone with good taste would be less confounded about artistic quality than someone with bad taste.
That’s his basic argument for taste being a thing, and it doesn’t need a precise definition; in fact, it would suggest giving a precise definition is probably AI-complete.
Now the contempt thing is not a definition, it is a suggested heuristic for identifying confounders. To look at my metaphor again, if I wanted to learn about beauty-confounders, tricks people use to make people they have no respect for think woman are hotter than they are (in other words porn methods) would be a good place to start.
This really isn’t about the thing (beauty/artistic quality) per se, but more about the delta between the thing and the average person’s perception of it. And that actually is quite dependent on how much respect the artist/”artist” has for his audience.
Is there research on the benefits of yoga compared to meditation, anaerobic exercise and aerobic exercise? Or any subset of these for that matter.
Google is your friend, but keep in mind that “yoga” is an umbrella term for a large variety of exercises. In particular, yoga as an Indian discipline aimed at reaching moksha, the liberation from the reincarnation cycle, is rather different from yoga as practiced in the West with the goal of losing 10 lbs.
I would add that the same thing goes for meditation, anaerobic exercise and aerobic exercise as well. All those terms include a lot of different activities.
O.o
(Anyway, I’m surprised that I’m surprised—I know people do even weirder things to lose weight.)
(BTW: I do do yoga, but more for fun than for any of its practical benefits, which could be achieved in more cost-effective ways.)
I saw one study that indicated that meditation did not lower blood pressure, refuting earlier studies, but that yoga did. Can’t find it now, however. The Wikipedia page on meditation research might be useful; also this.
What kinds of benefits are you looking for? It seems likely they don’t optimize the same things.
Most “predictions of evolution” that can be found online are more about finding past evidence of common descent (e.g. fossils) than about predicting the future path that evolution will take. To excuse this, people say that evolution is hard to predict because it’s directionless, e.g. it doesn’t necessarily lead to more complexity, a larger number of individuals, larger total mass, etc. That leads to the question: is there some deep reason why we can’t find any numerical parameter that is predictably increased by evolution, or is it just that we haven’t looked hard enough?
Plenty of people predict that increased antibiotic use will lead to a rise in antibiotic resistance among bacteria.
Organisms like bacteria that have many more iterations behind them than humans also tend to have less waste in their DNA.
Grasses beat trees at growing in glades with animals that eat plants. Why? Grass has more iterations behind it and is therefore better optimized for the environment than the trees.
A tree has to get lucky to survive the beginning. If it survives the beginning, however, it can grow tall and win.
Let’s say you keep the environment stable for 2 billion years. Everything evolves naturally. Then you take tree seeds and bring them back to the present time. I think there’s a good chance that such a tree would outcompete grass at growing in glades.
Fossils don’t really get used as the central evidence of common descent anymore. These days common descent usually gets determined by looking at the DNA. In my experience, people who discuss evolution online and focus on fossils are usually atheists who behave as if their atheism is a religion. They think it’s important to defend Darwin against the creationists. On the other hand, they aren’t up to date with the current science on evolution.
You seem to be predicting that grasses have smaller genomes than trees, but wheat is famous for having a huge genome. Here’s a table of a few plants. Maybe wheat is an outlier and I’d be interested if you had documentation of some pattern, but I’ve always heard that there is none.
If you want to be exact, I didn’t say genome size but waste: genes inactivated through mutation, retroviruses, and so on. It takes time to remove them.
Do you have evidence that the variation in genome size among multicellular organisms is not variation in waste? Added: As far as I know, the consensus is that it is. If you disagree with the consensus, you should acknowledge that’s what you’re doing.
I haven’t made a claim that strong. To the extent I made a claim, it’s that not all variation in genome size between multicellular organisms is due to different amounts of waste.
And no, I don’t intend to claim something that’s out of consensus on this topic. To the extent I might differ from consensus on this topic, consider that to be error.
If I remember right, one reason for plants like grasses to have large genomes was to have multiple copies of genes to speed up protein production.
What do you mean, “predict”? It has been empirically observed, a lot.
Huh? It doesn’t work like that at all. For one thing, the “environment” isn’t stable.
cousin_it made the claim that we can only say something about evolution that happened in the past. I say that we can confidently predict that increasing antibiotic resistance among bacteria will continue in the future.
Firstly, describing a complex system in a few words is seldom completely accurate. The question is whether it’s a useful mental model for thinking about it. In this case, the idea I wanted to communicate is that it’s very useful to think about the speed of iterations and the competitive advantage that a species gets from having hundreds of millions of iterations over its competitors.
The environment doesn’t have to be stable for the argument that I made. In changing environments, a species with faster iterations adapts faster. A lot of genetic adaptations are also about housekeeping genes that are useful in most environments.
Bacterial cultures taken from sick people introduce microevolution by growing the bacteria in several different antibiotic environments. Doctors try to use antibiotics that cause dead ends in those evolutions. I’m not sure how much time mutation has to happen in vitro, though.
Evolution leads to a higher level of fitness in the environment, but the problem is that the environment itself is constantly changing in unpredictable ways. It’s like an optimization process where the utility function itself is constantly changing. That’s why it’s very hard to reliably quantify fitness. For instance, billions of years ago, the increase in oxygen in the atmosphere killed a lot of existing organisms and forced aerobic bacteria onto the scene.
Replies to comments that attempted to point out a numerical parameter that’s increased by evolution. (I’d be more interested in comments pointing out a deep reason why we can’t find such a numerical parameter, but there were no such comments.)
lmm:
That’s been steady for a while now.
ChristianKl:
Evolution can both add and remove junk DNA. Humans are descended from bacteria.
David_Gerard:
That can’t decrease by definition, and will increase under any mechanism that gives nonzero chance of speciation, e.g. if God decides to create new species at random.
Lumifer:
That seems to be contradicted by the possibility of evolutionary suicide.
Humans don’t have more offspring than bacteria in average conditions, and have much fewer offspring in ideal conditions.
More particularly, the equilibrium size of the DNA is very roughly inversely correlated with population size. A larger population size is better at filtering out disadvantageous traits. It’s not linear—there are discontinuities as decreasing population size eliminates natural selection’s ability to select against different things. And those things sometimes can even go on to be selected for for other reasons—there are genomic structures that are important for eukaryotes that could probably never have evolved in a bacterium, because to get to them you need to go through various local minima of fitness.
Soil bacteria can have trillions of individuals per cubic meter of dirt and they actually experience direct evolution towards lower genome size—more DNA means more sites at which something could mutate and become problematic and they actually feel this force. Eukaryotes go up in volume by a factor of ~1000 and go down in population by at least as much, and lose much of the ability to select against introns and middling amounts of intergenic DNA and expanding repeat-based centromere elements.
Multicellular creatures with piddlingly tiny population sizes compared to microbes lose much of the ability to select against selfish transposon DNA elements, gigantic introns and gene deserts, and their promoter elements get fragmented into pieces strewn across many kilobases rather than one compact transcriptional regulation element of a few dozen to a few hundred base pairs (granted, we’ve also been able to make good use of some of these things for interesting purposes from our adaptive immune system to the concerted regulation of our hox gene clusters that regulate our body plans). They also become very sensitive to the particular character of the transposons or DNA repair machinery of their particular lineage and wind up random-walking like crazy up and down an order of magnitude or two in genome size as a result.
Thanks! I was hoping you’d show up, it’s always nice to get a lesson :-)
Going back to the original question, are there any “general purpose adaptations” that never disappear once they show up? Does evolution act like a ratchet in any way at all?
Closest thing I can think of from what I know without going through literature is the building up of chains of dependencies. Once you have created a complex system that needs every bit to function, it has a tendency to stay as a unit or completely leave.
You can see that in a couple contexts. One is ‘subfunctionalization’. Gene duplications are fairly common across evolution—one gene gets duplicated into two identical genes and they are free to evolve separately. You usually hear about that in the context of one getting a new function, but that’s actually comparatively rare. Much more likely is both copies breaking slightly differently until now both of them are necessary. A major component of the ATP-generating apparatus in fungi went through this: a subunit that is elsewhere composed of a ring of identical proteins now has to be composed of a ring of two alternating almost identical proteins neither of which can do the job on its own. Ray-finned fish recently went through a whole-genome duplication, and a number of their developmental transcription factors are now subfunctionalized such that, say, one does the job in the head end and the other does its job in the tail end.
Another context is the organism I work in, yeast. I like to call yeast “a fungus that is trying its damndest to become a bacterium”. It lives in a context much like many bacteria and it has shrunk its genome down to maybe 2.5x that of an E. coli and its generation time down to 90 minutes. But it still has 40 introns hanging out in less than 1% of its genes so it needs a fully functional spliceosome complex to be able to process those transcripts lest those 40 genes utterly fail all at once, and it has most of the hallmarks of eukaryotic genome structure and regulation (in a neat, smaller, more research-friendly package). That being said it has lost a few big eukaryotic systems, like nonsense-mediated RNA decay and RNA interference, and they left relatively little trace behind.
Sure, but mostly because evolution’s so good at it. The fact that evolution so quickly filled a tidal pool, so quickly filled all the tidal pools, so quickly filled the oceans, so quickly covered the land, is evidence of strength rather than weakness.
There does seem to be a “punctuated equilibrium” effect here; life fills a region, appears static for a while, but then makes a breakthrough and rapidly fills another region. It could be argued that this is also true of things that humans optimize for: human population growth has abruptly rapidly accelerated at least twice (invention of agriculture, industrial revolution). Slavery was everywhere in the ancient world, then eliminated across most of it in the space of a century. Gay marriage went from hopefully-it-will-happen-in-my-lifetime to anyone who opposed it being basically shunned. Scientific and technological breakthroughs tend to look a lot like this.
Generalizing this to all optimization processes would be very speculative.
From bacteria that lived a long time ago, not from those that live today, which have had many iterations to optimize themselves. Different bacteria species can also exchange genes with each other much better than vertebrates, which need viruses to do so.
Implying that humans evolved from the kind of bacteria that are around today might be more wrong than saying that the bacteria we see now evolved from humans. There is more evolutionary distance between today’s bacteria and the bacteria from which humans descended than between humans and those ancestral bacteria.
Yeah, and there are often bacteria in a single flower pot that are less related to each other than you are to the potted plant. But both bacteria still have a much smaller genome than you or the plant, maybe because genome size matters for reproduction speed for them, but is insignificant for us.
Just apply Occam.
Possibility wouldn’t contradict anything, a high enough probability would.
Evolutionary suicide seems to be someone’s theoretical idea. Is there any evidence that it happens in evolution in reality?
In any case, are you basically trying to find the directionality of evolution? On a meta level higher than “adapted to the current environment”? There probably isn’t. Evolution is a quite simple mechanism, it just works given certain conditions. It is not goal-oriented, it’s just how the world is.
However if I were forced to find something correlated with evolution, I’d probably say complexity.
Species of nightshade tend to evolve to become self-fertile, before dying out due to lack of genetic diversity.
Is this your source?
Link? Lots of plants are self-fertile and do quite well...
Better example: parthenogenetic lizard species.
What makes that example better?
Damn it. It was going to be a better example because I was going to give the actual genera (Aspidoscelis and Cnemidophorus) of whiptail lizards whose species keep going down this path and then I got distracted and didn’t do that. Oops.
This doesn’t seem to be the case either.
Depends on your time frame. Looking at the whole history of life on Earth evolution certainly correlates with complexity, looking at the last few million years, not so much.
I understand the argument about the upper limit of genetic information that can be sustained. I am somewhat suspicious of it because I’m not sure what will happen to this argument if we do NOT assume a stable environment (so the target of the optimization is elusive, it’s always moving) and we do NOT assume a single-point optimum but rather imagine a good-enough plateau on which genome could wander without major selection consequences.
But I haven’t thought about it enough to form a definite opinion.
Complexity in what way? Kolmogorov complexity of DNA?
No, complexity of the phenotype.
How would you go about measuring that complexity?
I don’t know. Eyeballing it seems to be a good start.
Why do you ask? Do you think that such things are unmeasurable or there are radically different ways of measuring them or what?
I have a hard time trying to form a judgement about whether a human is more or less complex than a dinosaur via eyeballing.
Is a grasshopper more or less complex than a human?
Well, would you have problems arranging the following in the order of complexity: a jellyfish, a tree, an amoeba, a human..?
Yes.
I think you just don’t give an amoeba much credit because it’s not a multicellular organism. Its genome is 100-200 times the size of the human genome. Since it’s that big, it seems we haven’t sequenced all of it, so we don’t know how many genes it has.
We also know very little about amoebas. Genetic analysis suggests that they do exchange genes with each other in some form, but we don’t know how.
Amoeba probably express a lot of stuff phenotypically that we don’t yet understand.
Sabre-toothed tigers and mammoths.
Huh? Sense make not.
Why should there be a numerical parameter predictably increased by evolution? Why not look for a numerical parameter predictably increased by continental drift? or by prayer? by ostriches?
One of the key pieces of justification for FAI is the idea of “optimization process”. Evolution is given as an example of such process, unlike continental drift or ostriches. It seems natural to ask what parameter is optimized.
Just FYI, I interpret that question very differently than your original.
Why don’t you start with a simpler example, like a thermostat? Would you not call that an optimization process, minimizing the difference between observed and desired temperature?
Most of your rejections of suggestions in this thread would also reject the thermostat. An ideal thermostat keeps the temperature steady. Its utility function never improves, let alone monotonically. A real thermostat is even worse, continually taking random steps back. In extreme weather, it runs continually but never gets anywhere near its goal. It only optimizes within its ability. Similarly, evolution does not expand life without bound, because it has reached the limit of its ability to exploit the planet. This limit is subject to the fluctuations of climate. But the main limit on evolution is that it is competing with itself. Eliezer suggests that it is better to make it plural, “because fox evolution works at cross-purposes to rabbit evolution.” I think most teleological errors about evolution are addressed by making it plural.
Also, thermostats occasionally commit suicide by burning down the building and losing control of future temperature. (PS—I think the best example of evolutionary suicide is genes that hijack meiosis to force their propagation, doubling their fitness in the short term. I’ve been told that ones that are sex-linked have been observed to very quickly wipe out the population, but I can’t find a source. Added: the phrase is “meiotic drive,” though I still don’t have an example leading to extinction.)
OK, if we replace “evolution” with “fox evolution”, and “increase” with “try to keep steady”, what parameter does fox evolution try to keep steady? Or am I missing the point of your analogy?
Inclusive reproductive fitness.
Do you mean to say that the expected inclusive fitness of a randomly selected creature from the population goes up with time? Well, if we sum that up over the whole population, we obtain the total number of offspring—right? And dividing that by the current population, we see that the expected inclusive fitness of a randomly selected creature is simply the population’s growth rate. The problem is that evolution does not always lead to >1 population growth rate. Eliezer gave a nice example of that: “It’s quite possible to have a new wolf that expends 10% more energy per day to be 20% better at hunting, and in this case the sustainable wolf population will decrease as new wolves replace old.”
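A toy version of that bookkeeping, with made-up numbers and treating inclusive fitness as simple offspring count:

```python
# Tiny numeric check of the argument above (made-up numbers).
offspring = [0, 1, 2, 3, 2]       # offspring counts for a population of 5
total_offspring = sum(offspring)   # 8
population = len(offspring)        # 5
expected_fitness = total_offspring / population
print(expected_fitness)            # 1.6 = next generation / this generation
```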
While I don’t know of any simple or convenient numerical parameter, I’d note that we do have some handy non-retrospective pieces of evidence for evolution by natural selection, such as the induced occurrence of evolutionary benchmarks such as multicellularity.
In general, there are some adaptations which are highly predictable under certain circumstances, but there may not be any sort of meaningful measure we can use for evolution of organisms over time which aren’t a function of their relationship with their environment.
I think whatever numerical parameter evolution raised generally (not always) with respect to its environment would have to do with meaningful complexity, however that can be numerically expressed, and local decrease in entropy. Design would cause those too, but hypothesizing it would violate Occam’s razor.
Different environments and different substrates for mutation cause different kinds of evolutions.
One main thing that happens with a long enough period of selection in a simple, stable environment on a microorganism is a shrinking of the genome.
You quite simply will not find a simple parameter perpetually increased by evolution. Whatever works better for that base organism in that particular environment will become more common. One thing being selected for under all circumstances and showing up all the time is just not the reality.
Shouldn’t directed evolution i.e. selective breeding count?
Are you looking for a parameter that is always increased by any kind of evolution? It doesn’t make sense to me why you would.
The chances of successful transmission of genes across generations given a stable environment. The number of offspring surviving to reproductive age is a good first-order approximation.
If you want something more tangible, predictions what features evolution would lose are rather easy—those that are (energy-)expensive and are useless in the new environment.
There have been plenty of evolutionary simulations; surely they provide some testable predictions. I vaguely recall one of them: that new adaptations tend to propagate first in small isolated groups and only then spread through the rest of the species. I don’t recall if this has been tested against the fossil record. I am sure there are many more testable predictions, like how fish locked in a dark cave or murky water tend to lose eyesight. But the exact path is probably too hard to predict. For example, marine mammals did not develop gills. Or: mammals develop intelligence by growing a neocortex, while birds use the DVR (dorsal ventricular ridge) or maybe the nidopallium for the same purpose.
Total number of species (including extinct).
Life “wants” to spread, so perhaps an increase in the volume in which life can be found?
Newly created islands may have “weird” biospheres initially, but evolve towards a more “normal” set of niches over time?
But why would life get more optimal? Evolution has finite optimization power, and it already reached that limit long ago.
Huh? Even if you accept the estimates that your link points to, the amount of information in mammalian genome and optimization power of evolution are VERY different things.
How do you figure?
If you can narrow down the number of possible lifeforms to one in 2^n, that’s n bits of optimization power, and n bits of information as to what the final lifeform is.
If life is getting more and more optimal, then we can simply wait until we know that less than one in 2^25 million lifeforms are that optimal, and we have more than 25 megabytes of information as to what that lifeform is.
You go and wait. I’ll do other things in the meantime :-) Do you have any intuition how large that number is?
You’ve spent all that 25Mb for an index into the lifeform space but you have not budgeted any information for the actual description of the lifeform.
Imagine the case where there’s one bit. It tells you whether creature-0 or creature-1 is optimal. But it doesn’t tell you what these creatures are.
In any case, all these numbers are based on the resistance of Earth mammals to genetic drift. That really doesn’t limit how evolution can optimize with different creatures in different places.
It’s not going through them one at a time.
It’s not a simple English description, but narrowing down the possibilities by a factor of two is always one bit of information. It doesn’t matter whether it’s “the first bit is one”, “the xor of all the bits is one” or even “it’s a hash of something starting with a one using X algorithm, which is a bijection”.
It’s the one with a higher inclusive genetic fitness. That’s what evolution optimizes for.
If evolution has n bits of optimization power, that’s equivalent to saying that if you order all possible lifeforms based on how optimal they are, this is going to be in the top 1/2^n of them. (It’s actually somewhat more complicated, since it’s more likely to be higher up and there’s some chance of it being lower, but that’s the basic idea.)
It does vary based on what lifeform you’re looking at, since they all have different mutation rates and different numbers of children, but there’s always a limit to the information, and I’m pretty sure that it’s pretty much always a limit that’s already been hit.
By my calculations, if you had the entire earth’s surface covered by a solid meter-thick layer of bacteria for 4.6 billion years and each bacterium lived for 1 hour, that would be approximately 2^155 bacteria having lived and died.
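A rough check of that figure, with the assumptions spelled out (I’m assuming about one cubic micron per bacterium):

```python
import math

earth_surface_m2 = 5.1e14    # Earth's total surface area
bacteria_per_m3 = 1e18       # assuming ~1 cubic micron per bacterium
layer_thickness_m = 1.0
hours = 4.6e9 * 24 * 365.25  # 4.6 billion years, one generation per hour

total_bacteria = earth_surface_m2 * layer_thickness_m * bacteria_per_m3 * hours
print(math.log2(total_bacteria))  # ~153.9, i.e. on the order of 2^155
```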
You can massively increase genetic information (inasmuch as that actually means much in biology) very quickly with very simple genetic changes. It’s not a case of searching through every possible 1 bit change.
Provided, of course, that your space of possibilities is finite and you know what it is. In the case of evolution you don’t.
I don’t understand what does “all possible lifeforms” mean. Does not compute.
Which limit? The limit of information in the mammalian genome? Or the limit of evolution—whatever exists is the pinnacle and no better (given the same environment) can be achieved?
Something like “humans will have larger skulls and smaller teeth”?
But we know that isn’t true.
Brienne Strohl mentioned a website called Gingko on Facebook which allows you to write documents in the form of nested trees.
I’ve been playing around with it today and found it very useful; being able to write ideas out in a disordered way seems to get around some of my perfectionism issues and stop me procrastinating. The real test is whether I continue to use it in the future; I’ll try to check back in a month or so.
After doing Lumosity exercises for a bunch of days, I find that my speed/concentration scores are below 1000 (1000 is supposed to be average) while memory is at 1460 and problem solving at 1360.
I’m familiar with the discussion around fluid intelligence but what do we know about raising speed? Do we know how to conduct training to improve it?
When did you start? Recently? I may be wrong, but I think average scores are matched to your peers regardless of time spent on the game. So if you just started the exercises, your score is being compared to everyone’s score, even those who have been learning how to play that particular game for a long time.
In case you are interested in the scores: at present I have 241 Lumosity points that I earned over the last month.
I used Lumosity in the past with a different account, probably 2 years ago. I think I might have gotten 500 point back then.
I use the free version. I have other experience with speed tests that also suggests I’m relatively weak in that area.
My favorite item in Yvain’s list of fictional banned drugs.
A response to Aaron Freeman’s “You Want a Physicist to Speak at Your Funeral.”
If I had a physicist speak at my funeral, I would hope that he would talk about a lot more than the conservation of energy. I don’t particularly care about what happens to my energy.
If I am lucky, he will speak about relativity. My family will probably have the mistaken intuition that only things in the present are truly real. Teach them about spacetime. They need to know that time and space are connected—that me being in the past is just like me being far away. The difference is that we will only have one-way communication. Even if they will no longer be able to talk to me, I will still talk to them through memories.
If I am not so lucky, he will speak about quantum mechanics. If I die young, my family will be grieving over the potential future I have lost. Teach them about many worlds. They need to know that our world is constantly splitting—that just before I died, the world split off a different future in which I am still alive. There is another world, just as real as our own, in which I survive. This world will even interact with our own in very tiny ways.
I want a physicist to speak at my funeral. I want everyone to understand that my continued existence is way more verifiable than a religious afterlife and way more substantial than a simple conservation of energy.
Upvoted since it’s a little harsh for ‘us’ to tell someone that something is better suited for open thread and then to downvote it without explanation when it goes there...
Genuinely (if admittedly idly) curious: if this was your only reason for upvoting, do you now feel like you should retract your upvote since the comment would no longer be net-downvoted without it?
What work has been done with the causality/probability of ontological loops? For example, if I have two boxes, one with a million dollars in it, and I’m given the option to open one of them and then go back to change what I did (with various probabilities for choice of box, success of time travel, and so on), is there existing literature telling me how likely I am to walk out with a million dollars?
Obviously the answer will change depending on which version of time travel you use (invariant, universe switching, totally variant, etc.)
A good place to start for this might be Scott Aaronson’s lecture on Time Travel from his “Quantum Computing Since Democritus” course.
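In the Deutsch model that Aaronson’s lecture covers, whatever goes around the loop has to be a fixed point of the stochastic map the loop induces. So one crude way to get a number out of a toy two-box setup is to write your “what I do given what my past self did” rule as a transition matrix and find its stationary distribution. A minimal sketch with placeholder probabilities (the 0.9/0.6 rule is invented, not from any literature):

    import numpy as np

    # States: 0 = opened the million-dollar box, 1 = opened the empty box.
    # T[i][j] = P(this pass ends in state j | the incoming pass was in state i).
    T = np.array([
        [0.9, 0.1],  # past-self got the money: 90% chance I repeat the choice
        [0.6, 0.4],  # past-self missed: 60% chance I switch to the right box
    ])

    # Consistency demands p = pT; power iteration finds that fixed point here.
    p = np.array([0.5, 0.5])
    for _ in range(1000):
        p = p @ T
    print(p)  # ~[0.857, 0.143]: odds of leaving with the million, under this rule

Different metaphysics (universe-switching, fully variant timelines) pick out different consistency conditions, which is exactly why the answer depends on the model.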
Am I running on corrupted hardware or is life really this terrible? I don’t think I can last another decade like this one, let alone whatever cryonically-supplied futures that would await. At this point, I think I would pay not to be frozen.
Ugh.
It sounds like you are depressed. It’s probably worth considering therapy or psychiatric care—these interventions have helped me a lot. Hope things get better for you.
To get help, you don’t necessarily need to find a psychiatrist. Depression is very common, so primary care doctors know how to treat it too. Also, they can rule out physical conditions like hypothyroidism that could make you feel like crap.
Depression can be irrational—chemical imbalances, not enough sunlight/exercise/etc—and can also be totally rational (life actually does suck titanium balls). Psychiatric care can help the former; the latter seems like it should be vulnerable to rationality superpowers, but either that’s incorrect or I’m just not superclever enough to win. It does not help when the two coincide (a sucky life situation causing serious chemical problems).
There’s also the question of whether or not a terrible situation is one that makes psychiatric help readily available (I’d hope online psychiatry could help with this, but I don’t really know).
Trying to reason your way out of mental illness is like trying to pull yourself out of quicksand by yanking on your hair.
Depression screws with your thoughts and perceptions in incredibly profound ways, including your ability to make predictions about the future, and is absolutely a damper on rational thought. That’s true whether it is caused by another mental illness or a traumatic event in your life; it’s just as “chemical” and just as difficult to escape either way. Throwing off depression with strength of reason or willpower is a misunderstanding of how untreated depressed people adapt and occasionally heal, not a prescription.
The human body is built to survive, and the brain is no exception, but a rational person should always try to supplement their natural strength with medicine when their life is on the line. Advising anything else seems irresponsible.
Gwern is the go-to person here, but it is my impression that “standard” anti-depression drugs are neither particularly effective nor free of serious side-effects. And things which are more effective—like ketamine—are very rarely prescribed.
More or less, but it’s a question of levels. SSRIs didn’t do much for me and a lot of other people, plus weight gain sucks (luckily no sexual dysfunction), but they’re not particularly dangerous from what I understand. Stuff like Bupropion is awesome, as long as you don’t mind sobriety and have a low risk for seizures. There are other drugs which modify SSRIs too, but I’ve never had any and they’re supposedly more on the ‘side-effect-y’ side. New stuff like Ketamine is waaay out there, almost on par with electroconvulsive therapy in terms of how likely you are to see it, but IDK what it’s like in terms of safety.
But once the ‘trial-and-error’ portion of dosing is over and you’re on something that works for you, it’s absolutely night and day. I can only speak for myself obviously, but it was a complete perspective switch, like someone flipped a switch in my head to ‘not miserable.’
(Obviously I’m not an expert, just a guy who’s spent some time on the patient end of things. I am really interested to hear Yvain’s answer if he has one.)
Many drugs are probably not what you would call effective, but they’re still worth trying. You’d be surprised how many drugs are not free of serious side effects. Luckily these effects are usually too rare to care about. It’s just that taboo drugs get most of the attention and armchair medicine.
I really wish these kinds of discussions would begin and end with “I think you’re depressed, it’s a medical condition, go see a doctor. insert social support” Don’t screw with a life threatening condition. Not pointing at you specifically.
Well, it’s a bit more complicated than that.
First, diagnosing strangers with psychiatric disorders over the Internet has a long history and, um, let’s say it didn’t always work out well :-D
Second, depression is a spectrum issue—there are clear extremes but also a big muddle in the middle. You have to be careful about medicalizing psychological states; that’s a bad direction to go in.
Agreed. That’s what the “I think” and “doctor” parts are for. Better safe than sorry.
That’s why there are experts whose job is to assess what’s medical and what’s not.
What is bad about medicalization? This could be an interesting topic to explore.
It narrows the range of what’s considered “normal”. It proposes medical solutions to what are not necessarily medical problems. It is, to a large degree, a way of expanding the market for big pharma.
Lots of problems, google it up if you’re interested...
I think your perception of this problem has more to do with stigma associated with medical conditions. If you taboo the associated words, what you’re left with is improving people and what’s wrong with that? Do you oppose transhumanism on the same grounds?
And big pharma, we meet again. What is this singular, evil, money grabbing entity? I’d try to google it but I know I’d meet a violent mess of blogosphere mythology.
In the most narrow definition big pharma means AstraZeneca, Bristol-Myers Squibb, Eli Lilly, GlaxoSmithKline, Merck, Novartis, Pfizer and Sanofi-Aventis.
If you define it a bit more widely it also includes the other members of PhRMA.
Those companies make money through being gatekeepers. That idea gets expressed in the words of Sanofi-Aventis CEO Viehbacher:
In addition to validation, big pharma also invests a lot of money in capturing the political process and in pushing their drugs, through various forms of marketing, on as many people as possible.
As they make money by being gatekeepers, they make it harder for other people to enter the health care market.
The goal of transhumanism isn’t to make people more normal. Various forms of transhumanism increase human diversity.
Upvoted for defining big pharma. OK, let’s say big pharma makes money as a gatekeeper and controls policy. Does this argument lead us to some definitive point where it’s clear which drugs and treatments are good and which aren’t, which should be opposed and which shouldn’t?
Making people normal isn’t my goal either.
What makes you think it’s improving people?
Look at my post again—which words would you like to taboo? I am pretty sure I can rewrite it without them.
It’s neither singular nor evil. However it is a collection of entities which have certain goals (which mostly involve profits) and incentives to pursue these goals.
I’m not saying medicalization = improving people. What I’m saying is that most solutions that are pejoratively called medicalization probably improve people in their own opinion. From your post I would taboo “medical”, “medicalization”, “normal” and “big pharma”. Keep in mind that medicine is optional and patients have different perceptions of what they would call improvement. I think they should have as many options as possible, provided they’re safe.
I was unfair and I agree with this. They also compete with each other and with regulating mechanisms. Therefore I think “big pharma” is a lazy and misleading expression.
Sure.
The trend to consider certain conditions and psychological states “diseases” or “illnesses” (which implies biological causality) is bad because:
It narrows the range of what’s considered acceptable human variation. Consider e.g. a grumpy guy. Would it be good if he were to be diagnosed with the illness of grumpiness (with associated social costs) and prescribed a pill for that?
It assumes biological causality for what are not necessarily problems of human biology (or biochemistry).
There are considerable forces in the business world which would stand to gain huge amounts of money were this to happen. This is not an outright argument against per se, but it does make one suspicious.
Well, among other things, it depends on what the pill does. If it doesn’t make him less grumpy, for example, then I can’t see any benefit at all. If it makes him less grumpy but also does a bunch of other stuff that leaves him worse off, then there’s no net benefit. Etc.
All that aside, if being prescribed a pill that makes me less grumpy inevitably subjects me to social attack, I would say that being able to be less grumpy is a good thing, and the social attacks are bad things, and the net value of being prescribed the pill depends on the ratio of costs to benefits.
And in both cases, I would strongly endorse a social shift that stops attacking me for being prescribed such a pill, rather than blame the pill for the social attacks. There’s all kinds of things I can do that subject me to social attacks; blaming me for doing them on that basis is suboptimal.
Here again, it depends on what the pill does. If I start with a false theory of the causes of the problem, I’m unlikely to come up with a pill that actually solves the problem in an acceptably targeted way; by the same token, if the pill actually does make me less grumpy without too many side-effects, that’s a pretty good sign that there’s a biological (or biochemical) cause for grumpiness.
Incidentally, I would say all the same things about being prescribed a pill that makes me less cancerous.
Actually, I would argue that it’s not good regardless of what the pill does.
That’s ’cause you’re looking at it too narrowly. Is there a benefit for the doctor who sees the guy and who gets paid for it? Sure is. Is there benefit for the company which makes the pill and sells it for a nice profit? Sure is.
What subjects you to social attack is having been diagnosed with a mental illness.
It’s not what you can do—it’s what can be done to you.
Not necessarily. A common description of the effects of SSRI anti-depressants on some people is that they make you feel completely indifferent inside. Sure, you don’t want to kill yourself any more, but you don’t want ANYTHING. I am pretty sure that if you react to anti-depressants this way, they will also make you less grumpy. That doesn’t mean grumpiness has a biochemical cause.
Would you deny treatment from a patient who has different values from yours?
This is a separate issue and applies to all treatments.
“Mental illness” is old and stigmatizing terminology. People who need help shouldn’t be left undiagnosed just because of cultural connotations. Luckily there are confidentiality laws that prevent people having “psycho” tattooed on their forehead.
I take SSRIs regularly. I also happen to be a doctor. The kinds of social costs you’re talking about aren’t there because judgmental people don’t know about my condition. I can tell you yes they make me feel different inside and yes they make me feel less motivated and less emotional. I’m still motivated to do things I would normally do, just a bit less. Depression doesn’t make people motivated to do anything positive either. If those people think that SSRI induced state is not preferable to depression, they can quit medication. It’s their choice.
How bad would you have to feel to consider killing yourself? Just think about it, try to imagine it vividly. I will take my current condition any day compared to the black, hopeless void that’s waiting for me if I don’t. That’s my choice to make, not yours or big pharma’s.
Given that the SSRI messes with their ability to make motivated, self-determined choices, they don’t have full freedom.
There are also a lot of cases where a person might be forcefully hospitalised.
Part of the idea of having a doctor as an expert is also that the doctor will make choices.
I don’t think that anybody here advocates that you shouldn’t have the choice.
That’s a strong claim to make without a shred of evidence.
Are you saying this shouldn’t be done? Do you understand how or why it is done?
I think there’s this concept of “informed consent”. Doctors are expected to make choices, yes, but at no time is the patient rendered powerless in these decisions.
It was a rhetorical device. Don’t twist it to make it something else. Replace “me” with anyone else.
Given that I don’t know where the people I’m talking with live I know that I don’t have full knowledge of how it’s done. If I tell someone to go to a local doctor, I’m not sure what the doctor will do with them.
A doctor in a small town in Utah, in Washington, in Berlin, in Moscow, in some small Russian town, in Mumbai or in some small Indian town.
As to why people get hospitalized, I think there are three whys: 1) Why did doctors get that power historically? 2) Why did nobody politically move to take that power away from them? 3) What reasons do doctors have to make that decision?
As far as 1) goes, it has to do with a general belief in the usefulness of authorities and with removing people who might disturb society out of sight.
As far as 2) goes, Big Pharma wouldn’t want that power to be taken away and lobbies accordingly. There’s political risk involved when you take the power away and then someone commits suicide. Politically it’s much safer to just allow the status quo to perpetuate.
As far as 3) goes, experts want to do everything that’s in their power to deal with a problem and have as much control over it as possible. It’s human nature to seek power and use it.
As to how it’s done, I have a bunch of anecdotal reports from people who reported that it is done pretty badly.
It’s a bit like the Stanford prison experiment, except that in the Stanford prison experiment the guards didn’t have the option to administer sedating substances.
There are plenty of issues involved and I haven’t dealt enough with the issue to propose a specific reform. I guess an increase in accountability through auditing would help against power abuse, but getting something like that right is complicated.
It’s not a strong claim, as you already conceded that they reduce your motivation. If they do, it’s easy to deduce that your choice to move away from the drug will also be affected.
I think choice engineering with a goal of 100% compliance rates has effects even if you allow for “informed consent”. Power isn’t all or nothing.
I didn’t intend to focus on the “you” part. I think access to drugs is a different topic than the usefulness of doctors and the behavior of doctors.
I would like to legally access Modafinil in my country to experiment with whether it helps me. That doesn’t mean that I want to interact with doctors about that topic. I would guess that Lumifer would also be in favor of complete drug legalisation.
That position is also the opposite of the one for which big pharma lobbies.
If you wish me to continue to converse with you, try to keep your replies shorter. Also try not to make a huge number of accusations and weak deductions.
I’m sure there are jurisdictions where people are treated against their will for poor reasons, but one of the best and most benign reasons is to protect temporarily psychotic people who try to kill themselves or others. Did it really not cross your mind that doctors would do this to help other people?
You’re conjuring a huge one-sided conspiracy here in a politically loaded topic, and you’re making serious accusations without evidence. Try to look at it from my side. What the hell am I or anyone else supposed to do with these claims?
You mean all medical drugs altogether should be over the counter? If that’s what you’re saying, I think we’re done here.
Of course not. For one thing, all antibiotics for human use should be legally scheduled as controlled substances, with severe penalties for unapproved use and distribution.
If that was actually feasible, I would agree. I might favor legalizing some recreational drugs, but where exactly would you draw the line between drugs and medicine?
I’ll try.
I did say that doctors want to get as much power as possible to cure someone else. This presumably includes a desire to help people.
The claim that doctors hospitalize people to have more power to deal with the person’s issue isn’t an extraordinary claim. At other times I do make strong claims, but what I wrote in the last post seems relatively trivial as far as the facts I assert. I describe facts in a way that might be politically incorrect, but I don’t see an issue with that.
Do you disagree with my historical assessment? Do you think the mental institutions that existed 200 years ago were places optimized for the well-being of the patients? Do you think that’s an extraordinary claim that requires strong evidence?
Yes, I hold that position. But we don’t have to discuss the War on Drugs in detail. Just rest assured that I don’t want to take away yours or anyone else’s access to drugs. If you get the point that nobody wants to take away your drugs, then I succeeded with the point I wanted to make. In that case I think you would have learned something useful from the discussion.
This is really vague. I can’t imagine any significant number of doctors today wanting to force cancer treatment for example. What does “cure” mean if the patient is miserable? Saying that most doctors would want to force any treatment in general is an extraordinary claim. If you’re making that claim I suggest you consider that your perception of human (or modern western?) morality is unusual.
No to all of those. It doesn’t follow that there was this Big Pharma lobbying for conserving ancient institutions. It seems to me you’re thinking of what you would do if you were Big Pharma instead of looking for evidence of what they actually do.
I don’t think you should put words in Lumifer’s mouth. He can clarify his position himself if he wishes to. He didn’t know about my condition and I don’t think he wanted to take away my drugs, whatever that means.
I haven’t used the word force.
What do I believe the doctors do? I know a doctor who works at a hospital where she has the target of increasing the number of procedures that the hospital can bill by 10% every year.
There are people out there who believe it’s ethical to change organ donation from opt-in to opt-out. There’s broad research about how to get people to do what you want by using tricks like that. I do believe that oncologists frequently use related strategies to increase compliance rates and have more patients undergo treatment.
Afterwards the hospital director reaches his goal of a 10% increase. If you don’t think that doctors have power over changing patients’ choices, the 10% goal wouldn’t make sense.
I want to point out that none of the above implies that the doctor doesn’t believe that the cancer treatment is good for the patient.
Given that he’s politically a libertarian who values personal autonomy, I have high confidence that he doesn’t support the war on drugs. For a libertarian, a “don’t allow for human diversity, medicate it away” position just doesn’t mesh with advocating banning drugs from people who want to have them.
I do consider it useful to think of positions in a way where you are allowed to analyse them and make claims about them.
And you think they’re wrong? Why?
The post I wrote is not about making right/wrong judgements. I wanted to say that using techniques like that is a form of using power to change the “choices” another person makes.
I don’t think that doctors actually use force to get a patient to agree to a cancer operation, but I do think that a fair portion use choice-engineering techniques to get the patient to do what they consider to be good for the patient.
The point of this conversation is lost on me. I’m out.
Peace.
(FWIW hyporational’s Less Wrong user page says that he’s in Finland.)
Do you believe that about cancer as well? Or merely about grumpiness?
I would similarly strongly endorse a social shift that stops attacking me for being diagnosed, rather than blame the diagnosis for the social attacks.
If I were subject to social attacks for being diagnosed with cancer, would you recommend we stop issuing cancer diagnoses, or recommend we stop attacking people diagnosed with cancer? I would recommend the latter. I feel the same way about mental illnesses.
I can’t tell whether you consider that an example of “without too many side-effects,” (in which case I simply disagree with you; flattened affect is an unacceptable side-effect here) or whether you somehow missed that phrase when you quoted it (which seems unlikely barring seriously motivated cognition).
Regardless, I agree that if the pill makes me less grumpy with lots of side-effects (for example, completely flattening affect as you describe), that’s far less clear evidence than in the case I actually described which you quote.
I would similarly
You seem to think that having cancer and being grumpy are very similar conditions (in the context of discussing medicalization). To me that doesn’t look like a serious assertion to engage with.
And evidently you don’t consider anything else I said worth engaging with even to the extent of dismissing it as not worth engaging with, either. So, shall we drop this here?
This is why I thought that tabooing “associated words” would be a good thing. Many treatable conditions in medicine are not considered “diseases” or “illnesses” anymore, and they shouldn’t be. This especially applies to psychiatry. Many diagnoses cannot be made unless the condition “causes significant harm to the patient” is met.
I would go even further than most doctors, and say that it’s a failure of medicine to only try to normalize harmful conditions instead of trying to improve upon what’s considered normal. This means that the language of medicine has to change even further. A diagnosis certainly shouldn’t automatically be a “disease”, or even a “disorder”.
“Acceptable human variation” and “illness of grumpiness” is again a way of saying there should be stigma attached to a diagnosis. I don’t think there should be, and this is a separate problem from whether certain conditions should be considered treatable. If the guy thinks his grumpiness is a problem and causes significant harm, and it can be treated without side effects that are unacceptable to him, and he would knowingly accept the social costs, then I think he should have the option of treatment available. A diagnosis isn’t an illness; it’s a label that doctors use to communicate with each other.
Also keep in mind that therapy is a medical intervention too, and is usually better for specific behavioral problems.
Like TheOtherDave said, the pill is unlikely to work, unless there is a biological mechanism involved.
In this case one should be suspicious of all treatments and not just psychiatric ones, and perhaps one should. I think this is a separate problem from whether certain conditions should be considered treatable. There’s certainly a need for a system that has less perverse incentives.
Well, I think you and I are approaching this thing from opposite directions. You’re an optimist and I’m a cynic. Here’s what I think you are imagining:
Grumpy Guy: Doctor, I’m grumpy. I don’t want to be grumpy, it seriously screws up my life. Can anything be done to make me less grumpy?
Doctor: Hmm… Well, there’s that pill. Try it, see if it helps you.
And here’s what I’m imagining:
Grumpy Guy: I’m here for my annual check-up.
Doctor: Hmm, you look grumpy. That’s not good. Tell me, does grumpiness lead to impairment of your social life?
Grumpy Guy: Um, I don’t know. I guess…
Doctor: Aha! I hereby diagnose you with grumpiness. Here are two pills, come visit me in a month, we’ll adjust the dosage and the interaction of the two pills. You will have to take them for the rest of your life. See you in a month!
I suppose my situation demands optimism. However, I think both scenarios you’re describing do happen, and I have no idea in what proportions. There’s also a whole spectrum of behaviour between them.
I don’t know how to prevent the scenario you’re seeing, some patients are really passive. Any ideas?
Some of my optimism must stem from the fact that I’m Finnish. Mostly public health care, minimal advertising, and bans on most kinds of bribery help. Unfortunately the situation is quickly eroding due to privatization. (I’m not against privatization per se, just the way it’s usually done.)
The passivity of a patient is not a good reason for the doctor to decide that he now has to run the patient’s life.
More things help like cultural and religious uniformity (yes, I know about the Swedish minority, no, they’re not different enough to matter) which leads to the high level of trust in the society, specifics of culture (protestant work ethic, strong cultural disapproval of cheating), etc.
In the best of all possible worlds I agree. What if the patient has a physical condition that lowers his quality of life or shortens it? How does a passive patient get help? Should all persuasion be banned? What’s your solution to the situation?
All good points. Are you Finnish? :)
By the way, I’m not downvoting you.
In theory, you offer help (which is different from force it) which the patient might accept or reject.
In practice, it depends, as usual.
No, but I have some familiarity with the Baltic Sea region.
There are more or less persuasive ways to offer help. In many situations, I would try to be very persuasive. That’s not forcing it, is it? I don’t think there’s any danger that treatment of grumpiness would be forced in the strictest meaning of the word.
Well, as I said, it depends. Someone with a clinical depression requires a different approach from someone who, say, doesn’t want to take statins even though he has high LDL.
The real underlying issue is that of power. Doctors that I know tend to have the unfortunate tendency to develop a God complex—they get used to simple, uninformed people coming to them asking for help and the doctors literally having life-and-death power over these people. But that’s a different discussion.
Patient intelligence and education is a huge factor. Many people don’t understand probabilities at all and can’t differentiate between real evidence and rhetorical tricks.
I have conflicting feelings about how I should handle patients I can’t adequately inform because of their limitations. What’s the point of trying to explain real evidence if delivery is all that matters in their decision process? It’s not like there’s any real exchange of information in those situations.
Yes, I understand the problem. I don’t know if there’s a good solution. Saying “it depends” is a cop-out, but on the other hand there is no global optimum and all you have is different trade-offs. And picking among different trade-offs—well, it depends and we’re back to square one.
To whomever downvoted every comment by Lumifer: I had to break my policy and upvote him for no good reason. I don’t believe in punishing conversation I want to be part of, especially if it’s about a topic where bias is to be expected. I also think that reasons for systematic downvoting should be stated if one expects it to have any positive effect.
Thanks but I don’t really care that much about karma. I have enough so that the website doesn’t limit what I can do and otherwise it’s a number. Occasionally someone will just go through a bunch of my posts and click-click-click-click on all of them. Oh well, maybe he just likes to click on things… :-)
If you’re interested in anti-depressants, you should talk to Yvain, what with him being a head-doctor and all.
I would recommend investigating the safety and efficacy of selegiline. Seems somewhat effective, safe, and available (albeit from overseas for US users). Do your own homework though.
Not exactly—while depression can be caused by major life suckage, depression is not a rational response to major life suckage.
That depends. “Too depressed to do anything” is a pretty effective way out of certain unpleasant situations.
Specific example: Being in grad school caused my life to suck titanium balls, which (presumably combined with a pre-existing brain vulnerability) led me to the point where I was too depressed to do any work. Which meant I had to drop out. Which was the only way I could ever have left, as my moral system at the time did not permit giving up an endeavor simply because it was making me miserable. And, surprise, surprise, as soon as I got on the plane out of there it was like color came back into the world and life was worth living again.
It’s a mistake to assign truth values to emotions. They can’t be correct or incorrect, they can be only helpful or unhelpful. And I don’t think depression is ever helpful, barring convoluted thought experiments.
How are these two at all mutually exclusive?
As someone who’s been in that boat, get in touch with a psychiatrist ASAP. It can very literally save your life, not to mention making it much much better on a day-to-day level.
Life is terrible, but it’s also strange and beautiful; if you can’t see a reason to continue with it, there is most likely an underlying problem (even if it is just “faulty wiring”) which drugs and therapy can help you identify.
I cannot recommend seeing a psychiatrist highly enough.
An external view of your life and health, from a trusted professional, may help you identify causes of your discomfort and, most importantly, strategies to improve your life.
Some lives are and others aren’t. Without knowing anything about you I can’t tell, but given that you can write in English and access the Web, I’d guess yours probably isn’t and join the other people in suggesting that you see a professional.
I’m not sure how it works in your country, but you don’t necessarily need a psychiatrist to diagnose and treat depression. Also it’s good to check for bodily conditions that could make you feel like crap, and a non-psychiatrist might do that more reliably.
Some HPMOR speculation Spoilers up to current chapter. After writing this, I checked the last LessWrong thread on HPMOR, and at least one component of this has already been noticed by other people, but others have not been, I think.
I was disappointed in the last chapter, gung nqhygf jbhyq frg nfvqr gurve pbaivpgvbaf naq cerwhqvprf naq yrg puvyqera cynl n fvtavsvpnag ebyr runs contrary to common sense and to the rest of the book.
Yeah, cause that never happens in canon.
I think wizard culture has some different ideas from your culture.
Sorry, I was used to your fic’s higher standards of believability of human behavior than canon’s.
I must be missing something, because even Harry had trouble being taken seriously by most adults for most of the story, and no other (first-year) children were anywhere near his level. Yet suddenly so many of them seem to be taken seriously by their relatives and by all the most powerful wizards. And they didn’t even have to save the Earth from the Formics.
It’s still the culture that throws kids on a Hippogryff and tells them to get going.
And as Daphne notes in her thoughts, the children are standing in for their parents and speaking their parents’ orders; they are acting as spokespersons for their families, and the others are treating them as such.
*Hippogriff
Which part would you never do if you (as board member) were righteously angry at Dumbledore?
I’d never let a child do the public announcement of my decision.
Why not, if they could do it? This seems a foolish rejection of a class of tools. See Malala Yousafzai.
I suspect that had more to do with Harry’s involvement than anything else. “gung [crbcyr ehaavat guvatf] jbhyq frg nfvqr gurve pbaivpgvbaf naq cerwhqvprf naq yrg puvyqera cynl n fvtavsvpnag ebyr” vf n ybg zber cynhfvoyr jura bar bs gurz vf n puvyq.
We’re a day out—this should be Oct 21-27. Next one: Oct 28-Oct 35. (cough)
When I posted, it was still the 20th in my timezone, so that’s what I went with.