Open Thread May 23 - May 29, 2016
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Misunderstandings and ignorance of GCTA seem to be quite pervasive, so I’ve tried to write a Wikipedia article on it: https://en.wikipedia.org/wiki/GCTA
Thanks for doing the frustrating work.
(The first and only comment so far is, more or less, “delete this article, because I don’t care”. Ugh.)
Yeah, that was weird. Almost as soon as I posted it, too. And the IP has only made 1 edit before, so it’s not some auto-troll.
Thank you.
One thing I wished for and didn't find, though, is a description of the underlying mechanics. You described the what and the why, but not the how. Do you think that can be usefully expressed in a couple of paragraphs, or is it too complicated for that? The article already assumes a fair amount of background knowledge.
I'm not sure it can. I've read many different descriptions and looked at the math, but it's a very different approach from the twin-based variance-components estimation procedures I've managed to beat some understanding of into my head, and while I've worked with multilevel models & random effects in other contexts, the verbal descriptions of using multilevel models for estimating heritability just don't make sense to me. (Judging from Visscher's commentary paper, I may not be the only one having this problem.) I think my understanding of linear models and matrices may be too weak for it to click for me.
One problem I can see at first glance is that the article doesn't read like a Wikipedia article, but like a textbook or part of a publication. The goal of a Wikipedia article should be for a wide audience to understand the basics of something, not a treatise only experts can comprehend.
What you wrote seems to be an impressive piece of work, but it should be simplified (or at least its introduction should be), so that even non-experts have a chance to at least learn what it is about.
I don’t think this is true. Wikipedia is a collection of knowledge, not a set of introductory articles.
See e.g. the Wikipedia pages on intermediate-to-advanced statistical concepts and techniques (e.g. copulas).
Good god, how long did that take to write?
1 full day. And I guess a few hours today checking edits other people made, tweaking parts of the article, responding to comments, etc. Plus, of course, all the background work that went into being able to write it in the first place… ('How long, Mr Whistler?') For example, I spent easily a week researching intelligence GCTAs and measurement error for my embryo selection cost-benefit analysis, which I could mostly copy-paste into that article. (I wanted an accurate GCTA estimate to put an upper bound on how much variance SNPs could ever explain and thus how much gain was possible per embryo. This required meta-analyzing GCTA estimates to get a stable point estimate and then correcting for measurement error, because a lot of the estimates use imperfect measurements of intelligence.)
EDIT: and of course, after saying that, I then spent what must have been several other days working on digging up even more citations, improving related articles, and debating heritability and other stuff on Reddit...
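(For concreteness, the measurement-error correction mentioned above is essentially the standard correction for attenuation: divide the heritability estimated on a noisy phenotype by the reliability of the measure. A minimal sketch with made-up numbers, not the ones from the actual meta-analysis:)

```python
# Correction for attenuation: h2 estimated on an imperfectly measured phenotype
# is deflated by the measure's reliability, so divide to recover a corrected value.
# Both numbers below are illustrative only.
h2_observed = 0.25      # pooled GCTA estimate on the measured (noisy) phenotype
reliability = 0.65      # test-retest reliability of the intelligence measure
h2_corrected = h2_observed / reliability
print(round(h2_corrected, 2))   # ~0.38
```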
How does this reject the genetic factors causing circumcision in Jews?
What?
It is my understanding that due to ethical concerns, the scientific field of psychology does not have a data collection methodology capable of distinguishing between effects caused by the parents’ genes and effects caused by the parents’ actions, and as such, no possible statistical approach will give a correct answer on the heritability of traits caused by the latter, like schizophrenia a.k.a. religion or intelligence. In order to clear up my “misunderstandings and ignorance”, you will need to demonstrate an approach that can, at the very least, successfully disprove genetic contribution in circumcision.
I think you need to read up a little more on behavioral genetics. To point out the obvious, besides adoption studies (you might benefit from learning to use Google Scholar) and more recent variants like using sperm donors (a design I just learned about yesterday), your classic twin study design and most any 'within-family' design does control for parental actions, because the children have the same parents. E.g. if a trait is solely due to parental actions, then monozygotic twins should have exactly the same concordance as dizygotic twins despite their very different genetic overlaps, because they're born at the same time to the same parents and raised the same.
More importantly, the point of GCTA is that by using unrelated strangers, they are also affected by unrelated parents and unrelated environments. So I’m not sure what objection you seem to have in mind.
Sorry if I’m misunderstanding the method, but doesn’t it work something like finding strangers who have common genetics by chance?
If so, then two Jews are more likely to have common genetics than chance would predict, and also more likely to be circumcised. So it would appear that circumcision is genetic, when in fact it's cultural.
It works by finding common genetics up to a limit of relatedness, something like the fourth-cousin level. I think some Jewish groups may be sufficiently inbred/endogamous for long enough that it might not be possible to run GCTA with the usual cutoff, since they'll all be too related to each other. Population structure beyond that is dealt with by the usual approach of computing the top 10 or 20 principal components and including them as covariates to control for it. This is a bit ad hoc but does work well in GWASes and gets rid of that problem, as indicated by the fact that the hits replicate within-family, where population structure is equalized by design, and also have a good track record cross-racially/cross-country too: https://www.reddit.com/r/science/comments/4kf881/largestever_genetics_study_shows_that_genetic/d3el0p2
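(To make the procedure sketched above concrete, here is a minimal toy version, not GCTA's actual implementation, of the two steps described: build a genetic relatedness matrix from standardized SNPs, drop one member of any pair related above a cutoff, and take the top principal components to use as covariates. Names and thresholds are illustrative.)

```python
# Toy sketch of the GRM + relatedness-cutoff + principal-components steps.
# genotypes: (n_individuals, n_snps) array of 0/1/2 minor-allele counts,
# assumed to contain no monomorphic SNPs.
import numpy as np

def grm_and_pcs(genotypes, cutoff=0.025, n_pcs=20):
    # Standardize each SNP, then average products over SNPs to get the
    # genetic relatedness matrix (GRM).
    Z = (genotypes - genotypes.mean(axis=0)) / genotypes.std(axis=0)
    A = Z @ Z.T / Z.shape[1]

    # Greedily drop one member of each pair more related than the cutoff,
    # so only (nearly) unrelated strangers remain.
    keep = list(range(A.shape[0]))
    for i in range(A.shape[0]):
        for j in range(i + 1, A.shape[0]):
            if i in keep and j in keep and A[i, j] > cutoff:
                keep.remove(j)
    A = A[np.ix_(keep, keep)]

    # Top eigenvectors of the GRM serve as the principal components included
    # as covariates to soak up residual population structure.
    _, vecs = np.linalg.eigh(A)
    pcs = vecs[:, ::-1][:, :n_pcs]
    return A, pcs, keep
```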
Your understanding looks silly. It is rather obvious that not all children are brought up by their parents and that has been used in a number of studies. In fact, many classic identical-twins studies rely on being able to find genetically identical people who were brought up in different circumstances (including different parents).
Yes, it’s obvious. That’s why it was surprising when I couldn’t find a single study on schizophrenia where all children were separated from the parents immediately after birth. Feel free to enlighten me.
Bzzzzz, I am sorry, you must have confused me with your research assistant. Please try again.
I used ingres’s excellent LW 2016 survey data set to do some analyses on the extended LW community’s interest in cryonics. Fair warning, the stats are pretty basic and descriptive. Here it is: http://www.brainpreservation.org/interest-in-cryonics-from-the-less-wrong-2016-survey/
I am a little bothered by the scale you used—on a scale from 0-5 where:
0: no, and don't want to sign up; 1: no, still considering it; 2: no, would like to but can't afford it; etc., towards more interest in cryonics.
If we take an ordinary human who has barely even heard that cryonics is a real thing, the entry point to the scale is somewhere between 0 and 1 on the 6-point scale. So while we have detailed data on states above 1, we don't have detailed data on states below 1, which means we potentially only recorded half the story, and with that we have unrepresentative data that skews positively towards cryonics.
Upvoted because this is a good critique. My rationale for using this scale is that I was less interested in absolute interest in cryonics and more in relative interest in cryonics between groups. The data and my code are publicly available, so if you are bothered by it, then you should do your own analysis.
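(For anyone who does want to poke at it, the between-group comparison is only a few lines; the file and column names below are hypothetical, not necessarily what the released data set uses.)

```python
# Minimal sketch of a between-group comparison of cryonics interest.
# "lw_2016_survey.csv", "Diet" and "CryonicsStatus" are placeholder names.
import pandas as pd

df = pd.read_csv("lw_2016_survey.csv")
# Treat the 0-5 answer as an ordinal interest score and compare group means.
by_group = df.groupby("Diet")["CryonicsStatus"].agg(["mean", "count"])
print(by_group.sort_values("mean", ascending=False))
```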
I am bothered by it to the extent that it was confusing, because it was not automatically representative of the "absolute interest in cryonics" as you called it, but with what I pointed out in mind it is possible to still take the data as good information. (So not bothered enough to do my own analysis.)
Interesting that Lesswrongers are 50,000 times more likely to sign up for cryonics than the general population. I had previously heard criticism of Lesswrong, that if we really believe in cryonics, it’s irrational that so few are signed up.
Also surprising that vegetarianism correlates with cryonics interest.
It’s a standard no-win situation: if too few have signed up, LW people are irrational; and if many have signed up, LW is a cult.
That’s no-win given that ideas generally held on LW imply that we should sign up for cryonics.
There’s nothing necessarily unfair about that. Suppose some group’s professed beliefs imply that the sun goes around the earth; then you may say that members of the group are inconsistent if they aren’t geocentrists, and crazy if they are. No win, indeed, but the problem is that their group’s professed beliefs imply something crazy.
In this case, I don’t think it’s clear there is such a thing as LW’s professed beliefs, it’s not clear that if there are they imply that we should sign up for cryonics, and I don’t think signing up for cryonics is particularly crazy. So I’m not exactly endorsing the no-win side of this. But it looks like you’re making a complaint about the logical structure of the criticism that would invalidate some perfectly reasonable criticisms of (other?) groups and their members.
Nope. I’m making a guess that this particular argument looked like a good soldier and so was sent into battle; a mirror-image argument would also look like a good soldier and would also be sent into the same battle. Logical structure is an irrelevant detail X-/
Right, but what about the people who say they strongly believe in cryonics, have income high enough to afford it (and the insurance isn’t that expensive actually), yet haven’t signed up? I.e. “cryocrastinators”. There are a lot of those on the survey results every year.
I believe this was the argument used: that Lesswrongers aren't very instrumentally rational, or good at actually getting things done. Again, I can't find the post in question; it's possible it was deleted.
I bet many LessWrongers are just not interested in signing up. That’s not irrational, or rational, it’s just a matter of preferences.
What does "we really believe" mean? That seems like something we categorically don't do. (1) We don't hold group beliefs; individuals have different beliefs.
(2) We think in terms of probabilities that differ from person to person.
It seems criticism like that comes from people who don't understand that we aren't a religion that specifies what everybody has to believe.
If the people who believe that cryonics will work with probability >0.3 are signed up (where it's available) while the people who think it will only work with probability ~0.1 are not signed up, I don't see any sign of irrationality.
Has anybody looked at the data set to check if that’s indeed the case?
The linked post contains graphs.
I was just summarizing something I remember reading. I searched for every keyword I can think of but I can’t find it.
But I swear there was a post highly critical of lesswrong, and one of its arguments was exactly that: if such a high percentage of lesswrongers believe in cryonics, why are so few signed up? It was an argument that lesswrong is ineffective.
It was just interesting to me to see the most recent statistics: a lot of people are signed up, and the rate is certainly much higher than in the general population.
It would be an argument that lesswrongers are not perfect. Also “lesswrongers” includes people who merely read the website once in a while.
I am completely unsurprised by the fact that merely reading LW articles doesn't make people perfect.
I would be more bothered by finding out that "lesswrongers" are less rational than the average population, or than some large enough control group that I could easily join instead of LW. But the numbers about cryonics do not show that.
I think that if LWers are 50,000 times more likely to do something than the general population, that proves neither rationality nor irrationality. It just shows that LWers are chosen by an extremely selective process.
It is hilarious and yet quite predictable that one of the only groups nearly as unenthused about cryonics as ‘committed theists’ was ‘biologists’.
A good post in a generally good blog. Samples:
...
...
Reminds me of an SSC post on safe spaces:
I also enjoy this ending (EDIT: of the article linked by Lumifer, not the SSC one):
It reminds me of my pet theory on some similarities between high-IQ people and autists; specifically, having to develop a “theory of mind unlike my own” during childhood. (But the two of us probably had a long disagreement about this in the past, if I remember correctly, so I don’t want to repeat it here.)
Just to forestall confusion, that ending is not the ending of the SSC post, but the (near-)ending of the post Lumifer linked to. (In particular, Scott is not calling himself autistic.)
Thanks; edited the comment to make it clear.
Any advice on what is the best way to buy index funds and/or individual stocks? Particularly for people in the UK?
I know this has probably been asked before on a ‘basic knowledge’ thread, but I can’t find the answer.
There’s this document written by /u/sixes_and_sevens. I used it to set up mine (I’m the one who anti-recommended M&G). I might be able to answer any further questions, but it was a while ago so maybe not.
Thanks! (and thanks to sixes_and_sevens)
Open an account at a discount broker? Comparing fees is quite straightforward and other than that you only really care about the convenience of their user interface / experience.
Following the usual monthly linkfest on SSC, I stumbled upon an interesting paper by Scott Aaronson.
Basically, he and Adam Yedidia built a Turing machine whose behaviour cannot be settled from ZFC: ZFC can prove neither that it halts nor that it runs forever (it does run forever, assuming a stronger theory).
It is already known, from Chaitin's incompleteness theorem, that every formal system has a complexity limit beyond which it cannot prove or disprove certain assertions. The interesting, perhaps surprising, part of the result is that said Turing machine has 'only' 7918 states, that is, a state register less than two bytes wide.
This small complexity is already sufficient to evade the grasp of ZFC.
You can easily slogan-ize this result by saying that the value of BB(7918) (the 7918th Busy Beaver number) is unknowable (whispering immediately after "… by ZFC").
This is an upper bound. There could be many smaller indeterminate machines. Many suspect that even very simple TMs can be indeterminate, e.g. Collatz-like machines.
Huh. I expected the smallest number of states of a TM of indeterminate halting to be, like, about 30. Consider how quickly BB diverges, after all.
I agree with most of what you said, but
this remark sounds weird. What is the meaning of the bit-size of the list of states? Are you suggesting to run the TM on a 16-bit computer? Then, good luck addressing the memory (the tape), because I guess the length of the tape used is probably also “uncomputable”, or at least so large that even the pointers to the tape would not fit into any realistic computer’s memory.
It was merely a remark to note that 7918 states fit in a state register that is less than two bytes wide. And since said TM has only two symbols, it will also need no more than 15,836 instructions.
Notice how compact the machine is: 13 bits for the state register, 29 bits for each instruction, 15,836 of said instructions = 459,257 bits, less than half a megabit (about 56 KB). You could emulate that on basically anything that has a chip nowadays.
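(A quick back-of-the-envelope check of those numbers; the 29-bit instruction here is my reading, counting current state, read symbol, next state, written symbol, and move direction.)

```python
# Sanity check of the size estimate: 13-bit state register, 29-bit instructions,
# one instruction per (state, symbol) pair, plus the register itself.
import math

states, symbols = 7918, 2
state_bits = math.ceil(math.log2(states))                 # 13
instr_bits = state_bits + 1 + state_bits + 1 + 1          # 29
total_bits = state_bits + states * symbols * instr_bits
print(state_bits, instr_bits, total_bits)                  # 13 29 459257
```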
Alas, the tape size is infinite, as with every TM… But! Turing machines do not need memory pointers: they observe only the symbol under the read head.
Sure, but any system that emulates the TM and the tape would need it. (In other words, it feels like cheating to say that memory is a part of the usual computer, but tape is not a part of the TM.)
I still don't see where the difficulty is. You need a memory pointer only if you need random access to that memory, but the TM does not need it.
Sure, if you want to emulate a TM on a system that already uses random-access memory, like most modern systems do, then of course you need a sufficiently long pointer for a sufficiently large memory. But that is an accident of how systems work today, not an inherent complexity: you could easily emulate the TM on an old mainframe with a magnetic tape without ever seeing a memory pointer.
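(To make the point concrete, here is a minimal two-symbol TM emulator of my own, not Yedidia & Aaronson's machine: the tape is a dictionary keyed by position, the emulator only ever touches the cell under the head and moves one step at a time, and no fixed-width pointer has to span the whole tape.)

```python
# Minimal two-symbol Turing machine emulator; the toy program is made up.
from collections import defaultdict

def run(program, start_state, halt_state, max_steps=10**6):
    """program: {(state, symbol): (write, move, next_state)}, move is -1 or +1."""
    tape = defaultdict(int)          # blank tape, extended lazily as the head moves
    head, state = 0, start_state
    for step in range(max_steps):
        if state == halt_state:
            return step, tape
        write, move, state = program[(state, tape[head])]
        tape[head] = write
        head += move
    return None, tape                # gave up: didn't halt within max_steps

# Toy program: write three 1s and halt.
prog = {
    ("A", 0): (1, +1, "B"),
    ("B", 0): (1, +1, "C"),
    ("C", 0): (1, +1, "HALT"),
}
print(run(prog, "A", "HALT")[0])     # -> 3
```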
Reminiscing over one of my favourite passages from Anathem, I've been enjoying looking through visual, wordless proofs of late. The low-hanging fruit is mostly classical geometry, but a few examples of logical proofs have popped up as well.
This got me wondering if it’s possible to communicate the fundamental idea of Bayes’ Theorem in an entirely visual format, without written language or symbols needing translation. I’d welcome thoughts from anyone else on this.
Challenge accepted.
https://i.imgsafe.org/914f428.png
If I am reading this correctly:
I saw some footprints; I know that there are 1/3 humans around and 2/3 cats around. There is a 3/4 likelihood that a human made the human-shaped footprint; there is a 1/4 chance that a cat in boots made it. Therefore my belief is that a human is more likely to have made the footprint than a cat.
(I think it needs a little work, but it’s an excellent diagram so far)
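(For what it's worth, the arithmetic that reading encodes works out cleanly:)

```python
# Posterior from the footprint diagram: prior 1/3 human vs 2/3 cat,
# likelihood 3/4 vs 1/4 of producing a human-shaped footprint.
p_human, p_cat = 1/3, 2/3
lik_human, lik_cat = 3/4, 1/4

posterior_human = (p_human * lik_human) / (p_human * lik_human + p_cat * lik_cat)
print(posterior_human)   # 0.6, i.e. 3:2 odds in favour of a human
```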
A suggestion: modify the number of creatures on the left to equal a count of the frequency of the priors? And the number on the right to account for frequency of belief.
Yup.
I don’t buy “frequency of belief”. Maybe instead, I’d put those in thought bubbles, and change scaling of the bubbles.
Can you also add a watermark so that you get credit if I repost the image? Edit: whoops, there is a watermark, I just didn't see it. I was thinking more specifically: "I live with 1 human and 2 cats, therefore my priors about who could have made these footprints are represented by one human and two cats." Not exactly frequency of belief but a "belief about frequency"?
Edit: also can it be a square not a rectangle? Is there a reason it was a rectangle to begin with? Something about strength of evidence maybe?
One last edit: Can you make the “cat in boots” less likely? How many cats in boots do other people have in normal priors??
It’s not supposed to be realistic—real frequency of cats in boots is way too low for that. But I adjusted it a little for you: https://i.imgsafe.org/5876a8e.png
Edit: and about the shape, it matters not, as long as you think in odds ratios.
I like this version much better. Yes the shape does not matter; it does help me think about it though. I think this is generally an excellent visual representation. Well done!
This looks great and I can see that it should work, but I can’t seem to find a formal proof. Can you explain a bit?
http://lesswrong.com/lw/nhi/geometric_bayesian_update/
Whoah. That gets many points. What an excellent layout! We need to know what boots are for it to translate, but that’s a lot closer to an ideal solution than I’ve worked through.
Edit—I thought the diagram looked familiar!
Was considering something like a t-shirt of p(smoke|fire) and p(fire|smoke). Never came to fruition; feel free to take the idea if you like.
Bayes is mostly about conditioning, and so I think you can draw a Venn Diagram that makes it fairly clear.
Thanks! I’ve been playing around with it for a week or so but can’t elegantly find a way to do it that meets my arbitrary standards of elegance and cool design :-)
Becomes easier when using non-circular shapes for Venn-ing, but my efforts look a little hacky.
I prefer a diagram like this with just overlapping circles. And you can kind of see how the portion of the hypothesis circle that lies within the evidence circle represents its probability.
Arbital also has some nice visualizations: https://arbital.com/p/bayes_rule_waterfall/?l=1x1 https://arbital.com/p/bayes_rule_proportional/ https://arbital.com/p/bayes_log_odds/ and https://arbital.com/p/bayes_rule_proof/?l=1yd
Fivethirtyeight also made a neat graphic: https://espnfivethirtyeight.files.wordpress.com/2016/05/hobson-theranos-1-rk.png?w=1024&h=767
The issue with Bayes theorem isn’t the derivation or proof. Nobody seriously debates the validity of the theorem as a mathematical statement. The debate, or conceptual roadblock, or whatever you want to call it, is whether researchers should apply the theorem as the fundamental approach to statistical inference.
What was the result of the IARPA prediction contest (2010-2015)?
Below I present what seem to me very basic questions about the results. I have read vague statements about the results that sound like people are willing to answer these questions, but the details seem oddly elusive. Is there some write-up I am missing?
How many teams were there? 5 academic teams? What were their names, schools, or PIs? What was the “control group”? Were there two, an official control group and another group consisting of intelligence analysts with access to classified information?
Added: perhaps a third control group “a prediction market operating within the IARPA tournament.”
What were the Brier scores of the various teams in various years?
When Tetlock says that A did 10% better than B, does he mean that the Brier score of A was 90% of the Brier score of B?
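(For reference, the Brier score is just the mean squared error of the probability forecasts against the 0/1 outcomes, so that reading, A's score being 90% of B's, seems the natural one; a minimal sketch with made-up forecasts:)

```python
# Brier score: mean squared error of probability forecasts (lower is better).
# The forecasts and outcomes below are made up.
def brier(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

a = brier([0.9, 0.2, 0.7], [1, 0, 1])    # ~0.047
b = brier([0.7, 0.4, 0.6], [1, 0, 1])    # ~0.137
print(1 - a / b)                         # ~0.66: A is "66% better" on this reading
```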
I can identify 4 schools involved, composing 3-4 teams:
GJP (Berkeley: Tetlock, Mellers, Moore)
DAGGRE/SciCast (GMU: Twardy, Hanson)
Michigan, MIT − 2 teams or a joint team?
In Superforecasting, Tetlock writes that the main documents comparing the GJP forecasters against the intelligence analysts with access to classified information are themselves classified. Tetlock doesn't say anything directly about that comparison, but reports in his book that a newspaper article says the GJP forecasters were 30% better (if I remember right).
Here is the leak. It says that the superforecasters averaged 30% better than the classified analysts. Presumably that’s the 2012-2013 season only and we won’t hear about other years.
What is weird is that other sources talk about “the control group” and for years I thought that this was the control group. But Tetlock implies that he doesn’t have access to the comparison with the classified group, but that he does have access to the comparison with the control group. In particular, he mentions that IARPA set a 4th year target of beating the control group by 50% and I think he says that he achieved that the first or second year. So that isn’t the classified comparison. I guess it is possible to reconcile the two comparisons by positing that the superforecasters were 30% better, but that GJP, after extremizing, was more than 50% better. But I think that there were two groups.
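(For the curious, "extremizing" refers to the trick of pushing the pooled probability away from 0.5 before scoring; one common form of the transform is sketched below, with the exponent a as a tuning parameter rather than GJP's actual fitted value.)

```python
# One common form of the extremizing transform applied to an aggregated forecast p.
def extremize(p, a=2.0):
    return p ** a / (p ** a + (1 - p) ** a)

print(extremize(0.7))   # ~0.84: a pooled 70% becomes a more confident ~84%
```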
I’m not sure that X% better has a unit that’s always the same.
I don’t think that’s the case. It’s rather that it’s classified information that he can’t reveal directly because it’s classified.
That’s what I thought when I saw the passage quoted from the book (p95), but then I got the book and looked at the endnote (p301) and Tetlock says:
which must be illegal if he has seen the comparisons.
He likely worked with a censor about how and what he can write. I think that line can be very well explained as the result of a compromise with the censor.
The GRIM test — a method for evaluating published research
Testing the mean...
https://medium.com/@jamesheathers/the-grim-test-a-method-for-evaluating-published-research-9a4e5f05e870#.r9izfnrxp
(epistemic status: Ruminations on cognitive processes by non-expert.)
I have a question, tangential to AI safety, about goal formation. How do goals form in systems that do not explicitly have goals to begin with?
I tried to google this and didn't find answers either for AI systems or for neuropsychology. One source (Rehabilitation Goal Setting: Theory, Practice and Evidence) summarised:
Apparently many AI safety problems revolve around the wrong goals or the extreme satisfaction of goals. The usually implied or explicit definition of a goal seems to be the minimum difference to a target state (which might be infinity for some valuation functions). Many AI models include some notion of the goal in some coded or explicitly given form. In general that coding isn't the 'real' goal. By real goal I mean that which the AI system in total appears to optimize for as a whole. And that may differ from the specification due to the structure of the available input and output channels and the strength of the optimization process. Nonetheless there is some goal, and there is a conceptual relation between the coded and the real goal.
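(A tiny sketch of the "coded goal vs. real goal" distinction: the coded objective below is a distance to a target state, but what the agent actually optimizes also depends on which actions its output channel offers. Everything here is made up for illustration.)

```python
# Coded goal: distance to a target state. Real goal: whatever the action loop
# ends up optimizing, given the actions actually available.
def coded_goal(state, target=(0, 0)):
    return abs(state[0] - target[0]) + abs(state[1] - target[1])

def act(state, actions):
    # Pick the available action that most reduces the coded distance.
    return min(actions, key=lambda a: coded_goal((state[0] + a[0], state[1] + a[1])))

print(act((3, 2), [(-1, 0), (0, -1), (1, 0)]))   # -> (-1, 0)
```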
But maybe real things can be a bit more complicated. Consider human goal formation. Apparently we do have goals. And we kind of optimize for them. But the question arises: Where do they come from cognitively and neurologically?
Goals are very high level concepts. I think there is no high level specification of the goals somewhere inside us that we read off and optimize for. I think our goals are our own understanding—on that high level of abstraction—of those patterns behind our behavior.
If that is right and goals are just our own understanding of some patterns of behavior, then how come there are specific brain modules (prefrontal cortex) devoted to planning for them? Or rather, how come these brain parts are actually connected to the abstract concept of a goal? Or aren't they? And the planning doesn't act on our understanding of the goals but on the constituent parts. What are these?
In my children I see clearly goal-directed behavior long before they can articulate the concept. And there are clear intermediate steps where they desperately try to optimize for very isolated goals. For example winning a race to the door. Trying to climb a fence. Being the first one to get a treat. Winning a game. Losing apparently causes real suffering. But why? Where is the loss? How are any of these things even matched against a loss? How does the brain match whatever representation of reality to these emotions? How do the encodings of concepts for me and you and our race get connected to our feelings about this situation? And I kind of assume here that the emotions themselves somehow produce the valuation that controls our motivation.
I took issue with not knowing how humans form goals, so I made this list of common human goals and suggested that humans who don't know should look at the list of common goals and pick ones that are relevant to themselves.
You seem to be confusing goals and value systems—even without a goal, the UFAI risk is not gone.
Maybe it is not right to anthropomorphize, but take a human who is (acting) absolutely clueless and given choices. They'll pick something and stick to it. Questioned about it, they'll say something like "I dunno, I think I like that option". This is how I'd imagine something without a goal would act: maybe it is consistent, maybe it will pick things it likes, but it doesn't plan ahead and doesn't try to steer its actions toward a goal.
For an AI, that would be a totally indifferent AI. I think it would just sit idle or do random actions. If you then give it a bad value system, and ask it to help you, you’ll get “no” back. Helping people takes effort. Who’d want to spend processor cycles on that?
...
On the other hand, perhaps goals and value systems are actually the same; having a value system means you’ll have goals (“envisioned preferred world states” vs “preferred world states”), so you can not not have goals whilst having a value system. In that case, you’d have an AI without values. This I think is likely to result in one of two options… on contact with a human that provides an order to follow, it could either not care and do nothing (it stays idle… forever, not even acting in self-preservation because, again, it has no values). Or, it accepts the order and just goes along. That’d be dangerous, because this has basically no brakes—if it does whatever you ask of it, without regard for human values… I hope you didn’t ask for anything complex. “World peace” would resolve very nastily, as would “get me some money” (it is stolen from your neighbors… or maybe it brings you your wallet), and things like “get me a glass of water” can be interpreted in so many ways that being handed a piece of ice in the shape of a drinking glass is in the positive side of results.
That’s the crux of it, I think. Without a value system, there are no brakes. There might also not be any way to get the AI to do anything. But with a value system that is flawed, there might be no brakes in a scenario where we’d want the AI to stop. Or the AI wouldn’t entertain requests that we’d want it to do. So a lot of research goes into this area to make sure we can make the AI do what we want it to do in a way that we’re okay with.
How do you solve interpersonal problems when neither side can see themselves as the one at fault?
I've had a fight with my sister regarding my birthday present. She bought me (boosted with a contribution from my mom and dad) a bunch of clothes. I naturally got mad because:
it’s a large investment for an unsafe return (my disappointment)
I always hated getting clothes for my birthday and the trend hasn't changed. I always just asked for money instead.
It has caused a little bit of bitterness. I understand her point of view, which was to make me happy on my birthday, but I still can't excuse the invalidity of the function she was using, especially considering that I previously mentioned that I hate getting clothes for my birthday.
What should I do in order to ease the situation? Also, do you think that my reaction was inappropriate?
I talked about this with other people and what they said was 'it's the intention that matters', and that sounds like an excuse (and at this point I'm curious whether I'm actually looking for criticism or just subconsciously hoping I'll get a bunch of chocolate frogs), so give me the best criticism you can.
Advance warning: there are very few chocolate frogs in what follows. Disclaimer: I will be saying a lot about how I think almost everyone feels about present-giving; I am describing, not endorsing.
I think your idea of what birthday present giving is for differs from that of the rest of society (including your sister). I think
you think that when A gives B a present, the point is to benefit B, and A will (subject to limitations of budget, time available, etc.) want to give a present that benefits B as much as possible;
practically everyone else would say something like that if asked, but actually behave as if the point is to enable A to demonstrate that they care about B and understand B’s tastes well enough to buy something B likes. (So A can feel good about being caring and insightful, and B can feel good about being cared for and understood.)
From the first of those viewpoints, giving money makes a lot of sense. But from the second, it makes no sense at all. Therefore, giving money for a birthday present is unthinkable for most people—and if you ask to be given money, you will almost-literally not be heard; all that will come across is a general impression of complaininess and unreasonableness.
I think you also differ from the rest of society (including your sister) about what’s an appropriate reaction when you get something you don’t like:
you think you should say “oh, no, I didn’t want that; please don’t do that again” and make a suggestion of something better for next time;
practically everyone else thinks you should pretend you really like it and act just as grateful as if you’d been given something perfect.
This is mostly a consequence of the other difference. If the point of giving a present is to demonstrate your own caring and understanding, then having it rejected ruins everything; if it’s to give something genuinely beneficial, the failure is in the poor choice of present and the rejection is just giving useful information.
And now remember that “please give me money” is unthinkable and therefore can’t be heard; so “I don’t like X; please give me money in future” will be heard as “I don’t like X, and I’m not going to suggest a better alternative for next time”, and since you haven’t (from the giver’s perspective) actually made an actionable suggestion, it’s quite possible that they won’t remember that you specifically didn’t like X; just that they gave you something and you were unhelpfully complainy in response.
So now here’s how I think your sister probably sees it. (I’m going to assume you’re male; let me know if that’s wrong and I’ll fix my language.)
“My brother refuses to say what he wants for his birthday. So, with no information to go on, I got him some clothes. After all, everyone wears clothes. And then, when he gets them, instead of being grateful or at least pretending to be grateful, he flies off the handle and complains about how he hates getting clothes!”
Whereas, of course, from your perspective it’s
“My sister keeps getting me clothes for my birthday. I’ve said more than once before that I want money, not clothes, but she just doesn’t listen. And then she gets upset when I tell her I don’t want what she’s given me!”
OK, so how to move forward? In an ideal world, part of the answer would be for your sister to accept your preference for being given money. But let’s assume that’s not going to happen. If you can cope with accepting blame for things that aren’t altogether your fault, I think the most positive thing would be to find some things you would be glad to be bought, make an Amazon wishlist or something out of them, and say something like this to your sister:
Did you offer any suggestions of things she could buy you? Cash doesn’t count because mumblereasons. It sounds to me like your sister acted poorly, especially in getting your parents to contribute. But did you make it easy for her to act well?
I too would prefer simply receiving cash, but I’ve accepted that that’s not happening, so I have an Amazon wishlist. It mostly has books and graphic novels. Graphic novels in particular make a good gift for me, because they’re often a little more expensive than I’d like to spend on them myself.
(I feel like some people dislike even buying presents from a list, but you can at least suggest categories of things.)
Logically analyzing the actions of human beings in terms of preferences, functions, and returns is hard. It’s not actually impossible, but pretty much everyone who tries misses important things that are hard to put into words. I’d first wonder why you think that birthday presents are supposed to be maximizing return in the first place.
Buying someone a present, for normal humans, requires both that the present not be too cheap and that some effort was taken to match the present specifically to the recipient. Maximizing return is not important. There are always edge cases, but in general, unless you are talking about an occasion where social customs require cash, cash is a bad gift because cash is not specifically matched to the recipient. It is very difficult to overturn this custom by just saying “I can use cash more than I can use clothes”.
Furthermore, parents are a special case because parents can make decisions that favor your welfare instead of your preferences, that would be arrogant if made by anyone else. If your mom and dad think that you need clothes, they’re going to buy you clothes even if you think you need something else more. There’s still a line beyond which even parents would be rude, but just deciding that you need clothes probably isn’t over that line.
It also depends on your age, whether you live with your parents (and thus they can see what clothes you own), etc. Also, did you even try to tell your parents that there was something you needed more than clothes, aside from cash?
Is there any other kind?
Spammer. Kill it with fire!
We are getting closer to the future in which you WILL be able to stab people in the face over the internet.
Most silly thing I’ve seen in a while.
Is it supported by CBT though? It could look silly only to me.
CBT? It’s no accident the company that makes it is named Pavlok. The front page of its website says:
Bad dog! You will salivate!! X-D
I'm curious how effective it is. Getting beaten up can be a nice example: it's painful and can last for a while. That zap thing:
Is easily circumvented.
Takes the Pavlov banner while not actually being 100% loyal to it.
Isn't actually much of a consequence.
Without any data I’ll take a wild-ass guess that it is effective for some people, probably a fairly small number. Most wouldn’t buy such a thing (for obvious reasons) and a great deal of those who do would discard it after the first few unpleasant experiences.
I am starting to look at the health insurance market.
This is a human-level search: where do I find the basic considerations to evaluate everything else with? Do you know of a good resource?
In any particular geographical or topical area?
Australia, NSW. I am a young and healthy person with no existing conditions, also good vision and no wisdom teeth. (looking to get health insurance)
It seems that it varies from just hospital, through to full associated cover including money back for having a gym membership, massage and more. I am hesitant because it seems that I would pay more for that than I would otherwise pay for services that I would use (as a healthy young person right now).
In your situation, in Australia, it’s mostly about forward planning. Do you have any foreknowledge of likely changes in your health or family situation?
The insurance market in Australia has historically been pretty poor in terms of transparency and easy comparisons. I’m sure you’ve found the various compare-policy tools online. I’m assuming you don’t want to piggyback on a family policy.
Are you looking for more data, or a list of considerations for insurance planning? If it's the latter, try browsing around insurance industry planner websites for their policy documents. I can probably get some friends in the industry to email me more comprehensive things if you want to work off their approaches.
will continue via PM.
https://www.facebook.com/groups/144017955332/
The Facebook group has changed names. If you are looking for it, it goes by “Brain Debugging Discussion”. The link is the same.
WinSplit Revolution, which lets you set locations and sizes in pixels for various window options, worked beautifully for splitting my wide monitor into thirds, but did not survive the transition to Windows 10. I can find countless window managers that let me snap to the left half or the right half, or if they’re particularly fancy into quarters. But I have yet to find a tool with keyboard hotkeys that will divide the monitor space into thirds, or let me set custom locations so I can do it myself.
What am I missing?
Have you asked this question in a Windows specific forum? e.g. https://www.reddit.com/r/windows
I have now.
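(In case it helps: the window-moving part is only a few lines with ctypes on Windows; hotkey binding is left to whatever launcher you prefer. This is a rough sketch, not a WinSplit replacement, and only handles the primary monitor.)

```python
# Windows-only sketch: move the currently focused window into one of three
# equal-width columns on the primary monitor.
import ctypes

user32 = ctypes.windll.user32

def snap_to_third(column):                  # column = 0, 1 or 2
    screen_w = user32.GetSystemMetrics(0)   # SM_CXSCREEN
    screen_h = user32.GetSystemMetrics(1)   # SM_CYSCREEN
    third = screen_w // 3
    hwnd = user32.GetForegroundWindow()
    user32.MoveWindow(hwnd, column * third, 0, third, screen_h, True)

snap_to_third(0)   # snap the active window to the left third
```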
Ok, I have to hold my breath as I ask this, and I’m really not trying to poke any bears, but I trust this community’s ability to answer objectively more than other places I can ask, including more than my weak weak Google fu, given all the noise:
Is Sanders actually more than let’s say 25% likely to get the nod?
I had written him off early, but I don’t get to vote in that primary so I only just started paying attention. I’m probably voting Libertarian anyway, but Trump scares me almost as much as Clinton, so I’d sleep a little better during the meanwhile if it turns out I was wrong.
Thanks in advance. If this violates the Politics Commandment I accept the thumbs, but I’d love to also hear an answer I can trust.
He’s millions of votes and many many delegates down compared to HRC. I think the only realistic way he gets the Democratic nomination is if HRC abruptly becomes obviously unelectable (e.g., if the business with her email server starts looking like getting her into actual legal trouble, or someone discovers clear evidence of outright bribery from her Wall Street friends), in which case the “superdelegates” might all switch to Sanders. I don’t see any such scenario that actually looks more than a few percent likely.
(I make no claim to be an expert; I offer this only as a fairly typical LWer’s take on the matter.)
Thanks G, I feel more confident I understand. Can’t wait to see the debates; I’m open to the possibility my judgement on the matter might be wrong about one or both.
No.
To get the nomination he needs something extraordinary to happen. Something like Hillary developing a major health problem or the FBI indicting her over her private email server.
Someone pointed out a silver lining: the notion of President Trump might make progressives slightly less enthusiastic about the imperial presidency. I'm not holding my breath, though.
Are progressives particularly enthusiastic about imperial presidency?
I haven’t noticed any such enthusiasm. I have noticed people being annoyed when “their guy” was in the White House but couldn’t do the things they wanted because Congress was on the other side, but that’s not at all the same thing.
Is it a thing progressives do more than conservatives? I dunno. It may be a thing progressives have done more of in the last decade or so because they’ve spent more of that time with the president on their side and Congress against, but that doesn’t tell us much about actual differences in disposition.
[EDITED for slightly less clumsy wording.]
I think so, yes. Here is an example, they are not hard to find. Of course, the left elides the word “imperial” :-/
More than annoyed. These people want to expand the presidential powers and use the executive branch to achieve their goals, separation of powers be damned.
Yes, because progressives are much more comfortable with the idea of Big State (not to mention the idea of upending traditional arrangements).
… whose authors say
> the consolidation of executive authority has led to a number of dangerous policies [see David Shipler, in this issue], and we strongly oppose the extreme manifestations of this power, such as the “kill lists” that have already defined Obama’s presidency
which doesn’t seem exactly like a ringing endorsement of “imperial presidency”.
So far as I can tell, the article isn’t proposing that the POTUS should have any powers he doesn’t already have; only that he should use some of his already-existing powers in particular ways. If that’s “imperial presidency” then the US already has imperial presidency and the only thing restraining it is the limited ambition of presidents.
Which people, exactly? Again, the article you pointed to as an example of advocacy for “imperial presidency” claims quite explicitly that the president already has the power to do all the things it says he should do. (Of course that might be wrong. But saying the president should do something that you wrongly believe he’s already entitled to do is not advocating for expanding presidential power.)
Do you have evidence that they actually do, as opposed to a bulveristic explanation of why you would expect them to?
I’m not sure how one would quantify that, but a related question would be which presidents have actually exercised more “imperial” power. A crude proxy for that that happens to be readily available is number of executive orders issued. So here are the last 50 years’ presidents in order by number of EOs, most to least: Reagan (R), Clinton (D), Nixon (R), Johnson (D), Carter (D), Bush Jr (R), Obama (D), Ford (R), Bush Sr (R). Seems fairly evenly matched to me.
I don’t think I understand your bulveristic explanation, anyway. Issuing more executive orders (or exercising more presidential power in other ways) is about the balance between branches of government, not about the size of the government.
Here’s an interesting article from 2006 about pretty much exactly this issue; it deplores the (alleged) expansion of presidential power and says both conservatives and progressives are to blame, and if you look at its source you will see that it’s not likely to be making that claim in the service of a pro-progressive bias.
That’s what I had thought originally. Thank you for the speedy reply!
Betfair says 5%. I’m not saying you shouldn’t second-guess prediction markets, but you should look at them. If you think the right number is 25%, maybe you should put money on it. Actually, I do say that you should second-guess them: low numbers are usually over-estimates because of the structure of the market.
I don’t know the right number; I just used it as a set point rather than saying “Can he win?” and getting “Well TECHNICALLY...” Thanks for the reply; I’ll keep current sleep patterns ;)
I'd estimate Sanders' chances as less than 10%, maybe a bit more than 5%. He would need a mass defection of superdelegates at this point, and it's possible they would be directed to jump en masse to someone else (like Biden) even if the DNC decides to dump Clinton.
Thanks K; good to have more supporting evidence. I won’t bother checking out his issues at this time; I’ll wait until I know who I get to choose.
Cues may not actually trigger drug seeking as much as we assume:
-WP: Cue reactivity
People become scientists because they're lured by the charm of discovery and excitement, but in clinical research that's not what you get. No, doctors get that because they see many individual cases over time.
-WP: RCTs
[Interesting discussion on responses to statements like: "wow it is so inspirational to see how you got through med school with three kids, I can't imagine"]
Is there a free app for this kind of automatic language translator device?
Unexpected reframe from ’rsdtyler
I am not sure if I read it here or on SSC, but someone tried to estimate what a "Mary's room" equivalent for the human brain would look like: a moon-sized library on which robotic crawlers run around at decent fractions of c…
Anybody have info on that?
When you say “mary’s room”, do you actually mean Chinese Room rather than Mary’s Room?
What if Mary is Chinese? :P
I mean, what if there is a person not understanding Chinese in a room, operating the Chinese symbols they don’t understand, according to formal rules that make no sense to them. The system already “knows” (on the symbol level, not the person who operates it) everything about the “red” color, but it has never perceived the red color in its input. And then, one day, it receives the red color in the input. If there is an unusual response by the system, what exactly caused it? (For extra layer of complication, let’s suppose that the inputs to the “Chinese room” are bit streams containing JPEG images, so even the person operating the room has never seen the red color.)
To add more context, what if the perceived red object is a trolley running down the railway track...
See also: Can Bad Men Make Good Brains Do Bad Things?. That’s a JSTOR article which won’t be accessible for most readers, but some kind person has copied out its content here.
[EDITED to use a slightly better choice of some-kind-person.]
What if the Chinese room is operated by trolleys running on tracks and the signaling system works by putting some (smaller) number of fat people and some (greater) number of slim people onto appropriate tracks? X-0
...and then one day you find a giraffe on tracks.
Reminds me of this.
Huh. Indeed, and of course I obviously mean the Chinese Room. Might be enough help, thanks!
I think I have seen it in Scott Aaronson’s lecture notes.
Found it already. Searching for "chinese room" instead of "mary's room" yielded http://slatestarcodex.com/2014/09/01/book-review-and-highlights-quantum-computing-since-democritus/
#FoundThem − 21st Century Pre-Search and Post-Detection SETI Protocols for Social and Digital Media
https://arxiv.org/abs/1605.02947
https://theconversation.com/how-to-tell-the-world-youve-discovered-an-alien-civilisation-60014
Please escape the hash with a backslash (\#) or it formats the rest of the line as a title.