Open thread, 23-29 June 2014
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one.
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
EDIT: I’ve removed this draft & posted a longer version incorporating some of the feedback here at http://lesswrong.com/lw/khd/confound_it_correlation_is_usually_not_causation/
I would prefer posts like that to stand on their own in discussion and not be posted in an open thread.
Hi gwern, it’s awesome you are grappling with these issues. Here are some rambling responses.
You might enjoy Sander Greenland’s essay here:
http://bayes.cs.ucla.edu/TRIBUTE/festschrift-complete.pdf
Sander can be pretty bleak!
I am not sure exactly what you mean, but I can think of a formalization where this is not hard to show. We say A “structurally causes” B in a DAG G if and only if there is a directed path from A to B in G. We say A is “structurally dependent” with B in a DAG G if and only if there is a marginal d-connecting path from A to B in G.
A marginal d-connecting path between two nodes is a path with no consecutive edges of the form * → * ← * (that is, no colliders on the path). In other words all directed paths are marginal d-connecting but the opposite isn’t true.
The justification for this definition is that if A “structurally causes” B in a DAG G, then if we were to intervene on A, we would observe B change (but not vice versa) in “most” distributions that arise from causal structures consistent with G. Similarly, if A and B are “structurally dependent” in a DAG G, then in “most” distributions consistent with G, A and B would be marginally dependent (e.g. what you probably mean when you say ‘correlations are there’).
I qualify with “most” because we cannot simultaneously represent dependences and independences by a graph, so we have to choose. People have chosen to represent independences. That is, if in a DAG G some arrow is missing, then in any distribution (causal structure) consistent with G, there is some sort of independence (missing effect). But if the arrow is not missing we cannot say anything. Maybe there is dependence, maybe there is independence. An arrow may be present in G, and there may still be independence in a distribution consistent with G. We call such distributions “unfaithful” to G. If we pick distributions consistent with G randomly, we are unlikely to hit on unfaithful ones (the subset of distributions consistent with G that are unfaithful to G has measure zero), but Nature does not pick randomly, so unfaithful distributions are a worry. They may arise for systematic reasons (maybe equilibrium of a feedback process in bio?)
If you accept the above definition, then clearly for a DAG with n vertices, the number of pairwise structural dependence relationships is an upper bound on the number of pairwise structural causal relationships. I am not aware of anyone having worked out the exact combinatorics here, but it’s clear there are many many more paths for structural dependence than paths for structural causality.
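For intuition, here is a small sketch (code and names are mine, definitions per the above). It uses the fact that a marginal d-connecting path, having no colliders, is a trek, so two nodes are structurally dependent exactly when they share a common ancestor (counting a node as its own ancestor):

```python
from itertools import combinations

def ancestors(dag, node):
    """All ancestors of node, including node itself. dag: {child: set(parents)}."""
    seen, stack = set(), [node]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(dag.get(n, ()))
    return seen

def count_relations(dag):
    """Count (causal, dependent) pairs: directed path vs. shared ancestor."""
    nodes = set(dag) | {p for ps in dag.values() for p in ps}
    causal = dep = 0
    for a, b in combinations(sorted(nodes), 2):
        anc_a, anc_b = ancestors(dag, a), ancestors(dag, b)
        if b in anc_a or a in anc_b:   # directed path one way or the other
            causal += 1
        if anc_a & anc_b:              # common ancestor => marginal dependence
            dep += 1
    return causal, dep

# A <- C -> B plus A -> D: every pair is dependent, but only 4 pairs are causal.
dag = {"A": {"C"}, "B": {"C"}, "D": {"A"}, "C": set()}
print(count_relations(dag))  # (4, 6)
```

Even in this tiny example the dependence count strictly exceeds the causal count, and the gap widens quickly in larger graphs.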
But what you actually want is not a DAG with n vertices, but another type of graph with n vertices. The “Universe DAG” has a lot of vertices, but what we actually observe is a very small subset of these vertices, and we marginalize over the rest. The trouble is, if you start with a distribution that is consistent with a DAG, and you marginalize over some things, you may end up with a distribution that isn’t well represented by a DAG. Or “DAG models aren’t closed under marginalization.”
That is, if our DAG is A → B ← H → C ← D, and we marginalize over H because we do not observe H, what we get is a distribution where no DAG can properly represent all conditional independences. We need another kind of graph.
In fact, people have come up with a mixed graph (containing → arrows and <-> arrows) to represent margins of DAGs. Here → means the same as in a causal DAG, but <-> means “there is some sort of common cause/confounder that we don’t want to explicitly write down.” Note: <-> is not a correlative arrow, it is still encoding something causal (the presence of a hidden common cause or causes). I am being loose here—in fact it is the absence of arrows that means things, not the presence.
I do a lot of work on these kinds of graphs, because these graphs are the sensible representation of data we typically get—drawn from a marginal of a joint distribution consistent with a big unknown DAG.
But the combinatorics work out the same in these graphs—the number of marginal d-connected paths is much bigger than the number of directed paths. This is probably the source of your intuition. Of course what often happens is you do have a (weak) causal link between A and B, but a much stronger non-causal link between A and B through an unobserved common parent. So the causal link is hard to find without “tricks.”
The dependence that arises from a conditioned common effect (simplest case A → [C] ← B) that people have brought up does arise in practice, usually if your samples aren’t independent. Typical case: phone surveys are only administered to people with phones. Or case control studies for rare diseases need to gather one arm from people who are actually already sick (called “outcome dependent sampling.”)
Phil Dawid works with DAG models that are partially causal and partially statistical. But I think we should first be very very clear on exactly what a statistical DAG model is, and what a causal DAG model is, and how they are different. Then we could start combining without confusion!
If you have a prior over DAG/mixed graph structures because you are Bayesian, you can obviously have beliefs about a causal relationship between A and B vs a dependent relationship between A and B, and update your beliefs based on evidence, etc.. Bayesian reasoning about causality does involve saying at some point “I have an assumption that is letting me draw causal conclusions from a fact I observed about a joint distribution,” which is not a trivial step (this is not unique to B of course—anyone who wants to do causality from observational data has to deal with this).
Pearl has this hypothesis that a lot of probabilistic fallacies/paradoxes/biases are due to the fact that causal and not probabilistic relationships are what our brain natively thinks about. So e.g. Simpson’s paradox is surprising because we intuitively think of a conditional distribution (where conditioning can change anything!) as a kind of “interventional distribution” (no Simpson’s type reversal under interventions: http://ftp.cs.ucla.edu/pub/stat_ser/r414.pdf).
This hypothesis would claim that people who haven’t looked into the math just interpret statements about conditional probabilities as about “interventional probabilities” (or whatever their intuitive analogue of a causal thing is).
Good comment—upvoted. Just a minor question:
You probably did not intend to imply that this was an arbitrary choice, but it would still be interesting to hear your thoughts on it. It seems to me that the choice to represent independences by missing arrows was necessary. If they had instead chosen to represent dependences by present arrows, I don’t see how the graphs would be useful for causal inference.
If missing arrows represent independences and the backdoor criterion holds, this is interpreted as “for all distributions that are consistent with the model, there is no confounding”. This is clearly very useful. If arrows represented dependences, it would instead be interpreted as “For at least one distribution that is consistent with the DAG model, there is no confounding”. This is not useful to the investigator.
Since unconfoundedness is an independence-relation, it is not clear to me how graphs that encode dependence-relations would be useful. Can you think of a graphical criterion for unconfoundedness in dependence graphs? Or would dependence graphs be useful for a different purpose?
Hi, thanks for this. I agree that this choice was not arbitrary at all!
There are a few related reasons why it was made.
(a) Pearl wisely noted that it is independences that we exploit for things like propagating beliefs around a sparse graph in polynomial time. When he was still arguing for the use of probability in AI, people in AI were still not fully on board, because they thought that to probabilistically reason about n binary variables we need a 2^n table for the joint, which is a non-starter (of course statisticians were on board w/ probability for hundreds of years even though they didn’t have computers—their solution was to use clever parametric models. In some sense Bayesian networks are just another kind of clever parametric model that finally penetrated the AI culture in the late 80s).
(b) We can define statistical (causal) models by either independences or dependences, but there is a lack of symmetry here that the symmetry of the “presence or absence of edges in a graph” masks. An independence is about a small part of the parameter space. That is, a model defined by an independence will correspond to a manifold of smaller dimension generally that sits in a space corresponding to a saturated model (no constraints). A model defined by dependences will just be that same space with a “small part” missing. Lowering dimension in a model is really nice in stats for a number of reasons.
(c) While conceivably we might be interested in a presence of a causal effect more than an absence of a causal effect, you are absolutely right that generally assumptions that allow us to equate a causal effect with some functional of observed data take the form of equality constraints (e.g. “independences in something.”) So it is much more useful to represent that even if we care about the presence of an effect at the end of the day. We can just see how far from null the final effect number is—we don’t need a graphical representation. However a graphical representation for assumptions we are exploiting to get the effect as a functional of observed data is very handy—this is what eventually led Jin Tian to his awesome identification algorithm on graphs.
(d) There is an interesting logical structure to conditional independence, e.g. Phil Dawid’s graphoid axioms. There is something like that for dependences (Armstrong’s axioms for functional dependence in db theory?) but the structure isn’t as rich.
edit: there are actually only two semi-graphoids : one for symmetry and one for chain rule.
edit^2: graphoids are not complete (because conditional independence is actually kind of a nasty relation). But at least it’s a ternary relation. There are far worse dragons in the cave of “equality constraints.”
Thanks for reading.
I tried to read that, but I think I didn’t understand too much of it or its connection to this topic. I’ll save that whole festschrift for later, there were some interesting titles in the table of contents.
I agree I did sort of conflate causal networks and Bayesian networks in general… I didn’t realize there was no clean way of having both at the same time.
It might help if I describe a concrete way to test my claim using just causal networks: generate a randomly connected causal network with x nodes and y arrows, where each arrow has some random noise in it; count how many pairs of nodes are in a causal relationship; then, 1000 times, initialize the root nodes to random values, generate a possible state of the network, and store the values for each node; count how many pairwise correlations there are between all the nodes using the 1000 samples (using an appropriate significance test & alpha if one wants); divide the # of causal relationships by the # of correlations and store the result; then return to the beginning and repeat with x+1 nodes and y+1 arrows. As one graphs each x against its respective estimated fraction, does the fraction head toward 0 as x increases? My thesis is it does.
Interesting, and it reminds me of what happens in physics classes: people learn how to memorize teachers’ passwords, but go on thinking in folk-Aristotelian physics fashion, as revealed by simple multiple-choice tests designed to home in on the appealing folk-physics misconceptions vs ‘unnatural’ Newtonian mechanics. That’s a plausible explanation, but I wonder if anyone has established more directly that people really do reason causally even when they know they’re not supposed to? Offhand, it doesn’t really sound like any bias I can think of. It shouldn’t be too hard to develop such a test for teachers of causality material: just take common student misconceptions or dead ends and refine them into a multiple-choice test. I’d bet stats 101 courses have as many problems as intro physics courses.
That seems to make sense to me.
I’m not sure about marginal dependence.
I’m afraid I don’t understand you here. If we draw an arrow from A to B, either as a causal or Bayesian net, because we’ve observed correlation or causation (maybe we actually randomized A for once), how can there not be a relationship in any underlying reality and there actually be an ‘independence’ and the graph be ‘unfaithful’?
Anyway, it seems that either way, there might be something to this idea. I’ll keep it in mind for the future.
This post is a good example of why LW is dying. Specifically, that it was posted as a comment to a garbage-collector thread in the second-class area. Something is horribly wrong with the selection mechanism for what gets on the front page.
Underconfidence is a sin. This is specifically about gwern’s calibration. (EDIT: or his preferences)
Not everyone is in the same situation. I mean, we recently had an article disproving theory of relativity posted in Main (later moved to Discussion). Texts with less value that gwern’s comment do regularly get posted as articles. So it’s not like everyone is afraid to post anything. Some people should update towards posting articles, some people should update towards trying their ideas in Open Thread first. Maybe they need a little nudge from outside first.
How about adding this text to the Open Thread introduction?
As a rule of thumb, if your top-level comment here received 15 or more karma, you probably should repost it as a separate article—either in the same form, or updated. (And probably post texts of similar quality directly as articles in the future.)
And a more courageous idea: Make a script which collects all top-level Open Thread articles with 15 and more karma from all Open Threads in history and sends their authors a message (one message per author with links to all such comments, to prevent inbox flood) that they should consider posting this as an article.
More generally, I’d say that if it’s longer than about four paragraphs, it’s probably better suited as its own article than as a comment.
Let’s talk about the fact that the top two comments on a very nice contribution in the open thread is about how this is the wrong place for the post, or how it is why LW is dying. Actually let’s not talk about that.
[deleted]
I think that deserves its own post.
It would be interesting to try to come up with good priors for random causal networks.
A simple informal reason for giving a small probability to “A causes B” could be this:
For any fixed values A and B, there is only one explanation “A causes B”, one explanation “B causes A”, and many explanations “C causes A and B” (for many different values of C). If we split 100% between all these explanations, the last group gets most of the probability mass.
And as you said, the more complex a given field is, the more realistic values of C there are.
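The counting argument can be made concrete with a toy flat prior (the hypothesis space and the flat prior are my own simplification):

```python
# Hypotheses: "A causes B", "B causes A", and one "C_i causes both A and B"
# for each of num_confounders candidate confounders; a flat prior over these
# leaves direct causation with only a small share of the probability mass.
def prior_a_causes_b(num_confounders):
    return 1 / (2 + num_confounders)

for n in (1, 10, 100):
    print(n, round(prior_a_causes_b(n), 3))  # shrinks as candidate Cs multiply
```

In a richer field there are more plausible Cs, so the prior on any one direct causal story gets correspondingly smaller.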
You are leaving out one type of causation. It is possible that you are conditioning on some common effect C of A and B.
You may argue that this does not actually give a correlation between A and B, it only gives a correlation between A given C and B given C. However, in real life, there will always be things you condition on whenever you collect data, so you cannot completely remove this possibility.
This would be the same sort of selection-causing-pseudo-correlations that Yvain discusses in http://slatestarcodex.com/2014/03/01/searching-for-one-sided-tradeoffs/ ? Hm… I think I would lump that in with my response to Nancy (‘yes, we rarely have huge _n_s and can wish away sampling error and yes our data collection is usually biased or conditioned in ways we don’t know but let’s ignore that to look at the underlying stuff’).
Even if we promoted it to the level of the other 3 causation patterns, does that change any of my arguments? It seems like another way of producing correlations-which-aren’t-due-to-direct-causation just emphasizes the point.
Yes, I am talking about the exact same thing that Yvain is talking about there.
So, I think any time you observe a correlation, it is because of one of those 4 causation patterns, so even if the fourth does not show up as regularly as the other 3, you should include it for completeness.
Regarding the psychology of why people overestimate the correlation-causation link, I was just recently reading this, and something vaguely relevant struck my eye:
Say you do this, and you find that about 10% of all correlations in a dataset are shown to have a causal link. Can you then look for a correlation between certain aspects of a correlation (such as coefficient, field of study) and those correlations which are causal?
Building on this, you might establish something like “correlations at .9 are more likely to be causal than correlations at .7” and establish a causal mechanism for this. Alternatively, you might find that “correlations from the field of farkology are more often causal than correlations from spleen medicine”, and find a causal explanation for this.
Part or all of this explanation might involve the size of the causal network. It could well be that both correlation coefficients and field of study are just proxy variables to describe the size of a network, and that’s the only important factor in the ratio of correlations to causal links, but it might be the case that there is more to it.
This could lead to quite a bit of trouble in academic literature, as measures of what evidence a correlation is for causation will become dependent on a set of variables about the context you’re working in, and this could potentially be gamed. In fact, that could be the case even with gwern’s original proposition—claiming you’re working with a small causal net could be enough to lend strong evidence to a causal claim based on correlation, and it’s only by having someone point out that your causal net is lacking that this evidence can have its weighting adjusted.
All these thoughts are sketchy outlines of an extension of what gwern’s brought up. More considered comment is welcome.
I would be very surprised if this was not the case. Different fields already use different cutoffs for statistical-significance (you might get away with p<0.05 in psychology, but particle physics likes its five-sigmas, and in genomics the cutoff will be hundreds or thousands of times smaller and vary heavily based on what exactly you’re analyzing) and likewise have different expectations for effect sizes (psychology expects large effects, medicine expects medium effects, and genomics expects very small effects; eg for genetic influence on IQ, any claim of an allele with an effect larger than d=0.06 should be greeted with surprise and alarm).
I think that there is going to be a relationship, but it’ll be hard to describe precisely. Suppose we correlated A and B and found r=0.9. This is a large correlation by most fields’ standards, and it would seem to put constraints on the causal net that A and B are part of: either there aren’t many nodes ‘in between’ A and B (because each node is a chance for the correlation to diminish and be lost in influence from all the neighboring nodes, with their own connections) or the nodes are powerfully correlated so the net correlation can still be as high as 0.9.
To a large extent, this is already the case (see above). People justify results with relation to implicit models and supposed analysis procedures (‘we did a t-test so we are entitled to declare p<0.05 statistically-significant (never mind all the tweaks we tried and interim tests while collecting data)’). The existing defaults aren’t usually well-justified: for example, why does psychology use 0.05 rather than 0.10 or 0.01? ‘Surely God loves p=0.06 almost as much as he loves p=0.05’, as one line goes.
This is a good point, and leads to what might be an interesting use of the experimental approach of linking correlations to causation: gauging whether the heuristics currently in use in a field are at a suitable level/reflect the degree to which correlation is evidence for causation.
If you were to find, for example, that physics is churning out huge sigmas where it doesn’t really need to, or psychology really really needs to up its standards of evidence (not that that in itself would be a surprising result), those could be very interesting results.
Of course, to run these experiments you need large samples of well-researched correlations you can easily and objectively test for causality, from all the fields you’re looking at, which is no small requirement.
If the falling price of gene sequencing lets us determine a lot about how genes influence human behavior, social scientists, I predict, will get a lot better at figuring out the causal effects of social programs.
Once social scientists get past their taboo against genetic explanations.
Better genetic analysis will make it easier to discuss politically incorrect topics, because rather than talking about IQ you could discuss complex gene clusters characterized by hard-to-understand mathematical correlations. And I strongly suspect that with a better understanding of genetics, race would become much less significant in statistical analysis, because after you account for genetics you would gain little statistical significance by directly adding race into a regression (i.e., if gene X does something important and 80% of Asians but only 5% of whites have the gene, then without genetic analysis race is important, but after you know who has the gene, race isn’t statistically significant).
There are at least two more possibilities: A and B are unrelated, but happen to be in sync for a while, and the data was collected wrong in some way.
I’m choosing to ignore that possibility to clarify the exposition of what I think is going on. Problems like that are what I’m referring to when I preface:
Even if we had enormous clean datasets showing correlations to whatever level of statistical-significance you please, you still can’t spin the straw of correlation into the gold of causation, and the question remains why.
You could say that “A and B happen to be in sync for a while” is possibility 3, where C is the passage of time. (Unless by “happen to be in sync for a while” you mean that they appear to be correlated because of a fluke.)
To generalize, it’s also possible that you’re observing survivor effects, i.e., both A and not B (or B and not A) cause the data to appear in your data set.
Test reply please ignore
This seems deserving of at the very least a full post, possibly in main.
..oh, that wasn’t a delete button. Not actually retracted.
When it comes to the replication of those breakthrough cancer results I think you can’t forget publication bias. A lab runs an experiment 6 times. If it produces results in one of those trials they write a paper.
I hate cauliflower! But a few days ago a restaurant gave me some pickled cauliflower with my salmon dinner. Not recognizing the cauliflower for what it was, I tried and greatly enjoyed the vegetable. I liked it so much that I asked my waitress what the food was so I could be sure to get more of it in the future. Alas, as soon as the waitress told me that the food was cauliflower, I experienced a horrible nauseating after-taste from having consumed the cauliflower, reinforcing my disgust of this vile weed.
That’s an interesting example of the power of the feed-forward branch in the memory-perception loop.
I remember having the same negative association with cauliflower, although not as strong as yours: could it be because of the strong sulphurous smell? Or the very bland taste and consistency? Surely the coupling of these two factors doesn’t help.
I remember my negative association dissolved one day when I tried deep-fried cauliflower; it tasted delicious. Only a lot later would I discover the equally delicious pickled caulis.
Culinary-wise, the bland taste is perfect to carry the salty taste of the batter when deep-fried, or the acidic taste when pickled.
Not unexpectedly, the mysterious mass-downvoter was lying low during the recent discussion of the issue, but is back at it now.
The first LessWrong arch-villain.
Nah, I can think of at least one previous villain.
Did he or she have as cool a moniker as “the mysterious mass-downvoter”?
Even cooler, and they would cause moderators to quake at the mere mention of their name.
What did Roko think of them?
FURIOUSLY DOWNVOTING in UNQUENCHABLE NERD RAGE.
Y’know, there’s someone who literally can’t think of anything better to do for the cause of existential risk. CLICK! CLICK! AHAHAHA! CLICK!!
A session I’m planning on organising for one of the London meetups in August: filling procedural knowledge gaps.
A motivating example is setting up investment in an index fund. At our last meetup, there was some division in the group between people who’d already done this and found it very straightforward, vs. others who’d started looking into it but found it prohibitively difficult. The blame was squarely placed on procedural knowledge gaps, and the proposed solution was someone assembling a 15-minute talk on index funds, along with a step-by-step guide to explain all the actions required to get from not having an index-fund product to having one.
A few other examples and suggestions were proposed, but I’d like to invite suggestions from the LW-readership. Is there anything Less Wrong has convinced you it’s a good idea to do, but for which you don’t know the next actionable step in doing?
Some areas that I have gaps in that I would like to fill with actionable steps:
Networking
Finding a reputable lawyer (for just in case)
Signing up for cryonics
Index funds is a great one
Oh, please make it also a text. With pictures (activating near mode thinking).
Alternatively, a LW wiki page. (Then it’s okay without pictures.)
I’ve actually volunteered to give the index fund talk. There are quite a few UK-specific points that don’t apply in other jurisdictions, so I would be hesitant to offer generic instructions for an international audience. For example, I understand how ISAs (a tax-efficient investment process) work in the UK, but I don’t know what the equivalent would be in other countries, or even if they exist.
This also holds for various other subjects with legal / geographic implications. I understand signing up for cryonics is a very different process if you’re outside of the US.
On wiki, the instructions for people from other countries could be added later.
Apologies for political content. It shouldn’t hurt too much.
Many of you are probably familiar with Upworthy, which uses a system of A/B testing to find the most link-baity headlines for left-wing/progressive social media content before inflicting them on the world. My typical reaction to such content was “this would seem amazingly insightful if you hadn’t thought about the issue for ten minutes already”. Last year, shortly before blocking all such posts on my Facebook feed, I remember wondering what the right-wing equivalent would be.
I’ve recently become aware of Britain First, a British Nationalist political group. They have a Facebook page which is the result of a deliberate, professional social media campaign, luring people in with schmaltzy motivational images and kitten pictures before sprinkling on a bit of “send the darkies back”. It’s not on the same level of technical sophistication or scale as Upworthy, but neither is its intended audience (cf. my mum and dad).
This makes me wonder, where is the clever social media presence of all the sane stuff? There’s a recognisable cluster of “actually, I think you’ll find it’s a bit more complicated than that” to which Less Wrong belongs, but I rarely see it being endorsed in the same way as either of the two above examples. Is it just too difficult to spin a populist narrative for this sort of material? Is there an underlying cooperation problem? Or is it that there hasn’t been a concerted effort yet?
tl;dr: can we raise the sanity waterline with clever use of social media?
Have you heard the slogan, “The truth is too complicated to fit on a bumper sticker”? I’d wholeheartedly endorse that if its brevity didn’t make me suspicious.
Some truths, like the sunk cost fallacy or the value of sensible communication, might be simple enough to fit in a brief funny video.
Firstly, it is an outrageous slur to think that the right-wing equivalent of Upworthy is the BNP. The right-wing equivalent is, of course, the Daily Mail, which has so mastered the art of click-baitery as to become the most read newspaper in the world (as of 2013 - may no longer be true).
Secondly, the social media cluster to which Less Wrong belongs is, obviously, Salon, Slate, and works of that ilk. Yes, Caliban, it’s true.
And no, you cannot raise the sanity waterline with social media. All you will get is (as the joke goes) people enthusiastically retweeting a study that finally proves what they’d always believed about confirmation bias.
This seems obviously untrue. Salon and Slate don’t seem to have any intellectual or philosophical content other than “left good, right bad.” Salon/Slate are also borderline Buzzfeed clones at this point. Laughably bad.
Slate is marginally better than that.
There have been some attempts to make “rationalist memes”, but they’re bland for people who are already aspiring rationalists and not funny enough for those that aren’t.
The best thing of the sort that I’ve seen has been Pretty Rational and that has stopped updating for quite some time now.
Yes. Populist narratives are aimed at people with the attention span of a goldfish who like to get the feeling of them having been right all along within the first few seconds.
“A bit more complicated than that” is a non-starter.
Like “perpetual motion machine”, the conjunction of “raise sanity waterline” and “use of social media” in the same sentence is immediately activating my red flags for “oxymoron; doomed to failure”.
For what it’s worth, I’m seeing some people getting trained to use Snopes. On the other hand, it wasn’t a social media campaign that did it. It was getting repeatedly pointed at Snopes when they posted low-quality links.
Critical thinking exercise for anyone who wants to try it! I wasted an evening on this, so I thought I’d share it… Here is a Wikipedia article: https://en.wikipedia.org/wiki/Bicycle_face It looks pretty good, well-cited, and is about something amusing. However, it needs to be burned with fire; why?
(Please avoid looking at the article history, and respond in ROT13. If you saw any of the #lesswrong conversation on this article, please refrain from commenting or giving hints.)
V sbyybjrq n srj bs gur fbheprf naq vg frrzrq gb or n fznyy zvabevgl bs ohgguheg qbpgbef engure guna gur “zrqvpny rfgnoyvfuzrag”. Vf gung gur chapuyvar?
Pybfr. Vg’f npghnyyl jbefr guna gung: sbe ‘fznyy zvabevgl’ ernq ‘dhvgr cbffvoyl n fvatyr npghny qbpgbe jub qvqa’g rira pner gung zhpu nobhg vg’, naq sbe ‘frkvfg ercerffvba’ gel ‘vg jnfa’g traqrerq hagvy srzvavfgf n praghel yngre qrpvqrq gb qb fbzr zlgu-znxvat’. Gur ragver negvpyr vf fubg guebhtu jvgu OF hafhccbegrq ol gur cevznel fbheprf. Sbe rknzcyr, gurer’f mreb rivqrapr ovplpyr snpr qvq nalguvat gb qvfpbhentr jbzra sebz ovxvat, gubhtu gur negvpyr cebpynvzf vg unq n znwbe rssrpg. Vg ybbxf yvxr gur pbaprcg jnf pbasvarq gb bar be gjb bss-unaq zragvbaf naq fbzr arjfcncre tbffvc/pyvccvat pbyhzaf. Jvxvcrqvn jbegul? Jryy, ‘jub jubz’...
EDIT: Sbe n zber qrgnvyrq cbvag-ol-cbvag pevgvdhr, frr zl pbzzrag va uggcf://cyhf.tbbtyr.pbz/h/0/103530621949492999968/cbfgf/vK58f8UkL5x
Avpr. V’q abgr gung gur ebyr bs “srzvavfgf” va guvf zlgu-znxvat vf fbzrjung nanybtbhf (gubhtu boivbhfyl abg cresrpgyl fb) gb gur ebyr bs “gur zrqvpny rfgnoyvfuzrag” va cebzhytngvat gur vqrn bs ovplpyr snpr va gur svefg cynpr.
Lrnu, ohg V’q fnl gung gur srzvavfgf chfuvat guvf zlgu ner n zber prageny rknzcyr bs srzvavfgf guna gur ‘zrqvpny rfgnoyvfuzrag’ bs ovplpyr-snpr vf n prageny rknzcyr bs gur zrqvpny rfgnoyvfuzrag. Gurer jnf ab Ivpgbevna raplpybcrqvn hfvat vg nf na rknzcyr.
Gung fnvq, zvfvagrecergngvbaf bs gur cnfg ner enzcnag. Juvyr nethvat nobhg ovplpyr snpr ba T+, V qvfpbirerq nabgure snyfvsvpngvba bs gur cnfg: va gubfr yvfgf bs snvyrq crffvzvfgvp cerqvpgvbaf nobhg grpuabybtl, n pbzzba ragel vf gur dhbgr “Envy geniry ng uvtu fcrrq vf abg cbffvoyr orpnhfr cnffratref, hanoyr gb oerngur, jbhyq qvr bs nfculkvn”—rkprcg nf sne nf V pna gryy, abg bayl qvq gur fcrnxre abg fnl vg, gur dhbgr qvqa’g rira rkvfg hagvy 1980: uggcf://ra.jvxvcrqvn.bet/jvxv/Gnyx:Qvbalfvhf_Yneqare#Qvq_ur_npghnyyl_fnl_gung.3S Bl irl.
Does anyone else experience the following problem:
Something reminds you of an event which happened long ago; the event was annoying or created some other negative emotion; and you again feel that annoyance or negative emotion. I get these “annoyance flashbacks” now and then and it seems like they are more frequent now that I calorie restrict.
Any good ideas for dealing with this?
Quick little trick I do (this works if you can feel the annoyance manifest in your body, which is what happens for me):
I breathe in slowly, and imagine the annoyance I feel as a red light, being sucked up into my forehead or into my throat.
I breathe out, imagining the red feeling either shooting out of my forehead or being expelled from my throat.
If the annoyance starts to come back, I’ll repeat this two or three times.
One, I have these things too. I have not yet found a clear trigger for these and no clear category to which these annoyances belong. Two, people on Reddit in askreddit commonly describe thinking back to some event and cringing. So it does seem to be not that uncommon.
Meditation?
Try having a pleasant mental image you can quickly contemplate as soon as you start to think about the bad event. If the bad event involved someone else doing something bad to you, forgive them.
I have this problem and do intermittent fasting. I never before thought there might be a causal connection.
Note that this can backfire, since the pleasant image may become mentally linked to the bad event and thus develop negative associations. (I still think it’s worth trying, but choose a pleasant image you can stand to lose.)
Easier said than done, of course.
Interestingly, I also have this and similar problems and don’t eat very much.
Thanks for the suggestion, I will try it.
Nominated for the funniest case of ‘fake causality’ this week.
Edit: MattG makes a fine point. Retracted.
Hunger makes people cranky… I think this is an entirely plausible case of real causality.
I wasn’t sure what Benito meant by “fake causality,” but I would have to agree that there is a relationship between diet and irritability. Probably that’s a part of the reason why diets fail . . . it’s mildly uncomfortable to eat less than ad libitum amounts of food. Do it day after day and there’s a pretty good chance of slipping up. When you are in an uncomfortable state it seems you are more sensitive to additional negative stimulus.
Perhaps there is a scientific basis for the stereotype of the jolly fat man.
At the same time, it seems likely that there is a relationship between irritability and annoyance flashbacks. The latter would seem to be just a special case of general irritability.
I was getting this about a couple of specific events, so I tried journaling about it using the Pennebaker method (http://en.wikipedia.org/wiki/Writing_therapy): basically, 20 minutes of non-stop writing, focusing on both thoughts and feelings about what happened, for three or four consecutive days—I find three is enough. If it’s working, the narrative will change. There was a noticeable reduction in the flashbacks and also a lift in mood. Probably doesn’t work for everyone, though.
I have the experience. No particular solution.
I used to get these split-second emotional flashbacks all the time, primarily when stressed. They were usually about embarrassing or frustrating moments, and would very frequently cause some kind of verbal twitch—usually these compulsive phrases like “I want to go home” or “I hate this place” or “I hate myself”. Very embarrassing in the rare instances it occurred around other people.
Meditation seems to have brought down their frequency a lot. Or at least, of the many things that have changed, the frequency with which I meditate seems to correlate most closely.
Yeah. I push them out of my mind or take a break to do something I enjoy.
This describes my feelings about job-hunting pretty perfectly. The only way I’ve found to deal with this is to push the thoughts out of my mind, which isn’t the most healthy solution.
Well, a popular solution in some circles is to declare the reminders “triggers” and insist that people avoid mentioning them in your presence. ;)
I’ve begun researching cryonics to see if I can afford it/want to sign up. Since I know plenty here are already signed up, I was hoping someone could link me to a succinct breakdown of the costs involved. I’ve already looked over Alcor’s webpage and the Cryonics Institute, but I’d like to hear from a neutral party. Membership dues and fees, average insurance costs (average since this would change from person to person), even peripheral things like lawyer fees (I assume you’ll need some legal paperwork done for putting your body on ice). All the main steps necessary to signing up and staying safe.
Basically, I would very much appreciate any help in understanding the basic costs and payoffs so I can budget accordingly.
Don’t forget Oregon Cryonics.
This is how I want to reply to half the comments on this site, including many of my own: https://medium.com/the-nib/my-blade-and-shield-80df0734f77c
How do you deal with the risk of people using high-power laser pointers on you? Where I am many “kids” are “playing” with strong laser pointers they ordered over the Internet. The strong ones can easily cause permanent damage or permanently blind you. If this becomes more prevalent, what can we do to protect ourselves?
This is an interesting example because it’s not actually as susceptible to what Christian proposes as many other forms of assault are, since laser pointers are silent and can have a very long range. And if you’re blinded, you’re not exactly in a position to see who did it, the way you might be with kids throwing rocks. Sunglasses will probably keep you safe from most publicly available lasers, but they’re not exactly convenient if you don’t like wearing them already.
I think the authorities should focus on blocking the ability to buy them, since tracking down perpetrators may indeed be difficult. Sunglasses: I seriously doubt they would help.
What does this have to do with Xixidu? he’s not the authorities.
Also: dangerous lasers are sold packaged with protective glasses. If you’re worried about something like this yeah you might need better than standard sunglasses but normal ones should be sufficient to keep most lasers from blinding you before you can react, and most kids aren’t going to be spending 300 dollars to fuck with random strangers.
I was referring to what Christian proposed, namely allowing the authorities to deal with it. I realize it is not very helpful advice for individuals (unless the said individual intends to engage in raising public awareness or influencing policy in other ways).
Interesting. Citation?
Using Dark Arts for a good cause: Let’s invent an urban legend about a psychopathic killer who murdered five kids who pointed laser pointers at him. Then spread the legend to make sure most of the kids in your environment (and their parents) know it.
Yeah, technically there always is a risk that this story could inspire a real mentally unstable person, but… torture versus laser pointers… I say let’s do it.
Good one, although I was more thinking along the lines of glasses.
Successfully spreading the legend would keep people from playing with laser pointers, but would also lower the status of rationalists who aren’t in on the plan, since they would object that there’s no evidence this happened, but by hypothesis nobody would believe them. Furthermore, if you spread such rumors, you have little grounds to object the next time someone spreads a false rumor about a kid who recovered from illness using homeopathy (and you probably primed the general populace to believe such rumors anyway since you’ve trained them to believe things without evidence).
That’s a general problem with spreading lies for a cause, of course.
I doubt it. Urban legends focusing on socially or at least parentally discouraged behavior are pretty common; witness the popular and long-lived one about the killer targeting teenagers who have sex with each other in their cars. They don’t seem to deter many people, though.
Remember, by hypothesis we have kids with the resources to get dangerously powerful lasers and the will to use them. These aren’t five-year-olds that can be cowed into good behavior by spinning a tale about the alligators that’ll eat you if you go outside after dark; indeed, I didn’t find that entirely convincing when I was five. You might even get people trying to match the legend’s conditions just to see what’ll happen; show of hands, who here tried to invoke Bloody Mary as a child by standing in front of a dark mirror and chanting her name?
Do you know a fable about a boy and a wolf?
...stand back and look at what you’ve written. I don’t know whether to laugh or cringe. What connection could this… “Rationalist”-fanfic-thinking possibly have to the real fucking world?! This is not how urban legends work, how teenagers work, how speading disinformation works… not to mention the ethics of it (which would not come into play in practice, as you’d just get called out on your bullshit).
This sort of utter fucking idiocy comes from a long-time and highly-upvoted LW user! No wonder LW is already seen as a fucking joke in some circles, and not for the transhumanist/singularity stuff either.
And/or, it was a joke.
Was this a joke?
It’s a job for the police. If someone is intentionally harming others in a way that can cause lasting damage, then they need to be dragged in front of a court.
In the US you have a second amendment but in Germany we wouldn’t have any issue with simply outlawing or restricting strong laser pointers in a way that prevents kids from possessing them if there’s political will to do so.
The Second Amendment has generally not been held to prohibit banning or restricting dangerous objects that are not weapons, nor weapons without plausible self-defense functionality, nor narrow classes of weapon that are popular for reasons other than self-defense. It definitely wouldn’t prohibit slapping age limits on the things. (California, where I live, has a ban dating from the Eighties on many martial arts/”ninja” weapons, probably because they were thought at the time to be inordinately popular among young people. Take this how you will; I feel it’s kind of a joke, personally.)
That said, I feel this would be adequately covered by existing law without throwing bans around—aiming a laser pointer powerful enough to blind at someone would, at minimum, qualify as assault.
(IANAL, though.)
Oh, and XiXiDu is German too.
In Belgium, laser pointers are already illegal. This hasn’t stopped anyone from possessing them, and in fact the only time I actually owned them was when I was a kid.
Just to point out that while it probably should be a job for the police, that’s not going to be a very viable solution in the short term.
Given the many different risks to which we are exposed, how many people suffer permanent damage per year due to laser pointer injuries? Is there really a case that we have to do much more?
To me this simply doesn’t seem like the kind of risk that our existing political structures are ill equipped to handle.
Recently, I started reading The Sound and the Fury by William Faulkner. I am ~50 pages in and I don’t understand any of it. The stream of consciousness narrative is infuriatingly hard to read, and the storyline jumps across decades without any warning. What are some techniques I can use to improve my comprehension of the book?
What are you trying to optimize for? Are you sure the experience you’re having now isn’t the whole point of the thing?
Are you saying the whole point of the book is to confuse readers?
I was thinking that might be part of the experience the book was trying to induce, yes. Something along the lines of this.
The entire book is actually just the greatest troll in the history of literature!
The beginning section, Benjy, is a bunch of different narrative timelines spliced together and switched out of order. Wait until you get to the Quentin section. It gets even worse.
I’m not sure, but my recollection is that the other parts aren’t as hard as Benjy’s bit. It’s been a few years, though.
Read reviews of the book rather than the book. If you are concerned the reviews will not be as accurate as the book, or that in some sense there is something to the book you won’t get from the reviews, I suggest:
1) The reviews will tell you why the book is famous, which is probably why you set about reading the book in the first place. They will tell you why it is famous far more efficiently than you could ever expect to learn from reading the book itself. If you determine that the reasons the book is famous are sufficient for you to want to read it, it will be easier to tolerate your own anger as you read.
2) On the off chance that the reviews of the book somehow miss the true literary point of the book, there is a vanishingly small probability that you will repair that deficit in your own reading.
3) On the chance that the book is only thought to be good but is not actually good (whatever that means), you will at least know why it was thought to be good, which is ultimately what brought you to the book in the first place.
Just as with anything else modern, modern literature, at least some of it, is written for a small audience made up of the kind of people who like modern literature and read a lot of it. It’s like reading a physics paper without being a physics graduate student, or looking at abstract modern art without being an insider on that particular thing.
In that vein: why look at a sunset, when you can have someone describe how they look? Why go on a rollercoaster when you can have someone describe the sensation to you? Hearing a secondhand summary of an aesthetic experience != that experience.
If a review of The Sound And The Fury were as good to read as the book itself, then why would Faulkner bother writing the former instead of the latter? (Then again, I suppose Borges did famously take advantage of that shortcut.)
If you are reading the book to read it and enjoy it, yay! The OP said he was reading the book and hating it, and wondering how to understand the book.
When someone asks about solar hydrodynamics do you object to the recommendation of a textbook? Why don’t they just get a telescope and smoky lenses and take the data and derive the theory for themselves?
There is more than one way to understand something. If one is not working, try another.
I guess we have different interpretations of charlemango’s motivation here. I assumed they (“they” because I don’t know his/her gender) were seeking to get some aesthetic enjoyment from the book but were struggling to do so. On the other hand, you state that they are probably reading it to determine why it’s famous. This seems strange to me: I don’t think someone would try reading a book for only that reason. I agree that if that were indeed someone’s motivation for reading a book, they’d be as well reading the reviews.
Edited to add: there’s some insight in your claim that modernist literature is, in some sense, aimed at an audience of specialists.
The OP may have assumed that the book being famous would therefore also be unusually enjoyable. This might be a bad assumption, or it might be a good assumption, but the book is only unusually enjoyable for people who are properly prepared to read the book with particular bits of historical and/or literary knowledge in place.
That the OP persists in trying to wrest enjoyment from a book which is clearly not giving them any suggests that the OP would be best served by understanding more about why she thought to pick up this book in the first place. Whether the extra knowledge allows her to enjoy the book, or makes it clearer to her why she will not, my experience with reading books or seeing movies 50 or 100 years after they were produced suggests that learning the context will be revealing and valuable in many ways.
So even if the goal is enjoying the book, their best shot is to learn the history, learn why the people who rate this book so highly do so.
Agreed. My only point of disagreement is that this is a sufficient substitute for reading the thing itself, as opposed to a supplement to it. (In my own reply to the OP I suggested looking at a study guide.)
If you can find a decent study guide (online, or, if there’s a physical edition, then secondhand copies of it will doubtless be cheaply available on Amazon from students who are done with them), then reading that along with the book isn’t cheating. Reading notes for something which is fiction and therefore ostensibly ‘leisure’ reading may seem a bit absurd, but I think it can perhaps be justified. Aside from anything else, it can supply useful context not otherwise easily available to those not living in early twentieth century America and/or part of the high modern literati.
Whether or not you want to invest that kind of effort over and above what you’re already doing is your call, though.
It is a hard book. I read it when I was a rather high-minded teenager, surely understanding very little of it, but it’s actually a little hard for me to conceive of myself reading something so difficult now.
Build up to it by reading other books by the same author or in the same style. As I Lay Dying is supposed to be easier. Woolf is also stream of consciousness. Henry James is modern but less stream of consciousness. Or you could try his brother, William.
Confirm that As I Lay Dying is indeed easier.
[LINK] Givewell discusses the progress of their thinking about the merits of funding efforts to ameliorate global existential risks.
I’m going to be participating in an interview of a candidate for a Software Engineer position. I’m supposed to ask him some technical questions.
Perhaps someone here has some good ideas for technical, programming questions/problems? (I could google for some, but they would be the ones that everyone has seen before).
Thanks!
Short answer: this popped up on r/programming the other day. Lots of interesting questions there, and they don’t come with answers. This will force you to solve them yourself, without spoilers, which is an incredibly valuable exercise which I strongly recommend for any questions you ask.
Long answer: you’re going to have to unpack your intentions a little. You only have an hour (or less!), and you want to provide the maximum possible resolving power, so to do the best possible job you must know what your company’s decision criteria are for this employee, and also what kinds of evidence your fellow interviewers are going to provide. “Ask technical questions” is too broad a mandate to be really ideal, so here are some questions which might help:
Do you need to do a basic competence check? This is a concrete coding question which a minimally capable developer can solve on a whiteboard, with fully correct syntax, in five minutes or less. Fizzbuzz is the canonical example; I’ve also seen “write a function to determine whether an input string is a palindrome” used to good effect. The point of this question is to efficiently divide your candidate pool into “people who maybe ask a few clarifying questions and then write out a correct, compilable answer as quickly as they can move the whiteboard marker” and “people who fumble around for half an hour and eventually come to something that’s kinda right”. Senior candidates will sometimes balk at this, especially if they’re older than you are; I’ve found it works well to present the problem as a warmup, before you get to the more interesting stuff.
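For comparison, here is a minimal sketch (mine, not from the linked list of questions) of the palindrome warmup, written in the explicit two-pointer style a candidate would plausibly produce at a whiteboard:

```python
def is_palindrome(s: str) -> bool:
    """Return True if s reads the same forwards and backwards."""
    i, j = 0, len(s) - 1
    while i < j:
        if s[i] != s[j]:
            return False  # mismatch found, not a palindrome
        i += 1
        j -= 1
    return True
```

A strong candidate writes something like this in a couple of minutes; the interesting signal is often the clarifying questions (case sensitivity? spaces? empty string?) asked before writing it.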
Do they have to already know the languages/tools you use? If so, then you have to drill down on that. Write down a bunch of things which you do in your env every day, pick a random subset, and ask your candidate how to do them. Java people need to know the difference between overloading and overriding, Javascript people had better be able to explain callbacks, and so forth.
How important is it that the candidate be able to solve new problems from scratch? To evaluate this, you want a problem which is novel and not easily solved with a standard algorithm. From the link above, “Add two strings. For example, "423" + "99" = "522"” is a good example of this sort of problem, I think (or else I just don’t know the standard algorithm). For contrast, question #46 is not good for this: some candidates will have read about reader/writer locks and will immediately start reciting various algorithms and tradeoffs, while others won’t have and are stuck trying to solve what was once an open research problem in the final half hour of your interview.
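One possible solution to the string-addition problem, assuming it means schoolbook digit-by-digit addition rather than just parsing to integers:

```python
def add_strings(a: str, b: str) -> str:
    """Add two non-negative decimal integers given as strings,
    digit by digit from the right, carrying as in schoolbook addition."""
    result = []
    carry = 0
    i, j = len(a) - 1, len(b) - 1
    while i >= 0 or j >= 0 or carry:
        da = int(a[i]) if i >= 0 else 0
        db = int(b[j]) if j >= 0 else 0
        carry, digit = divmod(da + db + carry, 10)
        result.append(str(digit))
        i -= 1
        j -= 1
    return "".join(reversed(result))
```

What you learn from watching a candidate do this: whether they handle unequal lengths, the final carry, and edge cases like "0" + "0" without prompting.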
What algorithms/data structures must they already have mastered? Everybody needs arrays and hash tables, most people need sets and graphs. Good is to ask people to explain them, better is to ask a question that’s easily solved using them.
Do they need to understand the hardware? Probably not, but maybe you’re developing for a console, or doing embedded work, or something.
That got really long, and I don’t have time to make it shorter. Sorry. tl;dr be mindful: know what your criteria are before you ask the questions, not afterward when you’re trying to judge the answers.
If your applicant can solve googlable questions, s/he is probably way above the average. A version of Fizzbuzz is likely to trip up a lot of weak applicants.
What I used to do when interviewing software engineers is letting them write a simple (but complete) program (not on paper but in actual IDE, compile, run, debug etc.). I used a command line game I invented where the player navigates a simple “labyrinth” of squares, but feel free to invent your own. Typically, the candidates took 1.5-2.5 hours to complete it.
This gives you a way better idea than programming puzzles: you see how they write real live code, how well they understand the requirements, how they test their code, how they use the debugger etc.
You might be surprised. I’m always astonished at how many people I interview don’t have a clue how to solve interview questions I Googled up or remembered from my own interview process, even the older-than-dirt ones like “how do you detect a cycle in a singly linked list?”
Besides, effective use of Google and literature is also a useful engineering skill. So is willingness to front-load work in preparation for an expected challenge.
“How do you detect a cycle in a singly linked list?” I found that funny because when I interviewed for my current job, I was asked that question, and I had seen the answer while googling interview questions the night before!
I’ve seen the topic of flow discussed in a wide range of circles from the popular media to very specialized forums. It seems like people are in general agreement that a flow state would be ideal when working, and is generally easy to induce when doing something like coding since it meets most of the requirements for a flow inducing activity.
I’m curious if anyone has made substantial effort to reach a ‘flow’ state in tasks outside of coding, like reading or doing math etc etc., and what they learned. Are there easy tricks? Is it possible? Is flow just a buzzword that doesn’t really mean anything?
I find reading is just about the easiest activity to get into that state with. I routinely get so absorbed in a book that I forget to move. And I think that’s the experience of most readers. It’s a little harder with programming actually, since there are all these pauses while I wait for things to compile or run, and all these times when I have to switch to a web browser to look something up. With reading, you can just keep turning pages.
Awhile ago on my blog I broke the process down into three steps that seem to work for me:
Empty your head: write down distracting thoughts, make important decisions, etc.
Focus your thoughts: minimize distractions, relaxation techniques, etc.
Engage your action mind: use triggers, exercise, or a shock to your system.
Seems to work well for me but YMMV.
On an unrelated note, it seems flow is actually a great state for peak performance, but it turns out to be a poor ideal for learning, because it’s antithetical to interleaved practice.
Would “because it’s insufficiently challenging” be at least as good an explanation?
Only if I had a control where I tried to make things harder without using interleaved practice. I haven’t done that, so I have less reason to suspect that simply making learning harder would make you learn better.
All right. The thing is, I don’t see how “flow is antithetical to interleaved practice” leads to “flow is a poor ideal for learning”, so for me, the sentence “flow is a poor ideal for learning because flow is antithetical to interleaved practice” doesn’t make sense.
Actually, I also don’t see how flow is antithetical to interleaved practice. The article you linked to says that the “Mixers” (who used interleaved practice) were more successful than the “Blockers”, but it doesn’t seem to give much of a reason to think that the Blockers were in a state of flow and the Mixers were not.
I’m not sure if it’s been studied specifically, so all I can say is that interleaved practice tends to frustrate me, whereas block practice tends to get me into a rhythm which leads to flow. I’m unsure if this generalizes to others, is placebo, or is confounded by other factors.
Source?
I don’t find this to be true for me. Assuming that I’m in more or less flow state when I’m studying, that doesn’t stop me from switching to a different subject after a while. As long as you plan ahead how you want to interleave your learning, I can’t see how flow would be an issue.
Source is here: http://j2jenkins.com/2013/04/29/interleaved-practice-a-secret-enhanced-learning-technique/
It follows from the study he links, which says that interleaving causes students to do worse during the practice session, but better during the actual test. This seems to imply one of his conclusions, which is that you should avoid flow.
Also, it seems you may misunderstand interleaved practice (or maybe I do?). From my understanding, you should be switching skills every practice question. Your use of the phrase “after a while” seems to suggest that you’re doing block practice with smaller blocks, but not getting the full benefits of maximum interleaving.
I just looked over the discussion of interleaving in chapter 3 of Make It Stick: The Science of Successful Learning (really excellent book by the way—HT to zedzed for the recommendation). The authors describe interleaved practice as switching to a different topic before each practice is complete (pg. 65). That doesn’t mean switching every practice question, as you say, but rather practicing one skill just a few times, then switching to a different skill (pg. 50).
Also, you can achieve flow (from my understanding) while still switching between subjects / skills / techniques within an overall subject. Let’s say you’re practicing baseball. You might practice batting, then catching, then throwing. I don’t think you lose flow when switching from one to the other, as long as you keep practicing baseball.
My comment earlier assumed that interleaving also requires switching overall subjects every once in a while. But now that I look over the chapter, I don’t see that mentioned at all. So maybe you don’t even need that.
It’s possible, perhaps just not for me or the person who wrote the article.
Can you link me to the blog post?
Here’s the post, blog has since been deleted so had to find it on the wayback machine: https://web.archive.org/web/20110227181131/http://lifeofmatt.net/blog/2009/03/being-in-the-moment-without-all-the-bullshit/
You’ll notice that steps 2 and 3 have been reversed above. This is because I found the process to be easier if I first focus, then take action, rather than vice versa.
Everything I’ve read about it says that flow results from working at a challenge that’s not too hard and not too easy, and that you enjoy (not so sure about that last one though). Seems to work for me.
An important part of “flow” is temporarily forgetting about the rest of the world. Not sure if you can reach this state artificially, but you certainly can be artificially removed from it.
For me, being alone and in silence seems to be an important factor.
An interesting paper: basically he draws upon several occasions when academics were wrong and argues that this is why we should promote diversity of ideas. The examples might be useful to people who like this kind of stuff. http://damascusssteel.tumblr.com/post/90055407780/on-the-benefits-of-promoting-diversity-of-ideas
This is a nifty little diagram I made a while ago before I knew about the concepts of system one and system two. It was an attempt to reconcile what I knew about behaviorism with what I knew about cognitive psychology, and detail with how that played out in my own life and the self-help material I was using at the time.
Of particular interest to less-wrongers is the center, which details how to switch from system 1 to system 2 and vice versa. Obviously a simplified model but it’s a skill I’ve found incredibly useful.
Can you explain what the diagram means? I haven’t been able to come up with a good guess as to what the arrows mean, or how the principles govern what.
It’s been a bit since I’ve researched this stuff, so I’ll do my best:
The right hand side of the diagram represents the social cognitive theory of Albert Bandura, which applies mostly to our Long term planning brain (What I now think of as system 2)
Essentially, it’s saying that we have thoughts, which can affect our behaviors and our perception of our environment. The thoughts can also be affected by our behaviors and our environment. Finally, our thoughts can affect how we use our behaviors to change our environment, and vice versa.
What this means is that we can use this mode of our brain to take the long-term view, using introspection to choose our behaviors and shape our environments such that we can ultimately achieve our goals. This is great for planning and stopping destructive behaviors.
It also suggests that the way to change our actions when in this mode is to change our environments and change our thought patterns.
The left side represents behaviorism, encompassing the instinctual processes of operant and classical conditioning (what I now know as system 1).
The arrows show how everything is caused by a stimulus, which either causes us to find that an action leads up to a result (classical conditioning), or causes us to associate an action with a positive or negative result (operant conditioning).
What this means is that we can use this mode of our brain when we need immediate instincts on something because we’re under a time crunch, or we need to get ourselves to take immediate action. It’s great for time limited activities (social interactions, sports), as well as when we want to take action immediately (beat procrastination).
It also means that if we want to change our behavior when in this mode, we should work to change either our immediate stimuli, or our immediate rewards/punishments.
The top and bottom represent the mutual laws that govern both modes. The Profit of Action Principle says that they both want to get the most reward they can; the Principle of Least Effort says that they both want to do as little as possible to get it.
What this means is that no matter which mode we're in, our natural state is to be productive/lazy (which are essentially synonyms with different connotations).
The trick is that both modes perceive different things to be effortful, and different things to be rewarding (and also react differently when you throw time in the mix).
I didn’t describe the differences on the diagram, but the key is that knowing the differences, you can work to switch between the two modes depending on which mode views your desired action as most rewarding and least effortful. Alternatively, if you don’t want to do an action, choose the system that makes the action least rewarding and most effortful.
The center of the diagram shows how to switch between the two modes. The short-term brain is mostly concerned with the body and the emotions, so the easiest way to switch to it is to provoke strong emotions or use exercise.
The long-term brain is mostly concerned with thinking and judging (as bad or good), so the easiest way to switch to it is to engage in logical thinking or focus on our values.
Hope that helps. I just reread that and realized it’s pretty difficult to follow, so let me know if you need any clarification.
What do the physicists on here think of Sean Carroll’s attempt at deriving the Born rule here?
Is it correct, interesting but flawed, wrong, or what?
I coincidentally read that paper today (confession: I am not a physicist yet, still a student), and I am really suspicious of his use of unitary transformations. A transformation is unitary if and only if it preserves the l^2-norm, which is precisely what the Born rule describes (i.e. that the l^2-norm is the correct norm on wavefunctions). I asked myself which step would break down if the actual probability were the fourth power of the amplitude rather than the square, and I haven't found it yet (provided we also redefine 'unitary'). But (hopefully) I'm just misunderstanding the problem?
They address this in footnote 4: they’re just deriving that the amplitudes squared should be interpreted as probabilities using quantum mechanics as defined, which includes unitary evolution and all that.
You could try the same thing with a QM variant with different mathematical structure, although you might be interested to know that linear transformations that preserve l^p norm for p other than 2 are boring (generalized permutation matrices). So you wouldn’t be able to evolve your orthogonal environmental states into the right combinations of identical environments + coin flips. There also are other reasons why p = 2 is special. Scott Aaronson has written about this (and also linearity and the use of complex numbers) in the context of whether quantum mechanics is an island in theoryspace.
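A quick numerical illustration of why p = 2 is special (a sketch of my own, not part of the paper's argument): a generic unitary preserves the l^2 norm but not the l^4 norm, so "amplitude to the fourth power" weights would fail to stay normalized under ordinary unitary evolution.

```python
import numpy as np

# A simple 2x2 unitary (the normalized Hadamard / a 45-degree rotation).
U = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)

psi = np.array([1.0, 0.0])
phi = U @ psi  # = [1/sqrt(2), 1/sqrt(2)]

def p_norm(v, p):
    return np.sum(np.abs(v) ** p) ** (1.0 / p)

# Unitaries preserve the l^2 norm...
print(round(p_norm(psi, 2), 6), round(p_norm(phi, 2), 6))  # 1.0 1.0
# ...but not the l^4 norm, so amplitude^4 "probabilities" would not
# remain normalized under generic unitary evolution.
print(round(p_norm(psi, 4), 6), round(p_norm(phi, 4), 6))  # 1.0 0.840896
```

The only norm-preserving maps for p ≠ 2 are the generalized permutation matrices mentioned above, which is far too restrictive to support interesting dynamics.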
Going a bit deeper: it seems like all of the work is done by factoring out the environment. That is, they identify unitary transformations of the environment as producing epistemically equivalent states, but why shouldn’t non-unitary transformations also be epistemically equivalent, whether or not unitary evolution is what happens in quantum mechanics? They have to leave the environment states orthogonal since that’s assumed by decoherence, but why not (say) just multiply one of those environment states by an arbitrary number and derive any probability you want (i.e. why shouldn’t the observer be indifferent to the relative measure of environment branches, since the environment is supposed to be independent, and then why not absorb any coefficients you like into the environment part)?
The answer is that you can’t think of non-unitary transformations as acting independently on one part of a system, and that this is also part of the way quantum mechanics is specified. Given the mathematics of quantum mechanics, it only makes sense to talk about two parts of a wavefunction as independent under unitary transformations of the individual parts. See Appendix B of their companion paper, and think about what happens if you replace U_B with something non-unitary in equation B.4.
I think it is correct, but works only in a limited setup, and the authors claim
is rather optimistic. I'm going to write an explanatory post before the weekend hits.
Wolfram Programming Cloud is now up and running, for anybody interested. Speed seems to be particularly slow right now, presumably because it just went live and there’s tons of traffic.
Here’s a quick intro for programmers. It’s also supposed to be pretty easy to pick up for people with little to no programming background.
EDIT: This appears to be a step-by-step comprehensive guide to the language. For some reason I couldn’t find any direct link to this page.
Given the Snowden leaks, is there some good Voice-over-IP software that you would recommend that encrypts calls decently?
Tox may provide the service you’re looking for. http://tox.im/
Silly question for people who work at MIRI: If you had the choice between receiving one flash drive from the 5-year-future MIRI employees, and acquiring one year’s supply of NZT-48, which would you pick?
I don’t work at MIRI but: in the movie the guy cranks out a novel in like one night. That’s years of work compressed into a few hours. He then proceeds to understand enough about markets to become extremely wealthy (thus negating the time travel informed betting angle), itself a many-year task. Most importantly: NZT-48 lets him figure out how to make MORE NZT-48 with fewer side effects, thus ensuring an indefinite supply. The NZT-48 is definitely the correct choice.
Approaching the question from the other direction: How much has MIRI really accomplished in the last 5 years? I think it’s safe to say that a large part of what they’ve achieved is in terms of popularization and awareness raising rather than actual research/information generated. If they sent back a flash-drive to five years ago, that wouldn’t jumpstart them anywhere near 5 years in terms of effort invested.
Presumably sending one flash drive back can be accomplished every 5 years in a classic time loop, letting MIRI figure out how to send MORE and more advanced data back, eventually designing the FAI in 5 years total, without having to worry about the competition.
So the penultimate timelines would look something like
MIRI wins several lotteries by buying the correct lottery tickets.
MIRI uses several hundred million dollars to hire Google’s AI team.
Said team spends 4.9 years checking the work of itself/alternate teams hired in past runs.
This team decides we finally have proven-safe FAI.
And the “final” timeline goes like
Receive strange flash-drive, plug into computer, FOOM.
Almost. The final send-back is unnecessary.
Also, nothing as conspicuous as winning multiple lotteries.
But assuming that it works right, you’re gaining a five year head start—which is very significant. For one, you could save all the people who would die in those five years, and also, you would probably be able to colonize more galaxies in the future, etc…
True, but at this point the nascent FAI will do the correct utility calculation and decide what to do.
Ah yes, that is a good point—assuming the FAI has access to the same ability (the window might be very narrow, for example).
It seems to be a more stable end-loop than the penultimate one. The second to last timeline will involve the FAI fooming and then deciding whatever the perfect way to ensure the loop outcome is.
You’re probably right about the lotteries thing, there’s a huge amount of possible money-making bets given future knowledge. Though I do think betting is the fastest way to make a huge pile of money.
Interesting thought: Maybe one should regularly buy lottery tickets so it’s not suspicious if you win through shenanigans.
Is the chance of such shenanigans high enough that the expected value of a lottery ticket is worth it? That seems like just a geeky and more roundabout way of committing the same error as the people who normally buy lottery tickets.
Well, you have a lot more input on whether or not you're going to be part of shenanigans. So if you have concrete plans to build a time machine, you can start buying lottery tickets in order to cover for your future plans. The same goes for if you're going to manipulate or calculate the outcome of a lottery in a more mundane way. Someone who buys one lottery ticket in their life and wins $290,000,000 is somewhat suspicious. Someone who's been buying one every week for a year is not.
So I was reading so8res’ story and I felt the same way. Like I had memorized a bunch of signals without gaining deeper understanding during my education. Question: how do you start over and “get” stuff all over again?
You could get the textbooks you used in school, and read them again. This time, noticing when you are confused.
Or use alternative textbooks, if available. That could reduce the feeling of “I have already seen this”.
Or take a Coursera or Udacity lesson on the subject. They are usually full of exercises = quick feedback.
It's also good to have a positive attitude about learning outside of school. I mean, it's sad that the school didn't teach you better, but in any case, science moves forward and school time is limited, so everyone who really wants to understand their subject needs to keep studying after they leave school.
I think it’s very important to be conscious about your questions. If you don’t know something, don’t go for the first answer but write the question down and revisit it regularly.
Have something you want to actually do/accomplish with the knowledge, and start doing it as soon as you can. Worked for me with programming (unlike physics where advanced topics largely went in one ear and out the other).
“Every problem looks like a nail to a person with only a hammer in his toolbox.”
I see this with people not as far in their education as me. Though I noticed a different, but similar phenomenon: Learning of a new method or tool tempts me to use this new toy every time I am faced with a problem. This is a far more benign phenomenon but still, it is there.
Like recently I learned about factor analysis. Now I am tempted to use it to quantify whether the plethora of human attributes, including physical and mental health, comes down to just a handful of factors that can be influenced independently of each other. Of course I know of the OCEAN personality factors and g, all of which are the result of factor analysis, which only further increases my curiosity.
If all you have is a hammer, then you should be hammering on quite a few things that aren't nails. Heck, I've fixed a propane regulator by whacking it with a hammer. It's just that you should also... you know... buy some more tools.
If you’re aware of the non-nailness of the thing in question, I don’t see any problem with trying out your new tool on it—how else are you to learn its limits!? Using standard tools in a domain where it’s not normally applied is often a source of fresh insight.
This is a really cool point. There’s a relevant quote about how mathematicians use this technique:
It’s from here
Life is like an adventure game. If all you have is a hammer, use your hammer on everything. Once you get a screwdriver, use it on everything, since you already tried the hammer.
Factor Analysis is a very flawed technique for many circumstances; in particular, “independent” and “not linearly correlated” are wildly different.
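A standard toy example of that gap, sketched in Python: y = x^2 is a deterministic function of x, yet their linear (Pearson) correlation is essentially zero.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_normal(100_000)
y = x ** 2  # y is completely determined by x...

# ...yet the Pearson correlation is ~0, since E[x^3] = 0 by symmetry.
r = np.corrcoef(x, y)[0, 1]
print(abs(r) < 0.05)  # True: uncorrelated, but certainly not independent
```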
So yes, try to be acutely aware that not everything is a nail. Be aware of your hammer's limitations, and what its effectiveness is based on.
That said, I’ve been guilty of this too. And certainly, we rely on people pushing their hammers in strange directions; some esoteric new hammering technique for metaphorical screws can open the floodgates of innovation. That said, don’t confuse the fact that some people have managed to hone their skills with one tool with that tool being a panacea.
Factor analysis and principal components analysis are related but subtly different techniques.
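One concrete difference, sketched with scikit-learn (the simulated data and parameter choices here are illustrative, not canonical): factor analysis models a separate noise variance ("uniqueness") for each observed variable, while PCA has no per-variable noise term and simply finds directions of maximal total variance.

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 6 observed variables driven by 2 latent factors, with a
# *different* noise level per variable (this is FA's model, not PCA's).
latent = rng.standard_normal((2000, 2))
loadings = rng.standard_normal((2, 6))
noise_sd = np.array([0.1, 0.5, 1.0, 0.1, 0.5, 1.0])
X = latent @ loadings + noise_sd * rng.standard_normal((2000, 6))

fa = FactorAnalysis(n_components=2).fit(X)
pca = PCA(n_components=2).fit(X)

# FA estimates a per-variable noise variance ("uniqueness"); its
# estimates should roughly track noise_sd ** 2. PCA has no such parameter.
print(np.round(fa.noise_variance_, 2))
print(np.round(pca.explained_variance_, 2))
```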
Out of personal curiosity I started reading texts about psychology. I recognise some of the phenomena described there as problems the community here deals with, like akrasia or procrastination. Psychologists, though, treat these things as emotional problems, as opposed to the technical problems this community treats them as. Has anyone had a similar observation? Are there psychologists among us who want to point us to helpful resources?
I think most people realize these problems can have emotional roots. Some are just inclined to try to treat their own emotional state mechanistically (increase exercise, decrease expectations → lower stress, become happier about doing work).
I just wonder: there seem to be so many roads to "recovery" (physical training, meditation, psychoanalysis, medication, CBT) that I'm inclined to ask whether they are all manifestations of the same basic principle or genuinely different approaches. If the latter, is any one of them the "correct" one? Or are they different approaches that "fix" different underlying causes? Scare quotes because a great system in the wrong environment is still a great system, just not in that environment.
This comes back to the famous Dodo Bird Verdict in psychology “everyone has won and all must have prizes.”
It seems like most approaches to therapy work better than a no-treatment control, but none works better than any of the others...
There have been some attempts, like the transtheoretical model, to try to figure out specific instances where certain therapies are needed, but as far as I know none of these approaches have come out as clearly empirically correct.
Unfortunately it’s a mix. Sometimes the underlying cause is different, sometimes the treatment chosen would work but for its poor application by the patient (which is something the person treating them should try to deal with, but may not always be able to).
What are some ways to effectively practice and apply Maths learning?
I've been doing a lot of learning and found that practicing on paper is generally easier than LaTeX or some cobbled-together syntax in electronic documents, but I'd like to know if I just really need to bite the bullet and do it this way (LaTeX or similar).
Once I have (what I believe) an understanding of the problem types, I will generally write code to do it for me as doing this makes it even clearer in my head. Problem is though, once I do the code, I generally don’t practice on paper anymore and I am not sure if this is going to be hindrance in understanding more complex topics.
My end goal is to be able to read and understand the maths in any AI focused research paper, and then be able to do some maths which isn’t just practice examples but I am not sure on how to get to that last step.
I have heard (I have no citation and it’s probably apocryphal, but I found the anecdote enlightening) that Enrico Fermi’s way of reading articles was to read the abstract, put the paper away, do the maths by himself and once he was done, compare his results with the article. That’s probably a bit hardcore, but you should be able to start from somewhere in the paper’s reasoning and do a few steps forward.
But where are you in your paper reading at the moment? Is there a particular problem that spurred this question?
That technique would be beyond me at this stage; I have done courses in Calculus, Linear Algebra and Logic and can finally understand most of the syntax and flow in research papers, though I still don't feel at all competent, since I don't think I could recreate the proofs they come up with.
I think that is my issue: I can read lots of maths and I can do the exercises, but I am not sure how to go about 'doing something real'. Ask me to write any software you can think of and I can do that, but I feel I am missing some fundamental point or piece of learning in maths. Or maybe I am overthinking this and just haven't had a concrete problem to play with.
Try taking a thing you know how to do, and figure out why it works.
Why does integration-by-parts work, say?
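As a sketch of the kind of answer that question is after: integration by parts is just the product rule for derivatives, integrated over an interval and rearranged.

```latex
\frac{d}{dx}(uv) = u'v + uv'
\;\Longrightarrow\;
uv = \int u'v \, dx + \int u v' \, dx
\;\Longrightarrow\;
\int u \, dv = uv - \int v \, du
```

Working through why a familiar rule holds, rather than just applying it, is exactly the sort of exercise that rebuilds understanding.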
If you ever want to communicate with others, LaTeX is the lingua franca of mathematics.
Good point, I should learn it anyway. But in terms of learning and solving problems, do you work them out using LaTeX or do you use pen and paper / whiteboard?
I write everything on paper, although I duplicate all the important parts to LaTeX. This means that I go through a notebook approximately every 3 weeks, but it’s definitely worth it. I only use LaTeX to communicate with others and store really important bits (although recently I’ve also just been scanning my paper). I believe this is pretty much standard among my peers.
I’ve done different things at different parts of my life. I used to work everything out on paper first, but that got too time intensive. For most of grad school I worked in LaTeX exclusively, but I gather from my peer group that being able to do this is kind of rare. I bought a Surface Pro 2 a couple months ago, so now I do both simultaneously, sketching out bits in windows notebook while writing exposition in LaTeX.
I don’t think there’s really a general best practice to be found here; I think you just need to try different things until you find a workflow that you can live with.
LaTeX is for typesetting. I know of nobody who "does the math" in LaTeX; they do the math on paper and write it up in LaTeX for presentation or publication (or if they need to ask a question on MathOverflow, or something like that).
That said, you should learn LaTeX if you ever want to do research in anything mathematical.
Hello! Nice to meet you!
My questions are how (what editor) and why?
LaTeX seems an awful way to do scratch work, which is most of math.
I started using LaTeX for my physics homework because I kept making algebraic mistakes (mostly sign errors) when I’d copy expressions between steps. Ended up saving me time on net.
I use vim now (with syntax highlighting plus some useful macros), but I used nano for a few years and it wasn’t too bad either. I compile in the command line and have a pdf open in another window.
I use Kile. Being able to commit, tag and branch in git (heck, just being able to erase a part in the middle and rewrite it without ending up with a chain of arrows across three different pieces of paper) makes things enough easier to be worth the (slight) writing slowdown, and most of the time I can express myself in LaTeX; after a while it just becomes the language you think in, \int becomes the symbol for integration and so on. Very occasionally I'll write something I know is incorrect notation but close enough that I'll know what I meant, and can go back and correct it later.
I totally do some math in latex :). It’s just easier to convert mentally sometimes than get paper out.
Last year I made a deliberate choice to produce all my maths assignments in LaTeX, the upshot of which is that I'm now pretty comfortable with it. I'm pretty damn sure you wouldn't want to use it as a substitute for pencil-and-paper working, though.
This is a bit of a dumb question, but I can’t seem to find a clear answer online:
Does self-similarity with respect to F mean that every part of a whole that is F, is F? Or does it mean that at least one part of a whole that is F, is F?
It’s neither of these. The first condition is not necessary for self-similarity, and the second is not sufficient.
Consider an archetypical example of a self-similar structure, the Sierpinski triangle. Looking at that picture, you can see that not every part of the triangle looks like the triangle. There are parts of the triangle that look like two triangles side by side, for instance. So it’s not necessary that every part of the whole be identical to the whole.
On the other hand, it is also not sufficient that at least one part of the whole be identical to the whole. First of all (and slightly pedantically), this would make any structure trivially self-similar, since according to the standard axioms of mereology every whole is a part of itself.
More substantively, even if you modified your definition to say “proper part” instead of just “part”, it still wouldn’t be sufficient. You can see the mathematical definition of self-similarity here. The definition is slightly opaque, but basically what it’s saying is that a bounded set is self-similar if it can be built up as a finite union of smaller-scale copies of itself. There are sets where some proper part resembles the whole, but which still cannot be built up in this way, so a proper part resembling the whole is not sufficient for self-similarity.
As an example, consider this picture. If the process it shows goes on to infinity, then there will be a proper part of the picture that will be a smaller scale version of the picture as a whole. However, the picture as a whole is not a finite union of such parts, so it is not self-similar, unlike the Sierpinski triangle.
So here’s a colloquial definition of self-similarity that captures the idea: A structure is self-similar if it can be exhaustively divided into a finite number of parts (greater than 1) such that each part exactly resembles the whole (except for scale). The Sierpinski triangle, for instance, can be exhaustively divided into three parts, all of which are smaller-scale versions of the triangle itself.
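To make that concrete, here is a sketch in Python of the three contraction maps whose images tile the Sierpinski triangle, using the "chaos game" to sample points from the attractor (the starting point and iteration count are arbitrary choices of mine):

```python
import numpy as np

# The three affine contractions whose fixed set is the Sierpinski triangle:
# each shrinks the plane by 1/2 toward one vertex of the triangle.
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
maps = [lambda p, v=v: (p + v) / 2 for v in vertices]

# "Chaos game": iterate randomly chosen maps; the orbit settles onto
# the attractor, which equals the union of the three maps' images of it.
rng = np.random.default_rng(0)
p = np.array([0.2, 0.2])
points = []
for _ in range(10_000):
    p = maps[rng.integers(3)](p)
    points.append(p)
points = np.array(points)

print(points.shape)  # (10000, 2)
```

Each of the three maps produces one of the half-scale copies in the "exhaustive division" described above.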
Hmm, thanks, that's very clear. Maybe you can help me. I'm writing a philosophy paper about time, and I'd like to come up with a name for a pair of conditions on the time of, say, a change. So suppose there is an occurrence or situation E, and a time-interval AB (excuse the miserable notation, I can't do anything about it):
A) there is no stretch of time CD which is a part of AB in which E does not occur.
B) there is no stretch of time CD which is a part of AB in which E occurs.
I’d especially like to come up with a way to characterize (B), and it seemed to me that ‘self-dissimilarity’ might be a good way of talking about it. But upon reading your description, I think it may just not be a close enough analogy to the geometrical case.
I think my third Natural Ergonomic Keyboard 4000 has died; that's three in a span of 3 years, all from little water spills. Is there a good alternative keyboard that is water-resistant?
“Good” and “water-resistant” pretty much don’t occur together in practice, sadly.
(So, I want a Natural-shaped keyboard … with clicky keys … that’s water-resistant.)
I'm looking into productivity and stuff like that. I see lukeprog and so8res have some material on that. I saw a couple of commenters recommend David Allen. Any recommendations on what to start with or books I should read? Thanks in advance.
Also, can anyone get the sequences by so8res and lukeprog in epub? I looked at the wiki where all the sequences were in epub and didn't find them there. It would be really helpful if I could get them there.
I am a fan of Cal Newport, whose advice is especially relevant in college.
Your Brain at Work is far and away the best productivity book I’ve ever read, at least for me. It’s also recommended by CFAR.
Cool. Any advice on lectures to look at?
What are you struggling with? What would you most like to improve about your productivity?
Stuff like focus and accomplishing tasks within a predetermined timeframe. Just being efficient in my work and more well-adjusted in society in general as well. The psychology of adjustment stuff lukeprog posted is really interesting to me.