Interesting comment by Gregory Cochran on torture not being useless as is often claimed.
Torture can be used to effectively extract information. I can give you lots of examples from WWII. People often say that it’s ineffective, but they’re lying or deluded. Mind you, you have to use it carefully, but that’s true of satellite photography. Note: my saying that it works does not mean that I approve of it.
… At the Battle of Midway, two American fliers, whose planes had been shot down near the Japanese carriers, were pulled out of the water and threatened with death unless they revealed the position of the American carriers. They did so, and were then promptly executed. Later, at Guadalcanal, the Japanese captured an American soldier who told them about a planned offensive – with that knowledge the Japanese withdrew from the area about to be attacked. I don’t know why he talked [the guy didn’t survive] – maybe a Japanese interrogator spent a long time building a bond of trust with that Marine. But probably not. For one thing, time was short. I see people saying that building such a bond is in the long run more effective, but of course in war, time is often short.
You could consider the various agents that the Germans inserted into England: the British captured almost every one of them, and gave them the choice of cooperation (which included active participation in British deception schemes) or execution. Most cooperated.
The Germans tortured members of the various underground groups in Europe – and some of them never broke. But some did. You may have heard of Jean Moulin not breaking under torture, even unto death: but the Gestapo caught him because Jean Multon did break. To avoid being tortured, Multon agreed to work for the Gestapo. Over the next few days he led his captors to more than 100 members of the Resistance in Marseilles. He then gave away more in Lyons. Some of those he betrayed themselves broke under torture by the Gestapo. Things snowballed, and the whole network was torn to pieces.
People often argue that people under torture will say anything that their interrogators want to hear, and are thus useless as sources of information. There is something to that, but to a large degree that depends on what goals the interrogators actually have. For example, in the Iraq war, American higher-ups often didn’t want information – they wanted their fantasies confirmed. They knew that anti-American guerrillas couldn’t be motivated by nationalism or Islam – they had to be paid Baathist agents. Or there had to be a connection between Saddam and Al-Qaeda. Whatever. Most told something close to the truth, but that wasn’t good enough, and so, torture. In much the same way, Stalin tortured until he got what he wanted – false confessions for show trials, rather than actual information about Trotskyist conspiracies (that didn’t even exist). Most people broke – I remember that a Chekist said, admiringly, that Lev Landau held out a long time – three broken ribs before giving in. The Japs at Midway wanted real info, not ammunition for their fantasies.
If an interrogator wants valid information, he can see if the stories of several different prisoners agree. He can see if their story checks with other sources of information. etc. It’s like any other kind of intelligence.
At least some of the arguments about the effectiveness of torture are obviously false, not even meant to make sense. For example, I have seen people argue that torture is pointless because the same information is always available by other means. Of course, since the products of various kinds of intelligence often overlap, you could use that argument to claim that any flavor of intelligence (cryptanalysis, sigint, satellite recon, etc.) is useless. But multiple leads build confidence. Sometimes you can get information via torture that is available in no other way. If you are smart, and if information is what you really want.
This seems an insightful and true statement. We seem to like “protecting” ought by making false claims about what is.
We seem to like “protecting” ought by making false claims about what is.
Possibly related to the halo or overjustification effects; arguments as soldiers seems especially applicable—admitting that torture may actually work is stabbing one’s other anti-torture arguments in the back.
I read somewhere that lying takes more cognitive effort than telling the truth. So it might follow that if someone is already under a lot of stress—being tortured—then they are more likely to tell the truth.
On the other hand, telling the truth can take more effort than just saying something. Very modest levels of stress or fatigue make it harder for me to remember where, when, and with whom something happened.
I agree that it is now a PC thing to say in US liberal circles that torture doesn’t work. The original context was different, however: torture is not necessarily more effective than other interrogation techniques, and is often worse and less reliable, so, given its high ethical cost to the interrogator, it should not be a first-line interrogation technique. This eventually morphed into the (mostly liberal) meme “torture is always bad, regardless of the situation”. This is not very surprising; lots of delicate issues end up in a silly or simplistic Schelling point, like no-spanking, zero-tolerance of drugs, no physical contact between students in school, age restrictions on sex, drinking, etc.
FM 34-52 Intelligence Interrogation, the United States Army field manual, explains that torture “is a poor technique that yields unreliable results, may damage subsequent collection efforts, and can induce the source to say what he thinks the interrogator wants to hear.”[4] Not only is torture ineffective at gathering reliable information, but it also increases the difficulty of gathering information from a source in the future.
We seem to like “protecting” ought by making false claims about what is.
This has interesting implications for consequentialism vs. deontology. Consequentialists, at least around here, like to accuse deontologists of jumping through elaborate hoops with their rules to get the consequences they want. However, it is just as common (probably more so) for consequentialists to jump through hoops with their utility function (and even their predictions) to be able to obey the deontological rules they secretly want.
This seems an insightful and true statement. We seem to like “protecting” ought by making false claims about what is.
Certainly true—I believe a lot of claims about the healthiness of vegetarianism fall into that category.
Another problem is taking something that’s true in some cases, or even frequently, and claiming that it’s universal. In the case of torture, it’s one thing to claim that torture rarely produces good information, and another to claim that it never does.
The point on torture being useful seems really obvious in hindsight. Before reading this I pretty much believed it was useless. I think that belief settled into my head in the mid-2000s, arriving straight from political debates. Apparently knowing history can be useful!
Overall his comment is interesting but I think the article has more important implications, someone should post it. So I did. (^_^)
I don’t see anything insightful about the statement. It’s rather trivial to point out that there were events where torture produced valuable information. Nobody denies that point.
It rather sounds like he doesn’t understand the position against which he’s arguing.
If an interrogator wants valid information, he can see if the stories of several different prisoners agree. He can see if their story checks with other sources of information. etc. It’s like any other kind of intelligence.
It’s not like any other kind of intelligence. This ignores the psychological effects of the torture on the person doing the torturing. Interrogators feel power over a prisoner and get information from them. That makes them pay too much attention to that information in contrast to other information.
This ignores the psychological effects of the torture on the person doing the torturing. Interrogators feel power over a prisoner and get information from them. That makes them pay too much attention to that information in contrast to other information.
And this is different from someone who, say, spends a lot of effort turning an agent, or designing a spy satellite, how?
Beating someone else up triggers primal instincts. Designing a spy satellite or using its information doesn’t.
There’s motivated reasoning involved in assessing the information that you get by doing immoral things as high-value.
Pretending that there are no relevant psychological effects from the torture on the person doing the torturing just indicates unfamiliarity with the arguments for the position that torture isn’t effective.
I would add that, as far as the description of the Battle of Midway in the comment goes, threatening people with execution isn’t something that would officially count as torture in the US. Prosecutors in Texas do it all the time to get people to agree to plea bargains.
It’s disgusting but not on the same level as putting electrodes on someone’s genitals. It also doesn’t have the same effects on the people doing the threatening as allowing them to inflict physical pain.
If you threaten someone with death unless he gives you information you also don’t have the same problem of false information that someone will give you to make the pain stop immediately.
As far as the other example in that battle goes, the author of the comment doesn’t even know whether torture was used and seems to think that there are no psychological tricks that you can play to get information in a short amount of time. Again an indication of not having read much about how interrogation works.
Here on LessWrong we have AI players who get gatekeepers to let the AI go in two hours of text-based communication. As far as I understand, Eliezer did that feat without having professional-grade training in interrogation. If you accept that’s possible in two hours, do you really think that a professional can’t get useful information from a prisoner in a few hours without using torture?
As far as the other example in that battle goes, the author of the comment doesn’t even know whether torture was used and seems to think that there are no psychological tricks that you can play to get information in a short amount of time.
From what I heard, most of said psychological tricks rely on the person you’re interrogating not knowing that you’re not willing to torture them.
Here on LessWrong we have AI players who get gatekeepers to let the AI go in two hours of text-based communication.
Not reliably. This worked on about half the people.
If you accept that’s possible in two hours, do you really think that a professional can’t get useful information from a prisoner in a few hours without using torture?
That depends on the prisoner. There are certainly many cases of prisoners who don’t talk. If the prisoners are, say, religious fanatics loyal to their cause, this is certainly very hard.
From what I heard, most of said psychological tricks rely on the person you’re interrogating not knowing that you’re not willing to torture them.
Being able to read body language very well is also a road to information. You can use Barnum statements to give the subject the impression that you have more knowledge than you really have, and then they aren’t doing anything wrong if they tell you what you already know.
That depends on the prisoner. There are certainly many cases of prisoners who don’t talk. If the prisoners are, say, religious fanatics loyal to their cause, this is certainly very hard.
In the case in the comment, the example was an American soldier, who probably doesn’t count as a religious fanatic. The person who wrote it suggested that the fast transfer of information is evidence of there being torture involved.
It was further evidence for my claim that the person who wrote the supposedly insightful comment didn’t research this topic well.
My case wasn’t that there’s certain evidence that torture doesn’t work, but that the person who wrote the comment isn’t familiar with the subject matter, and as a result the comment doesn’t count as insightful.
Not reliably. This worked on about half the people.
Similarly, basilisks would work as motivation to develop a certain kind of FAI but there’s a ban on discussing them here. Why? Isn’t it worth credibly threatening to torture people for 50 years to eventually save some large number of future people from dust specks (or worse) by more rapidly developing FAI?
It’s possible that the harm to society of knowing about and expecting torture is greater than the benefit of using torture. In that case, torturing in absolute secret seems to be the way to maximize utility. Not particularly comforting.
A) It’s not credible.
B) The basilisk only “works” on a very few people and as far as I can tell it only makes them upset and unhappy rather than working as hard as they can on FAI.
C) Getting people on your side is pretty important. Telling people they will be tortured if they don’t get on your side is not a very good move for a small organization.
It’s possible that the harm to society of knowing about and expecting torture is greater than the benefit of using torture. In that case, torturing in absolute secret seems to be the way to maximize utility. Not particularly comforting.
Um, the threat of torture only works if people know about the threat.
Some early experimental studies with LSD suggested that doses of LSD too small to cause any noticeable effects may improve mood and creativity. Prompted by recent discussion of this claim and the purely anecdotal subsequent evidence for it, I decided to run a well-powered randomized blind trial of 3-day LSD microdoses from September 2012 to March 2013. No beneficial effects reached statistical significance and there were worrisome negative trends. LSD microdosing did not help me.
I recently played and won an additional game of AI Box with DEA7TH. Obviously, I played as the AI. This game was conducted over Skype.
I’m posting this in the open thread because unlike my last few AI Box Experiments, I won’t be providing a proper writeup (and I didn’t think that just posting “I won!” is enough to validate starting a new thread). I’ve been told (and convinced) by many that I was far too leaky with strategy and seriously compromised future winning chances of both myself and future AIs. The fact that one of my gatekeepers guessed my tactic(s) was the final straw. I think that I’ve already provided enough hints for aspiring AIs to win, so I’ll stop giving out information.
The fact that one of my gatekeepers guessed my tactic(s) was the final straw.
I guess you used words. That seems to be all the tactical insight needed to develop an effective counter-strategy. I really don’t get how this escaping thing works on people. Is it due to people being systematically overconfident in their own stubbornness? I mean I know I couldn’t withstand torture for long. I expect even plain interrogation backed by credible threats would break me over time. Social isolation and sleep deprivation would break me too. But one hour of textual communication with a predefined and gamified objective and no negative external consequences? That seems so trivial..
Other people have expressed similar sentiments, and then played the AI Box experiment. Even the ones who didn’t lose still updated to “definitely could have lost in a similar scenario.”
Unless you have reason to believe your skepticism comes from a different place than theirs, you should update towards gatekeeping being harder than you think.
Unless you have reason to believe your skepticism comes from a different place than theirs, you should update towards gatekeeping being harder than you think.
Unless I have already heard the information you have provided and updated on it, in which case updating again at your say-so would be the wrong move. I don’t tend to update just because someone says more words at me to assert social influence. Which is kind of the point, isn’t it? Yes, I do have reason to believe that I would not be persuaded to lose in that time.
Disagreement is of course welcome if it is expressed in the form of a wager where my winnings would be worth my time and the payoff from me to the gatekeeper is suitable to demonstrate flaws in probability estimates.
He wrote Reactionary Philosophy in an enormous, planet-sized nutshell back in March, as a precursor to a reactionary takedown essay that never seemed to materialize, other than a few bits and pieces, such as the one on how war is on the decline. This FAQ seems to be the takedown he was aiming for, so I imagine he’s been building it for at least the past seven months, probably longer.
(ETA: In the comments on the Anti-reactionary FAQ, Scott says it took roughly a month, so I guess it wasn’t as much of an on-going project as I predicted.)
I’m keeping this question to this thread so as not to spam political talk on the new open thread.
What does a post-scarcity society run by reactionaries look like? If state redistribution is not something that is endorsed, what happens to all the people who have no useful skills? In a reactionary utopia where there is enough production but no efficient way to distribute resources based on ability or merit, what happens to the people who have been effectively replaced by automation? Is it safe to assume that there are no contemporary luddites among reactionaries?
What does a post-scarcity society run by reactionaries look like?
I can answer that question to a certain extent, as I’ve talked to several people in reaction who have thought about it, as have I. At least once we look far into the posthuman era, it might be most easily imagined as a society of gods above and beasts below, something the ancient Greeks found little difficulty imagining and certainly didn’t feel diminished their humanity. An important difference from the posthuman fantasies often imagined is that the superiority of transhuman minds would not be papered over by fictional legal equality; there would be a hierarchy based on the common virtues the society held in regard, and there would be efforts to ensure the virtues remained the same, to prevent value drift. Much of the society would be organized along the lines of striving to enable (post)human flourishing as defined by the values of the society.
An aristocracy prevailing, indeed “rule of the best”, with at least a ceremonial Emperor at its apex. Titles of nobility were, in theory, awarded in ancient societies to incentivize people toward long-term planning, define their place, formalize their unique influence as owners of land and warriors, define the social circle you are expected to compare yourself with, and in the expectation of the good use of such privilege by people from excellent families. Extending and indeed much improving such a concept offers fascinating possibilities, compatible with human imagination and preferences; think of the sway that nobility, let alone magical or good queens, dukes, and knights, hold over even our modern imagination. Consider that in a world where aging is cured and heredity, be it genetic or otherwise, is fully understood, where minds are emulated and merge and diverge, the line between you and your ancestors/previous versions blurs. A family with centuries of diligent service, excellence, virtue, daring, and achievement… I can envision such a grand noble lineage made up of essentially one person.
But this is an individual aspect of the vision. The shape of galactic civilization is often the more motivating aspect to most. To quote Aaron Jacob:
There will be extrasolar planets covered in cathedrals and flags, and Nike will be a Greek word and nothing more. #progress
But there is a subgroup of reaction, including Francis St. Pol, that might lean more strongly toward raw intelligence maximization. And Nick Land embraces capitalism in all its forms tightly.
In a reactionary utopia where there is enough production but no efficient way to distribute resources based on ability or merit, what happens to the people who have been effectively replaced by automation?
This is, I think, an example of a near-future issue. The answers I have heard are: the welfare state but with eugenics (many agree with the basic income guarantee); makework, especially relatively fulfilling makework such as crafting “handmade” items for consumption or perhaps farming; and the pod option (virtualization). The latter is indeed at least partial wireheading, but I wonder how much it actually would be if the humans living virtual lives are allowed to interact with each other in something like a very fun MMO; their social relations would still be quite real, and I think fundamentally this is all most really care about. This option becomes especially economical if uploading minds becomes cheap. I would add the option of humane suicide, but I’m not sure how many would agree.
Is it safe to assume that there are no contemporary luddites among reactionaries?
To a large extent yes, but enthusiasm for technology varies. Most are moderately enthusiastic about technology and believe Progressivism is holding back civilization. Nick Land is an example of an outlier in the pro-technology direction, but there are a few on the other end: those who agree with a variant of the argument Scott Alexander has rediscovered, that technology and wealth inevitably cause the changes in values they find detrimental. But I don’t recall any of them arguing for a technological rollback, because none think it feasible.
From what I understood based on reading the anti-reactionary faq, Scott’s interpretation of Moldbug’s interpretation of an ideal reactionary king would either arrange infrastructure such that there are always jobs available, or start wireheading the most useless members of society (though if I’m reading it right, Moldbug isn’t all that confident in that idea, either). I’d not mind a correction (as Scott points out, either option would be woefully inefficient economically).
As a part of my Master’s thesis in Computer Science, I am designing a game which seeks to teach its players a subfield of math known as Bayesian networks, hopefully in a fun and enjoyable way. This post explains some of the basic design and educational philosophy behind the game, and will hopefully also convince you that educational games don’t have to suck.
I will start by discussing a simple-but-rather-abstract math problem and look at some ways by which people have tried to make math problems more interesting. Then I will consider some of the reasons why the most-commonly used ways of making them interesting are failures, look at the things that make the problems in entertainment games interesting and the problems in most edutainment games uninteresting, and finally talk about how to actually make a good educational game. I’ll also talk a bit about how I’ll try to make the math concerning Bayesian networks relevant and interesting in my game, while a later post will elaborate more on the design of the game.
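(Not from the post or the thesis, just my own toy illustration of the kind of math the game is about: the smallest possible Bayesian network, one cause and one effect, with inference from effect back to cause by Bayes’ rule.)

```python
# Two-node Bayesian network: Rain -> WetGrass.
# All numbers here are made-up illustrative values, not from the game.
p_rain = 0.2
p_wet_given_rain = 0.9
p_wet_given_no_rain = 0.1

# Marginal probability that the grass is wet.
p_wet = p_wet_given_rain * p_rain + p_wet_given_no_rain * (1 - p_rain)

# Posterior probability of rain given that the grass is wet (Bayes' rule).
p_rain_given_wet = p_wet_given_rain * p_rain / p_wet
print(round(p_rain_given_wet, 3))  # 0.692
```

Larger networks chain many such conditional probability tables together, which is where the interesting structure for a game comes from.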
I was thinking recently that if soylent kicks something off and ‘food replacement’ -type things become a big deal, it could have a massive side effect of putting a lot of people onto diets with heavily reduced animal and animal product content. Its possible success could inadvertently be a huge boon for animals and animal activists.
Personally, I’m somewhat sympathetic towards veganism for ethical reasons, but the combination of trivial inconvenience and lack of effect I can have as an individual has prevented me from pursuing such a diet. Soylent would allow me to do so easily, should I want to. Similarly, there are people who have no interest in animal welfare at all. If ‘food replacements’ become big, it could mean the incidental conversion of those who might otherwise never have considered veganism or vegetarianism to a lifestyle that fits within those bounds, purely for their own cost or convenience reasons.
I anticipate artificial meat having a much bigger impact than meal-replacement products. I anticipate that demand for soylent-like meal replacement products among the technophile cluster will peak within the next three years, and will wager $75 to someone’s $100 that this is the case if someone can come up with a well-defined metric for checking this.
Note that the individual impact you can have by being a vegetarian is actually pretty big. Sure, it’s small in terms of _percentage_ of the problem, but that’s the wrong way to measure effect. If you saw a kid tied to railroad tracks, you wouldn’t leave them there on account of all the children killed by other causes every day.
Let $X = the cost to me of being a vegetarian. I’m indifferent between donating $X to the best charity I can find or being a vegetarian. For what values of $X would you advise me to become a vegetarian, assuming that if I don’t become a vegetarian I really will donate an extra $X to, say, MIRI?
Being a vegetarian does not have a positive monetary cost, unless it makes you so unhappy that you find yourself less motivated at work and therefore earn less money or some such. Meat may be heavily subsidized in the US, but it’s still expensive compared to other foods.
I would rather pay $8,000 a year than be a vegetarian. Consequently, if my donating $8,000 to a charity would do more good for the rest of the world than my becoming a vegetarian would, it’s socially inefficient for me to become a vegetarian.
You can make a precommitment to do only one or the other, but if you become vegetarian you don’t actually lose the $8,000 and become unable to give it to MIRI. In this sense it is not a true tradeoff unless happiness and income are easily interconvertible for you.
I fight the hypothetical—there is no such tradeoff.
A more concrete hypothetical: Suppose that every morning when you wake up you’re presented with a button. If you press the button, an animal will be tortured for three days, but you can eat whatever you want that day. If you don’t press the button, there’s no torture, but you can’t eat meat. By the estimates in this paper, that’s essentially the choice we all make every day (3:1 ratio taken by a_m times l_m = at least 1000 animal-days of suffering avoided per year of vegetarianism ~= 3 days of torture per day of vegetarianism).
Anyway—you should not be a vegetarian iff you would press the button every day.
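(A back-of-the-envelope check of that parenthetical, using only the “at least 1000 animal-days per year” figure quoted above; the paper’s actual parameters aren’t reproduced here.)

```python
# Rough conversion: ~1000 animal-days of suffering avoided per year of
# vegetarianism (the figure quoted above), spread over 365 days.
animal_days_per_year = 1000
days_per_year = 365

days_of_suffering_per_day_veg = animal_days_per_year / days_per_year
print(round(days_of_suffering_per_day_veg, 1))  # ~2.7, i.e. roughly 3 days
```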
This is absurd. I really, really would rather pay $8,000 a year than be a vegetarian. Do you think I’m lying or don’t understand my own preferences? (I’m an economist and so understand money and tradeoffs and I’m on a paleo diet and so understand my desire for meat.)
I would rather live in a world in which I donate $8,000 a year to MIRI and press the button to one in which I’m a vegetarian and donate nothing to charity.
There is no market for your proposed trade. In this case using money as a proxy for utility/preference doesn’t net you any insight because you can’t exchange vegetarianism or animal-years-of-torture for anything else. Of course you can convert to dollars if you really want to, but you have to convert both sides—how much would you have to be paid to allow an animal to be tortured for three days? (This is equivalent to the original question, we’ve just gone through some unnecessary conversions).
Have you/they thought about other environmental implications? Processing everything down to simple nutrients to make the drink doesn’t sound very energy efficient. Might compete with eating meat, but definitely not with veganism.
Personally, I haven’t really thought of it. Might be an angle worth looking at the product from, you’re right.
I haven’t really been following their progress or anything, so I don’t know, but it’s possible they’ve touched on it at some point before. You could dig around on the soylent forum or even start the topic yourself if you really felt like it. I think the creators of the product are reasonably active on there.
One of the primary ingredients of soylent is whey protein, which is produced from cow’s milk. It is not a vegan product.
Whey is a byproduct of cheesemaking, which is why it is currently relatively inexpensive. If people started consuming whey protein en masse, it would shift the economics of whey production and dairy cow breeding in potentially highly unfavorable directions for both the cows and the soylent enthusiasts (because it would become more expensive).
Sadly, there doesn’t seem to be any viable alternative to whey at this point (if there was, they’d use that, but there isn’t).
Previously the only factor preventing Soylent from being vegan was the use of whey protein. Whey is attractive due to its high absorption rate and complete amino acid profile, granting it a perfect PDCAAS score of 1.0. However, it is an animal product, some whey proteins can trigger allergic responses, and concerns were raised over the potential presence of lactose.
To allay these issues we have switched to a rice protein isolate / pea protein isolate blend. Rice protein is mostly complete except for a lack of Lysine and Leucine. This is why rice and beans became such a staple food, the beans make up for the Lysine deficiency of rice. In our staple food the blend of pea and rice protein isolate provide a complete amino acid profile with minimal risk of inflammation or allergic reactions.
soylent blog, 2013-07-24
We have found that Pea Protein is not available at the scale we demand. To compensate for this, we had to source and integrate pure Lysine into the formula, so everyone will get their complete amino acid profile.
Thanks for the info. While I suppose this is an improvement, I wonder about the scalability of this approach and the impact on the environment. Rice doesn’t exactly produce that much protein per acre of land. I’ll have to look at the numbers though.
I know someone who has a young child who is very likely to die in the near future. This person has (most likely) never heard of cryonics. My model of this person is very unlikely to decide to preserve their child even if they knew about it.
I don’t know if I should say something. At first I was thinking that I should because the social ramifications are negligible. After thinking about it for a while, I changed my mind and decided that possibly I was just trying to absolve myself of guilt at the cost of offending a grieving parent. I am not sure if this is just rationalization.
You should reconsider this assumption. I would imagine that making suggestions about what to do with someone’s soon-to-be-dead child’s body would be looked upon coldly at best and with active hostility at worst. It’s like if you suggested you knew a really good mortician; it’s just not the sort of thing you’re supposed to be saying.
There’s also the fact that, as a society, we are very keen when watching the bereaved for signs they haven’t accepted the death. To most people cryonics looks like a sort of pseudoscientific mummification, and the idea that such a person could be revived looks delusional. It is easy to imagine that if your friend shelled out hundreds of thousands on your say-so for such a project, people might see you as preying on a mentally vulnerable person.
This is not to make a value judgement or a suggestion, just pointing out that the social consequences are quite possibly non-negligible.
If you have not signed up for cryonics yourself, you could ask this person for advice as to whether you should. If you have signed up, you could work this into a conversation. Or just find some video or article likely to influence the parent and forward it to him, perhaps an article mentioning Kim Suozzi.
What expert advice is worth buying? Please be fairly specific and include some conditions on when someone should consider getting such advice, and focus on individuals and families rather than, say, corporations.
I ask because I recently brainstormed ways that I could be spending my money to make my life better and this was one thing that I came up with and realized I essentially never bought except for visiting my doctor and dentist. Yet there are tons of other experts out there willing to give me advice for a fee: financial advisers, personal trainers, nutritionists, lawyers, auto-mechanics, home inspectors, and many more.
Personal fitness folk: doing Starting Strength is three hours a week that will make all the rest much better, and a personal trainer will make your form good, which is really important. If your conscientiousness is normal, tutors rock. If you can afford one, hire a tutor.
Most personal trainers will not be able to help you have awesome form in powerlifting (Starting Strength) lifts. You’re better off submitting videos of your form to forums devoted to such things than with the average PT.
How many people here use Anki, or other Spaced Repetition Software (SRS)?
[pollid:565]
I’m finding it pretty useful and wondering why I didn’t use it more intensively before. Some stuff I’ve been adding into Anki:
Info about data structures and algorithms (I’m reading a book on them, and think it’s among the most generally useful knowledge for a programmer)
Specific commands for tools I use a lot (git, vim, bash—stuff I used to put into a cheat sheet)
Some Japanese (seems at least half of Anki users use it to learn Japanese)
Tidbits from lukeprog’s posts on procrastination
Some Haskell (I’m not studying it intensively, but doing a few exercises now and then, and adding what I learn in Anki)
I have much more stuff I’d like to Ankify (my notes on Machine Learning, databases, on the psychology of learning; various inspirational quotes, design patterns and high-level software architecture concepts …).
Some ways I got better at using Anki:
I use far fewer pre-made decks
I control the new-cards-per-day depending on how much I care about a topic. I don’t care much about vim, so have 3 to 5 new cards per day, but go up to 20 for procrastination or
I reorder my decks according to how much I care about them (I have a few decks prefixed with zzz that I review only if the others are done; I don’t mind forgetting about those)
For Japanese, I use double-sided cards and Japanese character input for creating them (I used to manually make both-way cards)
I have various google docs for stuff I’d like to eventually put into Anki, that I then copy-paste by batch into the web interface (there are probably even more convenient ways, but so far I find that the quickest—I want to be able to work on my list of entries before it goes in Anki)
I should probably make a top-level “reminder: use Spaced Repetition” post, but I’m still going to wait a bit more to have a bit more perspective.
Any other tips/advice/spaced repetition stories?
I’ve abandoned many decks almost completely because I made cards that were too complex.
Make the cards simple and combat interference. That doesn’t mean you can’t learn complex concepts. Now that I’ve got it right, I can go through hundreds of reviews per day if I’ve fallen behind a bit, and don’t find it exhausting. If I manage to review every day, it’s because I’m doing it first in the morning.
I use a plugin/option to make the answer show automatically after 6 seconds, so it’s easy to spot cards that are formatted badly or cause interference, and take too much time.
Some general Anki tips:
If you use it to learn a foreign language use the Awesome TTS plugin. Whenever Anki displays a foreign word it should also play the corresponding sound. Don’t try to consciously get the sound. Just let Anki play the sound in the background.
I use a plugin that adds extra buttons to new cards. I changed it in a way that gives the 6th button a timeframe of 30–60 days until the new card shows up the second time. I use that button for cards that are highly redundant.
Frozen Fields is a plugin that’s useful for creating cards and I wouldn’t want to miss it. It allows you to prevent specific fields in the new-card dialog from being cleared when you create a new card.
Quick Colour Changing is another useful addon. It allows you to use color more effectively to highlight aspects of cards.
One of the core ideas that I developed over time is that you really want to make cards as easy as possible.
I think the problem with most premade cards that you find online is that they just aren’t easy enough. They take too much for granted.
Take an issue such as the effect of epinephrine on the heart. It raises heart rate.
Most of the decks that you find out there would ask something like: “What’s the effect of epinephrine on the heart?”
That’s wrong. That’s not basic enough. It’s much simpler to ask: “epinephrine ?(lowers/raises)? heart rate”
I think that idea also helps a lot with language learning. I think the classic idea of asking
“What does good mean in French?” is problematic. If you look in the dictionary you will find multiple answers and the card can only hold one answer. A card that just asks “good means ?(bon/mal)?” is much simpler.
I have made a French Anki deck using that principle and it’s astonishing to me how well the learning flows.
If someone wants to test the deck I’m happy to share it. I would estimate the effects to be that for a lifetime investment of 20 hours you get the 200 most common French words + ~100 additional words. For most of the verbs you will be able to recognize the three basic tenses (present, future, and passé simple). I think you will know the words well enough to understand them when you read a text. If you want conversational fluency with those words I think you will need additional practice. The deck is (French/English). For those of you who want to start using Anki I think it would be a good start.
If I were, for example, to start a Vim deck now, I would group functions. Take some knowledge like:
w next word (by punctuation); W next word (by spaces)
b back word (by punctuation); B back word (by spaces)
e end word (by punctuation); E end word (by spaces)
This makes cards:
?(w/W)? → next word by punctuation
?(w/W)? → next word by spaces
?(w/b/e)? → next word by punctuation
?(W/B/E)? → next word by spaces
I would also add:
?(q/w/e/r/t)? → next word by punctuation
?(w/s/x)? → next word by punctuation
This probably lets you answer each card in ~4 seconds. The cards aren’t hard. You can simply integrate a new deck of 500 of those cards in an hour once the deck is ready.
Using it regularly is the most important thing by far. I don’t use it anymore, the costs to starting back up seem too high (in that I try and fail to re-activate that habit), I wish I hadn’t let that happen. Don’t be me; make Anki a hardcore habit.
Why not just restart from scratch with empty decks? It should be less daunting at first...
My strategy to avoid losing the habit is having decks I care less about than others, so that when I stopped using Anki for a few weeks, I only had to catch up on the “important” decks first, which was less daunting than catching up with everything (I eventually caught up with all the decks, somewhat to my surprise).
I’m also more careful than before in what I let in—if content seems too unimportant, it gets deleted. If it’s difficult, it gets split up or rewritten. And I avoid adding too many new cards.
Continuing with your current deck should be strictly superior to starting from scratch, because you will remember a substantial portion of your cards despite being late. Anki even takes this into account in its scheduling, adjusting the difficulty of cards you remembered in that way. If motivation is a problem, Anki 2.x series includes a daily card limit beyond which it will hide your late reviews. Set this to something reasonable and pretend you don’t have any late cards. Your learning effectiveness will be reduced but still better than abandoning the deck.
I’ve previously let Anki build up a backlog of many thousand unanswered cards. I cleared it gradually over several months, using Beeminder for motivation.
I think when restarting a deck after a long time it’s important to use the delete button a lot. There might be cards that you just don’t want to learn and it’s okay to delete them.
You could also gather the cards you think are really cool and move them into a new deck and then focus on learning that new deck.
When using pre-made decks the only efficient way is to follow along, i.e. if you don’t know the source book/course it’s not very good. Partial exception, vocabulary lists.
Agreed—and you can even go wrong with vocabulary lists if they’re too advanced (some German vocabulary got overwhelming for me, I just dropped everything).
Another partial exception can be technical references (learning keywords in a programming language or git commands).
People who want to eat fewer animal products usually have a set of foods that are always okay and a set of foods that are always not (which sometimes still includes some animal products, such as dairy or fish), rather than trying to eat animal products less often without completely prohibiting anything. I’ve heard that this is because people who try to eat fewer animal products usually end up with about the same diet they had when they were not trying.
I wonder whether trying to eat more of something that tends to fill the same role as animal products would be an effective way to eat fewer animal products.
I currently have a fridge full of soaking dried beans that I have to use up, and the only way I know how to serve beans is the same as the way I usually eat fish, so I predict I’ll be eating much less fish this week than I usually do (because if I get tired of rice and beans, rice and fish won’t be much of a change). I’m not sure whether my result would generalize to people who use more than five different dinner recipes, though. I should also add that my main goal is learning how to make cheap food taste good by getting more practice cooking beans—eating fewer animal products would just be a side effect.
Now that I write this, I’m wishing I’d thought to record what food I ate before filling my fridge with beans. (I did write down what I could remember.)
Those are the people who you know want to eat fewer animal products. If I just decided to eat less meat, you’d be much less likely to find out this fact about me than if I decided to become fully lacto-ovo-vegetarian.
People who want to eat fewer animal products usually have a set of foods that are always okay and a set of foods that are always not (which sometimes still includes some animal products, such as dairy or fish), rather than trying to eat animal products less often without completely prohibiting anything.
I don’t think that’s an accurate description of the average vegetarian. A lot of self-labeled vegetarians do eat animal products from time to time.
Most people who tell you that they try to eat only healthy food and no junk food, still eat junk food from time to time. The same goes for vegetarians eating flesh.
Additionally, eating less red meat is part of the official mantra on healthy eating. A lot of people subscribe to the idea that limiting the amount of red meat they eat is good while not eliminating it completely.
I’ve heard that this is because people who try to eat fewer animal products usually end up with about the same diet they had when they were not trying.
I find this hard to believe, knowing several people who have become vegetarians and vegans and hardly ever eating meat myself. Do you have any support for this claim? Anecdotally, one new vegan (from being a vegetarian) stopped eating pizza, which had previously been more-or-less a mainstay of his. My sister became a vegetarian as a kid despite actually quite liking meat at the time; not only did her eating habits change, but those of my entire family did, significantly. My parents describe it as going from thinking “What meat is for dinner?” to thinking “What is for dinner?” every night.
I would like recommendations for a small, low-intensity course of study to improve my understanding of pure mathematics. I’m looking for something fairly easygoing, with low time-commitment, that can fit into my existing fairly heavy study schedule. My primary areas of interest are proofs, set theory and analysis, but I don’t want to solve the whole problem right now. I want a small, marginal push in the right direction.
My existing maths background is around undergrad-level, but heavily slanted towards applied methods (calculus, linear algebra), statistics and algorithms. My knowledge of pure maths is pretty fractured, not terribly coherent, and mostly exists to serve the applied areas. I am unlikely to undertake any more formal study in pure mathematics, so if I want to consolidate this, I’ll have to do it myself.
This came to my attention as I’ve recently started teaching myself Haskell. This is mostly an intellectual exercise, but at some point in the future I would like to work with provable systems. I can recognise the homology between some constructs in Haskell and mathematical objects, but others I don’t notice until they’re explicitly pointed out. I get the very strong impression that my grasp on functional programming would be a lot more powerful if I had a stronger grounding in pure maths.
If you like Haskell’s type system I highly recommend learning category theory. This book does a good job. Category theory is pretty abstract, even for pure math. I love it.
I can recognise the homology between some constructs in Haskell and mathematical objects, but others I don’t notice until they’re explicitly pointed out.
Essentially, this kind of math is called category theory. There is this book, which is highly recommended, and fills your criteria decently well. I am currently working through this book, and I am happy to discuss things with you if you would like.
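(One concrete instance of that correspondence, my own illustration rather than anything from either book: Haskell’s Functor typeclass is a category-theoretic functor, with fmap as its action on morphisms, and the Functor laws are exactly the functor laws:)

\[ F(\mathrm{id}_A) = \mathrm{id}_{F(A)}, \qquad F(g \circ f) = F(g) \circ F(f) \]

In Haskell notation these read fmap id = id and fmap (g . f) = fmap g . fmap f.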
I am not sure if it is good for your background and needs, but I would like to mention The Book of Numbers. I read and understood this book in high school without any formal training in calculus. I think this book is very effective at showing people how math can be beautiful in a context that does not have many prerequisites.
I sometimes use the term ‘accessible’ in the Microsoft sense.
The mouthful version of ‘accessible’ is something like this: To abstractly describe the character of a human interactive or processed experience when it is tailored to not exceed the limitations of the particular human being to which it is being presented.
So, if you are blind or paralyzed, your disability prevents you from using a computer terminal in the normal way without some assistive technology. If you are confined to a wheelchair, you cannot easily enter a building without a sloped ramp.
And everyone is ‘disabled’ in terms of not having infinitely capable brains or strengths of will. We can only absorb so much information so fast, and we all have limited cognitive potential and capacity to resist detrimental impulse.
The ‘accessible society’ is an ideal where we cease to propagate the common legal fiction of ‘choice, agreement, and contract by notice and informed consent’ and are honest with people that they will only be given the choices that they have the potential to make responsibly for themselves. This is the same kind of custodianship / guardianship relationship we insist upon for the legally incompetent, like children or the senile, and when we admit that all adults are in reality ‘disabled’ and ‘incompetent’ below the libertarian ideal to some degree or another, then it is just enlightened paternalism.
Even if you are smart, but you are not an expert in a complex licensed profession (say, the law), or practiced in some skilled trade (say, auto repair) then sometimes that ‘assistive technology’ is another person, perhaps an agent or ombudsman, who can ‘boil it all down for you’, and ‘bring it down to your level’ as a layman. He presents simple questions to you to establish your preferences and priorities, and then he uses his skills to take care of the rest. It’s a black box to you, and a form of specialization for which we are usually willing to pay. Gains from trade and all that.
The theory of general suffrage in a republic also uses this justification to rationalize how individuals who are incompetent to govern can nevertheless express their preferences and have fiduciary-like representatives of their interests govern on their behalf. Obviously, it doesn’t work this way. Because it can’t.
Part of the problem is presented by the question, “What if you can’t ‘black box’ the mess away?” The principal is required to make certain difficult decisions, but the complexity involved in making a genuine individual choice is irreducible. And what if, furthermore, something is so complicated that there simply are no human agents actually able to navigate the confusing maze?
So, in this sense of ‘accessible’, I mean something like ‘comprehensible’, ‘digestible’, ‘fathomable’, ‘intelligible’, etc.
So, while it might be possible to build manned fighter jets capable of taking turns at 20g, it would be pointless for us to do so because it would turn the pilot’s brains into pulp. In general, nothing should be built that exceeds the potential of the individuals who must wield it. This category includes the governance of organizations.
...
How can we make things more accessible? Here’s one clever way from the pre-financial crisis, pre-CFPB real world. Too lazy to google the source at the moment, but I was taught about a regulation concerning a certain key part of Credit Card contracts. The idea was that the agency involved would take the language directly from a bank’s advertised agreement and would then form a kind of focus group which would be a, ahem, ‘cognitively-representative’ sample of the, ahem, ‘most vulnerable’ set of target consumers.
The agency would have these poor, nearly-but-not-quite-incompetent-to-contract individuals read the language of the offer (as if anyone, even smart people, actually did that), and they would then give them a very simple, true-false quiz about the key elements of the offer – the interest rate, delinquency penalty, etc. If the tender minds didn’t do at least a little better than random guessing on the quiz, then the agency wouldn’t permit the bank to advertise the offer in that form. Back to the drawing board!
Of course, this lowest-common denominator approach to accessibility will certainly overprotect more competent and sophisticated adults from entering into higher-risk-higher-reward agreements. Instead of presuming maturity and competence, government can discriminate and only license the most savvy individuals (or some proxy for astuteness, like wealth) to participate in such ventures, much as the SEC already does with its rules governing Accredited Investors.
But in general, the lesson is that when the government really cares about the capacity for something to be understood, it tests for that comprehension and nothing gets past the post without such verification of accessibility.
I upvoted this, even though the part where wealth is suggested as a filter for competence completely fails to distinguish the Bill Gateses (rich because competent) from the Paris Hiltons (rich because someone somewhere in the ancestry was competent and/or lucky). (Though it’s possible I just upvoted it because it starts out talking about accessibility and how the existence of imperfect beings kinda nukes the idea of libertarian free will, both of which I wish more people understood.)
After Conrad decided to give 97% of his fortune to charity, it appears to me that Paris will earn more money than she will inherit. Even if she is as stupid as the character she plays, she has acquired competent agents.
I don’t have much of a point, but people who win the fame tournament are probably not famous by accident.
His argument against Haidt’s ideas on differences between liberals and conservatives, related to the differing psychology in his moral foundations theory, is similar to the ones Vladimir_M and Bryan Caplan made, but he upgrades it with a plausible explanation for why it might seem otherwise. The references are well worth checking out.
I recently found out a surprising fact from this paper by Scott Aaronson. P=NP does not (given current results) imply that P=BQP. That is, even if P=NP there may still be substantial speedups from quantum computing. This result was surprising to me, since most computational classes we normally think about that are a little larger than P end up equaling P if P=NP. This is due to the collapse of the polynomial hierarchy. Since we cannot resolve whether BQP lives in the polynomial hierarchy, we can’t make that sort of argument.
Sure, but that’s just saying that P=NP is not a robust hypothesis. Conditional on P=NP, what odds do you put that P is not P^#P or PSPACE? (though maybe the first is a robust hypothesis that doesn’t cover BQP)
Conditional on P=NP, what odds do you put that P is not P^#P or PSPACE? (though maybe the first is a robust hypothesis that doesn’t cover BQP)
I’m not sure. If P=NP this means I’m drastically wrong about a lot of my estimates. Estimating how one would update conditioning on a low probability event is difficult because it means there will be something really surprising happening, so I’d have to look at how we proved that P=NP to see what the surprise ended up being. But, if that does turn out to be the case, I’m fairly confident I’d then assign a pretty high probability to P=PSPACE. On the other hand we know that of the inequalities between P, NP, PSPACE and EXP, at least one of them needs to be strict. So why should I then expect it to be strict on that end? Maybe I should then believe that PSPACE=EXP? PSPACE feels closer to P than to EXP but that’s just a rough feeling, and we’re operating under the hypothetical that we find out that a major intuition in this area is wrong.
I like the ideas of (1) providing an alternative video introduction, because some people like that stuff, and (2) having the last part of “what to do after reading LessWrong”.
I think the rationality videos should be even linked from the LW starting page. Or even better, the LW starting page should start with a link saying “if you are here for the first time, click here”, which would go to a wiki page, which would contain the links to videos (with small preview images) on the top.
Cheers—yeah, especially for my friends for whom reading a couple of those posts would be a big deal, the talks are very useful. I’ll make a top-level comment on next week’s open thread proposing the idea :)
Added: By the way, as to the ‘post LW’ section, you might’ve noticed that the last post in ‘welcome to Bayesianism’ is a critique of LessWrong as a shiny distraction rather than of actual practical use. I’m hoping the whole thing leads people to be more practically rational and involved in EA and CFAR.
Might be useful to have introduction points for people with a certain degree of preexisting knowledge of the subject but from other sources. E.g. if I want to introduce a philosophy postgrad to LessWrong, I would want to start with a summary of LessWrong’s specific definition of ‘rationality’ and how it compares to other versions, rather than starting from scratch.
I’m sorry, I had a little difficulty parsing your comment; are you saying that my introduction would be useful for a philosophy postgrad, or that my summary is starting from scratch and the former would be something for someone to work on?
LW tells people to upvote good comments and downvote bad comments. Where do I set the threshold of good/bad? Is it best for the community if I upvote only exceptionally good comments, or downvote only very bad comments, or downvote all comments that aren’t exceptionally good, or something else? Has this been studied? Is it possible to make a karma system where this question doesn’t arise?
Information theory says that you communicate the most if you send the three signals of up, down, nothing equally often. This would be a psychological disaster if everyone did it, but maybe you should.
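(For reference, my gloss rather than part of the original comment: a single vote is a three-symbol signal, and its entropy

\[ H = -\sum_i p_i \log_2 p_i \le \log_2 3 \approx 1.58 \text{ bits} \]

is maximized when up, down, and abstain each occur with probability 1/3.)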
It seems to me that the total voting ought to reflect the “net reward” we want to give the poster for their action of posting, like a trainer rewarding good behavior or punishing bad. For this reason, my voting usually takes into account the current total score. I think the community already abides by this for most negatively scored posts—they usually don’t sail much below −2. For posts that I feel I really benefited from, though, I don’t really follow my own policy per se; I just “pay back” what I got out of it to them.
Where do I set the threshold of good/bad?
I basically only downvote if there’s some line of argument that I object to in the post. I think I need to say what I’m objecting to specifically when I do this more often.
Is it best for the community if I upvote only exceptionally good comments, or downvote only very bad comments, or downvote all comments that aren’t exceptionally good, or something else?
My opinion is it has to depend on the current score of the post. [At least under the current system, which reports, if you will, net organic responses; in a different system where responses were from solicited peer-review requests, different behavior would be warranted.]
Has this been studied? Is it possible to make a karma system where this question doesn’t arise?
Good questions. I don’t know. There’s some further discussion here.
It seems to me that the total voting ought to reflect the “net reward” we want to give the poster for their action of posting,
This should be implemented in the system if done at all. Downvoting “nondeservingly” upvoted posts will make obvious but true comments look controversial. I think inconsistently meta-gaming the system just makes it less informative.
If you don’t think something deserves the upvotes, but isn’t wrong, then simply don’t vote.
ETA: I assume you didn’t mean that downvoting to balance the votes is good, but you didn’t mention it either.
Downvoting “nondeservingly” upvoted posts will make obvious but true comments look controversial.
Good point. I don’t actually do that; I follow the “don’t vote” policy you mentioned, but I hadn’t thought about why, or even noticed that I was already doing it right. Thanks. Your point that it would make the voting look controversial is well taken.
I would be tempted to upvote something that I thought had karma that was too low. This would tend to cause it to look “controversial” when, maybe, I agreed that it deserved a negative score. Is upvoting behavior also a bad idea in this case and I should just “not vote”?
This should be implemented in the system if done at all.
I don’t see how that’s possible without it having more information.
I don’t want to overthink this too much as I can’t help but think that these issues are artifacts of the voting system itself being a bit crude: e.g. should I be able to “vote” for a target karma score instead of just up or down? The score of the post could be the median target score.
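A rough sketch of how the “target score” variant could be aggregated, purely to make the proposal concrete (this is not how LW karma works, and the function is hypothetical):

    from statistics import median

    def displayed_score(target_votes):
        # Each voter names the karma they think the comment deserves;
        # the displayed score is the median of those targets, so it
        # doesn't drift upward just because more people vote.
        return median(target_votes) if target_votes else 0

    print(displayed_score([3, 5, -2, 4, 4]))  # -> 4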
I don’t know. I’m quite green here too. I don’t usually read heavily downvoted comments, as they’re hidden by default. Downvoted comments are less visible anyway, so any meta-gaming on them has less meaningful impact.
I might upvote a downvoted comment, if I don’t understand why it’s downvoted and wanted it to be more visible so that discussion would continue. It would be a good to follow up with a comment to clarify that, but many times I’m too lazy :(
I think making the system more complicated would just make people go even more meta.
I think that if we could coordinate perfectly what we mean by good comments, and each comment has a score between 0 and 1, then we should all upvote a comment with a positive score with a probability equal to its score, and downvote a comment with negative score with probability equal to its negative score.
This would cause the karma assigned to a post to drift over time unboundedly, with an expectation of (the traffic that it receives) * (the average score assigned by voters), which seems problematic to me.
Nitpick: maybe you want the score to run between −1 and 1 and voting probability to be according to the absolute score? I’m confused by your phrase “comment with negative score”.
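A quick simulation of the probabilistic-voting proposal, to make the drift concern concrete; the “true score” of 0.6 and the traffic figures are made up for the example:

    import random

    def simulated_karma(true_score, n_readers, seed=0):
        # Under the proposal, each reader upvotes with probability equal to the
        # comment's (positive) score, so expected karma is n_readers * true_score
        # and keeps growing with traffic instead of converging.
        random.seed(seed)
        return sum(1 for _ in range(n_readers) if random.random() < true_score)

    for n in (100, 1000, 10000):
        print(n, simulated_karma(0.6, n))   # roughly 60, 600, 6000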
Why are AMD and Intel so closely matched in terms of processor power?
If you separated two groups and incentivized them to develop the best processors and came back in 20 years, I wouldn’t expect the two groups to have performed comparably. Particularly so if the one that is doing better is given more access to resources. I can think of a number of potential explanations, none of which are entirely satisfactory to me, though. Some possibilities:
there is more cross-talk between the companies than I would guess (through hiring former employees, reading patents, reverse engineering, etc.)
Outside factors matter a lot: e.g. the fab industry actually determines a lot of what they can do
Companies don’t work as hard as they can when they know that they’re slightly beating their competitors (and the converse)
Selection bias: I’m not comparing Intel to Qualcomm or to any competitors that went out of business and companies that do worse in performance would naturally transition to other niches like low-power. Nor am I considering markets where there was a clear dominator until their patents expired.
Basic research drives some improvements and is mostly accessible to both
Though none of these are particularly compelling individually, taken together they seem pretty plausible. Am I missing anything? I know basically nothing about this industry so I wouldn’t be surprised if there was a really good reason for this.
Companies don’t work as hard as they can when they know that they’re slightly beating their competitors (and the converse)
I’m afraid I didn’t keep information about the citation, but when I was reading up on chip fabs for my essay I ran into a long article claiming that there is a very strong profit motive for companies to stack themselves into an order from most expensive & cutting-edge to cheapest & most obsolete, and that the leading firm can generally produce better or cheaper but this ‘uses up’ R&D and they want to dribble it out as slowly as possible to extract maximal consumer surplus.
there is more cross-talk between the companies than I would guess (through hiring former employees, reading patents, reverse engineering, etc.)
There is lots of cross-talk. Note also that Intel and AMD buy tools from other companies, and so if Cymer is making the lasers that both use for patterning, then neither of them has a laser advantage.
I find it in general very hard to predict what kind of reception my posts will receive, judging by the karma points of each. As a policy I try not to post strategically (that is, rationality quotes, pandering to the Big Karmers, etc.), but only things I find relevant or interesting for this site, yet I have found no way to reliably gauge the outcome. It is particularly bewildering to me that comments that (I hope) are insightful get downvoted to the limit of oblivion or simply ignored, while trivial comments or requests for clarification are the most upvoted. Has someone constructed a model of how the consensus works here on LW? Just curious...
comments that (I hope) are insightful get downvoted to the limit of oblivion
Curious about specific examples.
or simply ignored
This can have many reasons. Posting too late, when people don’t read the article. Using difficult technical arguments, so people are not sure about their correctness, so they don’t upvote.
If you click on my name, the first two comments at −2 are the ones: I genuinely thought I was contributing to the discussion.
This can have many reasons. Posting too late, when people don’t read the article. Using difficult technical arguments, so people are not sure about their correctness, so they don’t upvote.
Yeah, this does not bother me much; I’m more puzzled by the “trivial comment → loads of karma” side: “How did you make those graphs” and “How do you say ‘open secret’ in English” attracted 5 karma points each. “Loads” here should be understood relative to the average number of points my posts receive.
Before, I modeled karma as a kind of power law: all else being equal, those who have more karma will receive more karma for their comments. So I guessed that the more you align with the modus cogitandi of the Big Karmers, the more karma you will receive. This doesn’t explain the situation above, though.
Upvoted because I was going to write the same thing, and upvoting the comment is what I usually do when I see that someone has already written what I was going to write.
+1 for explaining why. I’m not sure I agree with the behavior particularly, since it could give a lot of credit for something relatively obvious. I probably wouldn’t do it if the question had more than +5 already unless I was really glad.
Oh, I will give extra +1′s when the context made me think it would be hard for the person to ask the question they asked, e.g. because it challenges something they’d been assuming.
As a rule I don’t think it’s productive to worry about karma too much, and I’m going to assume you agree and that you’re asking “what am I missing, here” which is a perfectly useful question.
Before I get into your question, here’s an example that was at −2 when I encountered it, but that I see has now risen to having +5, so there’s definitely some fluidity to the outcome (you might be interested in the larger discussion on that page anyway).
So the two examples that you mention at −2 presently are 1 and 2.
Part of the problem in those examples seems to be an issue of language, but I don’t think that’s all of it. For example, you offer to clarify that when you say “natural inclination” you mean an “innate impulse [that] is strongly present almost universally in humans” and give examples of things humans seek regularly (“eating, company, sex”). From my interpretation of the other posts, when they say “natural inclination” they mean “behavior that would be observed in a group of humans (of at least modest size) unless laws or circumstances specifically prevent it”. I suspect that the downvotes could be because your meaning was sufficiently unexpected that even when you wrote to clarify what it was, they couldn’t believe that that was what you meant. And, on balance, no, that doesn’t seem right to me since you were making an honest effort to clarify terms.
For what it’s worth, here’s why I’d object to your choice of terms, and this could explain some of the downvotes, since it’s obviously much less effort to just downvote than explain. I’d object because your definition inserts an implied “and the situation is normal” into the definition. For example, in normal situations a person would rather have an ice cream than kill someone. But if the situation is that you’re holding a knife and the man in front of you has just raped your sister and boasts about doing it again soon, maybe the situation is different enough that the typical innate impulse is different. Since what’s usually of interest is behavior over a long period of time, the dependency on the situation is problematic.
As for the second comment, I don’t understand it. Maybe I’m missing context. You seem to set up an unreasonable partition of the possibilities into 3 things.
Anyway, sometimes the negative votes can tell us what we’re doing wrong, sometimes they seem to just be a consequence of saying something that’s not mainstream for the site, but I don’t want to let myself get trapped into dismissing them all that way, so I usually take a minute to think about it when it happens.
Incidentally, I think it would be a big mistake to actively try to get maximum +karma on your comments. On the benign side you’d start trying hard to be the first poster on major articles. On the more negative side you’d have the incentive to approve of the prevailing argument with clever words. To exaggerate: “Be proud that you don’t have too much that was merely popular.” That said, some of the highly voted articles, at least, clearly deserve it.
I’d object because your definition inserts an implied “and the situation is normal” into the definition.
There are possible privileged situations, however. If you are in the environment of evolutionary adaptedness, living with your tribe out on the African savannah, how many days per year are you going to have an “inclination” to kill another human, vs. how many days are you going to have an “inclination” to eat, have sex and socialize? I’m guessing the difference is something like 1 vs. 360, unless tribal conflicts were much more common in that environment than I expect, and people desired to kill during those conflicts more than I expect (furthermore I would expect people to see it as an unfortunate but necessary action, which doesn’t jibe with my sense of the definition of “inclination”, but that’s not critical to the point). Clearly putting them on the same level carves up human behavior in a particular way which is not obvious just from the term “natural inclination.”
That all seems fair to me. To be honest I haven’t read enough of the context to know how relevant these distinctions are to it, and I agree the term seems problematic which is all the more reason that trying to nail it down is actually useful behavior, hence MrMind’s concern, I guess.
One reason is people vote to signal simple agreement.
Not saying it would work, but there could be “warm fuzzy votes” that don’t contribute to karma at all, or contribute much less, and are shown separately. Comments could be arranged by those too if need be. It would be an interesting experiment to see how much people agree with posts that have no other value.
Statements that are short and that are non-controversially in line with the position that most readers would approve of and flow with the context well and get a lot of “traffic” are the most likely to have skyrocketing +1′s.
If it has a useful insight or a link to an important resource this also helps, but only if it’s lucid enough in its explanation.
I am interested in reading further on objective vs subjective Bayesianism, and possibly other models of probability. I am particularly interested in something similar to option 4 in What Are Probabilities, Anyway. Any recommendations on what I should read?
I recently memorized an 8-word passphrase generated by Diceware.
Given recent advances in password cracking, it may be a good time to start updating your accounts around the net with strong, prescriptively-generated passphrases.
Added: 8-word passphrases are overkill for most applications. 4-word passphrases are fairly secure under most circumstances, and the circumstances in which they are not may not be helped by longer passphrases. The important thing is avoiding password reuse and predictable generation mechanisms.
I find it much easier to use random-character passwords. Memorize a few, then cycle them. You’ll pretty much never have to update them. If you can’t memorize them all, use software for that.
You’re right, removed it. I’m not sure I understand why people prefer using passphrases though. Isn’t it incredibly annoying to type them over and over again?
Another advantage is that, although they’re harder to type because they’re longer, they’re easier to type because they don’t have a bunch of punctuation and uppercase letters, which are harder to type on some smartphones (and slower to type on a regular keyboard). And while I’m at it, one more minor advantage (not relevant for people making up their own passwords) is that the average person does not know punctuation characters very well, e.g., does not know the difference between a slash and a backslash.
They may be easier to type the first few times, but after your “muscle memory” gets it even the trickiest line noise is a breeze.
That smartphone thing is a good point, though. My phone is my greatest security risk because of this problem. Probably should ditch the special characters.
Yes, no one should use line noise passwords because they are hard to type. If you want 100 bits in your password, you should not use 16 characters of line noise. But maybe you should use 22 lower case letters.
The xkcd cartoon is correct that the passwords people do use are much less secure than they look, but that is not relevant to this comparison. And lparrish’s links say that low entropy pass phrases are insecure.
But why do you want 100 bit passwords? The very xkcd cartoon you cite says that 44 bits is plenty. And even that is overkill for most purposes. Another xkcd says “The real modern danger is password reuse.” Without indicating when you should use strong passwords, I think this whole thread is just fear-mongering.
According to the Diceware FAQ, large organizations might be able to crack passphrases 7 words or less in 2030. Of course that’s different from passwords (where you have salted hashes and usually a limit on the number of tries), but I think when it comes to establishing habits / placing go-stones against large organizations deciding to invest in snooping to begin with, it is worthwhile. Also, eight words isn’t that much harder than four words (two sets of four).
One specific use I have in mind where this level of security is relevant is bitcoin brainwallets for prospective cryonics patients. If there’s only one way to gain access to a fortune, and it involves accessing the memories of a physical brain, that increases the chances that friendly parties would eventually be able to reanimate a cryonics patient. (Of course, it also means more effort needs to go into making sure physical brains of cryonics patients remain in friendly hands, since unfriendlies could scan for passphrases and discard the rest.)
What I meant is that those properties are specific to the secret part of login information used for online services, as distinct from secret information used to encrypt something directly.
How are salting and limits properties of passwords (but not passphrases)?
Sorry, what I meant is something more like ‘encryption phrases’ and ‘challenge words’. Either context could in principle refer to a word or a phrase, actually. However, when you are encrypting secret data that needs to stay that way for the long term, such as your private PGP key, it is more important to pick something that can’t conceivably be brute-forced, hence the term ‘passphrase’ usually applies to that. If someone steals your hard drive or something, your private key will only stay private for as long as the passphrase you picked is hard to guess, and they could use that to decrypt any incoming messages that used your public key.
When you are simply specifying how to gain access to an online service, it is a bit less crucial to prevent the possibility of brute forcing (so a shorter ‘password’ is sort of okay), but it is crucial for the site owner to use things like salt and collision-resistant hash functions to prevent preimage attacks, in the event that the password-hash list is stolen. (Plaintext passwords should never be stored, but unsalted hashes are also bad.)
If someone was using a randomly generated phrase of 4+ words or so for their ‘password’, salt would be more or less unnecessary due to the extremely high probability that it is unique to begin with. This makes for one less thing you have to trust the site owner for (but then, you do still have to trust that they aren’t storing plaintext, that the hash they use is collision-resistant, etc).
I’m not sure if it is possible to use salt with something like PGP. I imagine the random private key is itself sufficient to make the encrypted key as a whole unique. Even if the passphrase itself were not unique, it would not be obvious that it isn’t until after it is cracked. The important thing to make it uncrackable is that it be long and equiprobable with lots of other possibilities (which incidentally tends to make it unique). Since the problem isn’t uniqueness to begin with, but rather the importance of it never being cracked even with lots of time and brute force, salt doesn’t do a lot of good.
Bitcoin private keys are bound to the number of bits of entropy stored in the public address, which I believe is 122 or so. Since the presence of coins at a public address is public information, brute force attacks should be expected to track the cost of computing power / the value of coins stored. It seems to be pretty good security for the near term, but Douglas_Knight predicts that quantum computers will break bitcoin. (Presumably later versions will be more robust against quantum computers, or something other than bitcoin will take dominance.)
In any case, while I have been calling the phrase used for a bitcoin brainwallet a ‘passphrase’, and it is more in that category than not (being important to protect from brute force, not having a salt, and not being part of a login sequence), note that it is unlike a PGP passphrase in that it represents the seed for the key in its entirety rather than something used to encrypt the key.
Yes, there are some uses. I’m not convinced that you have any understanding of the links in your first comment and I am certain that it was a negative contribution to this site.
If you really are doing this for such long term plans, you should be concerned about quantum computers and double your key length. That’s why NSA doesn’t use 128 bits. Added: but in the particular application of bitcoin, quantum computers break it thoroughly.
I am certain that it was a negative contribution to this site.
Well, that’s harsh. My main intent with the links was to show that the system for picking the words must be unpredictable, and that password reuse is harmful. I can see now that 8-word passphrases are useless if the key is too short or there’s some other vulnerability, so that choice probably gives us little more than a false sense of security.
in the particular application of bitcoin, quantum computers break it thoroughly.
This is news to me. However, I had heard that there are only 122 bits due to the use of RIPEMD-160 as part of the address generation mechanism.
I am certain that it was a negative contribution to this site.
Rudeness doesn’t help people change their minds. Please elaborate what you mean by this. Even if he’s wrong, the following discussion could be a positive contribution.
There are 7776 words in Diceware’s dictionary. Would you rather memorize 8 short words, 22 letters (a-z, case insensitive), or 16 characters (a-z case sensitive, plus numerals and punctuation marks)?
If I really had to type them in myself every time I wanted to use them, 16 random characters absolutely. Repeatedly typing the 8 words compared to 16 characters probably takes more time in the long run than memorizing the random string. Memorizing random letters isn’t significantly easier in my experience than memorizing random characters.
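For comparison, the entropy arithmetic behind the three options (assuming the standard 7776-word Diceware list and roughly 95 printable ASCII characters for “line noise”); this is just the usual length times log2(alphabet size) calculation:

    import math

    def bits(alphabet_size, length):
        # Entropy in bits of a uniformly random string over the given alphabet
        return length * math.log2(alphabet_size)

    print(bits(7776, 8))   # 8 Diceware words:        ~103.4 bits
    print(bits(7776, 4))   # 4 Diceware words:        ~51.7 bits
    print(bits(26, 22))    # 22 lowercase letters:    ~103.4 bits
    print(bits(95, 16))    # 16 printable characters: ~105.1 bits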
I find myself oversensitive to negative feedback and underresponsive to positive feedback.* Does anyone have any advice/experience on training myself to overcome that?
*This seems to be a general issue in people with depression/anxiety; I think it’s something to do with how dopamine and serotonin mediate the reward system, but I’m not an expert on the subject. Curiously sociopaths have the opposite issue, underresponding to negative feedback.
Spend more cognitive resources on dealing with positive feedback.
When someone says that you have a nice shirt, think about why they said it. Probably they wanted to make you feel good. What does that mean? They care about making you feel good. You matter to them.
Gratitude journaling is a tool with a good evidence base. At the end of every day, write down all good feedback that you got. It doesn’t matter if it was trivial. Just write stuff down.
Meditation is also a great tool.
Curiously sociopaths have the opposite issue, underresponding to negative feedback.
I wouldn’t be sure about that claim. I think sociopaths rather have different criteria of what constitutes negative feedback.
I think physical pain would have the same effect on a sociopath as on a regular person.
The Feeling Good Handbook has good evidence as a treatment for depression and could help you to identify and address your automatic thoughts caused by negative feedback.
I’d like to highly recommend Computational Complexity by Christos H. Papadimitriou. Slightly dated in a fast changing field, but really high quality explanations. Takes a bit more of a logic-oriented approach than Hopcroft and Ullman in Introduction to Automata Theory, Languages, and Computation. I think this topic is extremely relevant to decision theory for bounded agents.
Those who have been reading LessWrong in the last couple of weeks will have little difficulty recognizing the poster of the following. I’m posting this here, shorn of identities and content, as there is a broader point to make about Dark Arts.
These are, at the time of writing, his two most recent comments. I will focus on the evidential markers, and have omitted everything else. I had to skip entirely over only a single sentence of the original, and that sentence was the hypothetical answer to a rhetorical question.
That’s very interesting. At what point can one … I don’t know the actual reasons … I figure … assume … I am aware of reasons that were given … though I don’t know the relationship of those reasons to why it was actually … Survey results suggest … A reasonable person might think … Such a person would also want … in the absence of knowing the actual reasons … I also don’t know …
Someone replied to that, and his reply was:
You raise interesting points. One could hypothesize … It seems an unlikely interpretation … does weigh heavily … I think … probably … Who read it? … probably … intriguing … I speculate … I guess that’s fine, but maybe … more than is ideal? It is of course just speculation. I’m interested in alternative hypotheses.
In every sentence, he is careful to say nothing, while appearing to say everything. His other postings are not so dense with these thin pipings of doubt, but they are a constant part of his voice.
Most of us have read or watched Tolkien. Some have read C.S. Lewis. We know this character, and we can recognise his voice anywhere. Lewis called him Professor Weston; Tolkien called him Grima Wormtongue.
Those who have been reading LessWrong in the last couple of weeks will have little difficulty recognizing the poster of the following.
I’m having difficulty recognizing the poster of the following, and searching individual phrases is only turning up this comment. While I approve of making broad points about Dark Arts, I’m worried that you’re doing so with a parable rather than an anecdote, which is a practice I disapprove of.
I, thankfully, missed that the first time around. Worry resolved. (Also, score one for the deletion / karma system, that that didn’t show up in Google searches.)
I agree that being slippery and vague is usually bad, and one way to employ Dark Arts.
However, avoiding qualifiers of uncertainty and not softening one’s statements at all exposes oneself to other kinds of dark arts. Even here, it’s not reasonable to expect conversants to be mercifully impartial about everything. Someone who expects strong opposition would soften their language more than someone whose statements are noncontroversial.
There’s slippery, and there’s vague. The one that I have not named is certainly being slippery, yet is not at all vague. It is quite clear what he is insinuating, and on close inspection, clear that he is not actually saying it.
However, avoiding qualifiers of uncertainty and not softening one’s statements at all exposes oneself to other kinds of dark arts.
Qualifiers of uncertainty should be employed to the degree that one is actually uncertain, and vagueness to the degree that one’s ideas are vague. In diplomacy it has been remarked that what looks like a vague statement may be a precise statement of a deliberately vague idea.
If your concerns are valid, then hiding the identity of the accused doesn’t help those who aren’t aware of whom you’re talking about. We’re all grown-ups here; we can handle it.
I think the pattern is also important per se. You can meet the pattern in the future, in another place.
It’s a pattern of how to appear reasonable, cast doubt on everything, and yet never say anything tangible that could be used against you. It’s a way to suggest that other people are wrong somehow, without accusing them directly, so they can’t even defend themselves. It is not even clear if the person doing this has some specific mission, or if breeding uncertainty and suspicion is their sole mission.
And the worst thing is, it works. When it happens, expect such a person to be upvoted, and people who point at them (such as Richard), downvoted.
As Viliam_Bur says, it is the general pattern that is my subject here, not to heap further opprobrium on the one who posted what I excerpted. Goodness knows I’ve been telling him to his virtual face enough of what I think of him already.
I have a hard time terminating certain subroutines in my brain. This most regularly happens when I am thinking about a strategy game or math that I am really interested in. I will continue thinking about whatever it is that is distracting me even when I try not to.
The most visible consequence of this is that it sometimes interferes with my sleep. I usually get to bed at a regular time, but if I get distracted it could take hours for me to get to sleep, even if I cut myself off from outside stimulus. It can also be a problem when I am in a class that I find less interesting than whatever math I was working on before the class.
I know there are drugs to help with sleep, but I am especially interested in a meta-thinking solution to this problem. Is there a way that I can force myself to clear my brain and get it to stop thinking about something for a while?
One idea I had is to give my brain another distracting activity that causes it to think, but has no way to actively stay in my head after the activity is finished. For example, perhaps I could solve a Sudoku or similar logic puzzle? I have not tried this yet, but I will next time I am in this situation.
Any other ideas? Is this a problem many people face?
I use certain videogames for something similar. I’ve collected a bunch of (Nintendo DS, generally) games that I can play for five minutes or so to pretty much reset my mind. Mostly it’s something I use for emotions, but the basic idea is to focus on something that takes up all of that kind of attention—that fully focuses that part of my brain which gets stuck on things.
Key to this was finding games that took all my attention while playing, but had an easy stopping point after five minutes or so of play—Game Center CX / Retro Game Challenge is my go-to, with arcade style gameplay where a win or loss comes up fairly quick.
StepMania is great for this (needs specialized hardware). It needs the mind and the body. When playing on a challenging level, I must pay full attention to the game—if my mind starts focusing on any idea, I lose immediately.
Intensive exercise—I remember P.J. Eby saying he’d use intensive exercise (in his case I think it was running across his house) as a “reset button” for the mind. It’s pretty cheap to try! (I have occasionally done that—pushups, usually—though it’s more often to get rid of angry annoyance than distractions.)
Physical pain will do it. Exercise is one option, but for me it always seems to be the bad “I am destroying my joints” kind of pain so I stop before it hurts enough to reset my thought patterns. Holding a mug of tea that’s almost but not quite hot enough to burn, and concentrating on that feeling to the exclusion of everything else, seems to work decently. A properly forceful backrub is better, though it requires a partner. And if your partner is a sadist then you begin to have many excellent options.
Addressing the sleep half: if meditation or sleep visualization exercises are hard for you, try coloring something really intricate and symmetrical. Like these. The idea is to keep your brain engaged enough to not think about the intrusive thing you were thinking about before, but calm enough to move towards sleep.
I don’t know if a citation would help—alcohol’s effect on sleep (and other things) is fairly personal. If you don’t already know, you’ll need to experiment and find out how it works for you.
In any case, alcohol is just the easiest of the hit-the-brain-below-the-cortex options. There are other alternatives, too, e.g. sex or stress.
I’d love to hear some first-hand accounts. It sounds like all the things I enjoyed about going to church when I was a Christian, without the Christianity part.
If you enjoyed going to church as a Christian, and considered it enough to make this post, then you should probably just go. There is not much penalty for trying.
I go to a UU church, which looks kind of similar. (They are not all atheist, but they are all different things and agree to disagree about theology.) I don’t really enjoy the singing that much, at least not the hymns, and I still enjoy the experience as an atheist. Just don’t expect to get the same level of intelligence or rationality you get from here though. If you are looking for good philosophical discussion, that probably isn’t the place to get it.
Overview of systemic errors in science—wishful thinking, lack of replication, inept use of statistics, sloppy peer review. Probably not much new to most readers here, but it’s nice to have it all in one place. The article doesn’t address fraud very much because it may have a small effect compared to unintentionally getting things wrong.
Account of a retraction by an experiment’s author. Doing the decent thing when Murphy attacks. Most painful sentence: “First, we found that one of the bacterial strains we had relied on for key experiments was mislabeled.”
Stock market investment would seem like a good way to test predictive skills. Have there been any attempts to apply LW-style rationality techniques to it?
Stock market investment would seem like a good way to test predictive skills...
I disagree and hope that more people would update regarding this belief. There is no alpha (risk adjusted excess returns), at least not for you. Here is why:
For all intents and purposes, stock markets are efficient; even if you don’t agree, you would still have to answer the question “how much inefficiency is there that will allow you to extract or arbitrage gains?” Your “edge” is going to be very, very small, if you even have one.
Assuming you have identified measurable inefficiencies, your trading costs will negate it.
The biggest players have access to better information, both insider and public, at faster speeds than you could ever attain and they already participate in ‘statistical arbitrage’ on a huge scale. This all makes the stock market very efficient, and very difficult for you, the individual investor to game a meaningful edge.
The assumption that one could test for significantly better predictive skills in the stock market would imply that risk-free arbitrage is common – you could just buy one stock and sell an index fund or vice versa, then apply this with the law of large numbers and voilà, you are now a millionaire; but alas, this does not commonly happen.
I happen to disagree. I don’t think this statement is true.
For all intents and purposes, stock markets are efficient
First, there are many more financial markets than the stock market. Second, how do you know that stock markets are efficient?
your trading costs will negate it
That seems to be a bald assertion with no evidence to back it up, especially given that we haven’t specified what kind of trading we are talking about.
The biggest players
The biggest players have their own set of incentives and limitations, they are not necessarily the best at what they do, and, notably, they are not interested in trades/strategies where the payoffs are not measured in many millions of dollars.
The assumption that one could test for significantly better predictive skills in the stock market would imply that risk-free arbitrage is common
I don’t see how that implies it. Riskless arbitrage, in any case, does not require any predictive skills given that it’s arbitrage and riskless. You test for predictive skills in the market by the ability to consistently produce alpha (properly defined and measured).
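A hedged sketch of what “alpha, properly defined and measured” usually means in practice: regress the strategy’s excess returns on the market’s excess returns and look at the intercept. The return series below are placeholders, not real data, and a real test would also need a significance check over many periods.

    import numpy as np

    def alpha_beta(strategy_returns, market_returns, risk_free=0.0):
        # OLS of strategy excess returns on market excess returns:
        # slope = beta (market exposure), intercept = alpha per period.
        y = np.asarray(strategy_returns) - risk_free
        x = np.asarray(market_returns) - risk_free
        beta, alpha = np.polyfit(x, y, 1)
        return alpha, beta

    # toy monthly returns, invented for illustration
    a, b = alpha_beta([0.02, -0.01, 0.03, 0.01], [0.015, -0.02, 0.025, 0.005])
    print(a, b)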
Upvoted because your reservations are probably echoed by many.
I happen to disagree. I don’t think this statement is true.
I’d like to change your mind specifically when it comes to “playing the stock market” for excess returns. My full statement is “There is no alpha (risk adjusted excess returns), at least not for you”. This reflects my belief that while alpha is certainly measurable and some entities may achieve long term alpha, for most people this will not happen and will be a waste of time and money.
First, there are many more financial markets than the stock market. Second, how do you know that stock markets are efficient?
First, the OP mentions the stock market; I’m not particularly picking on it. Second, for all intents and purposes for the individual, it is. Think about it this way: instead of asking whether or not the stock market is efficient, as if it were binary, let’s just ask how efficient it is. In the set of all markets, is the stock market among the most efficient markets that exist? I see no reason why it wouldn’t be. Have you ever played poker with 9 of the best players in the world? Chances are you haven’t, because they aren’t likely to be part of your local game, but the stock market is easy to enter and anyone may participate. While you sit there analyzing your latest buy-low-and-sell-high strategy, you are playing against top-tier mathematicians and computer engineers working synergistically with each other, with the backing of institutions. A lone but very smart and rational thinking programmer isn’t likely to win. Why would you choose to make that the playground for you to test your prediction skills? There are better places, like PredictionBook.
That seems to be a bald assertion with no evidence to back it up, especially given that we haven’t specified what kind of trading we are talking about.
Even dirt-cheap discount brokers charge about $5 a trade, but if you were something of a professional then you could join a prop firm and get it even cheaper, maybe $0.005 per share. But now you have the problem of maintaining a volume of trades in order to keep that rate. If you are a buy-and-holder you would still need to diversify and rebalance your portfolio with further transactions: (1) to show that you statistically did better than the market rather than just benefiting from variance, and (2) to limit individual stock risk. If you have a strategy of anything other than buy-and-hold, you will incur more trading costs.
The biggest players have their own set of incentives and limitations
Any incentives and limitations that big players have are more adverse for the individual. Strategies that are ignored by the truly big players are picked up by the countless mutual fund managers who year after year try to beat the market, yet the majority don’t. What makes an individual think they could do better?
I don’t see how that implies it. Riskless arbitrage, in any case, does not require any predictive skills given that it’s arbitrage and riskless.
I should rephrase: when I say arbitrage I mean statistical arbitrage. But strong stat arb might as well be just as good as riskless if you truly have an edge. Assume you have a measured alpha of a significant degree and probability. One would essentially be orchestrating a “risk-free” arbitrage by simply applying your alpha-producing strategy and simultaneously shorting S&P ETFs to create a stat arbitrage. But that doesn’t happen commonly, because free lunches are quick and leave none for you. Strategies by nature are ephemeral; they last until rational agents have exploited them and there is nothing left. For example, there used to be a strategy where monitoring the monthly reported cash inflows to mutual funds could predict upward movement in equity markets. The idea is that with lots of cash, fund managers start to buy. This was exploited until the strategy no longer produced a measurable edge. Unless you have reason to think that you will discover a neat unexploited strategy, you shouldn’t play the stock market; just buy ETFs.
I have personal experience in this industry, and I think I only know one person who has been able to pull it off and is not lying about it. His experience is consistent with my belief that the stock market is getting more efficient. His earnings were greatest during the earlier part of his career and have been steadily declining since.
A great deal of things will not happen “for most people”. Getting academic tenure, for example. Or having a net wealth of $1m. Or having travelled to the Galapagos Islands. Etc., etc.
First, the OP mentions the stock market
Yes, but that’s the basic uninformed default choice when people talk about financial markets. It’s like “What do you think about hamburgers? Oh, I think McDonalds is really yucky”. Um, there’s more than that.
If you look at what’s available for an individual American investor with, say, $5-10K to invest, she can invest in stocks or bonds or commodities (agricultural or metals or oil or precious metals or...) or currencies or spreads or derivatives—and if you start looking at getting exposure through ETFs, you can invest into pretty much anything.
The focus on the stock market is pretty much a remnant from days long past.
A lone but very smart and rational thinking programmer isn’t likely to win.
I don’t know. It depends on how smart and skilled he is.
He might also join forces with some smart friends. Become, y’know, one of those teams of “top tier mathematicians and computer engineers” who eat the lunch of plain-vanilla investors. But wait, if the markets are truly efficient, what are these top-tier people doing in there anyway? :-/
Why would you choose to make that the playground for you to test your prediction skills?
Because the outcomes are direct and unambiguous. Because some people like challenges. Because it’s a way to become rich quickly.
Strategies that are ignored by the truly big players are picked up by the countless mutual fund managers
Mutual fund managers are very restricted in what they can do. Besides outright constraints (for example, they can’t go short) they are slaves to their benchmarks.
But strong stat arb might as well be just as good as riskless if you truly have an edge.
Oh, no. “Riskless” and “I think it’s as good as riskless” are very, very different things.
a “risk-free” arbitrage by simply applying your alpha-producing strategy and simultaneously shorting S&P ETFs to create a stat arbitrage.
That doesn’t get you anywhere near “riskless”. That just makes you hedged with respect to the market, hopefully beta-hedged and not just dollar-hedged.
Strategies by nature are ephemeral
True, but people show a very consistent ability to come up with new ones when old ones die.
In any case, no one is arguing that you can find a trade or a strategy and then milk it forever. You only need to find a strategy that will work for long enough for you to make serious money off it. Rinse and repeat, if you can. If you can’t, you still have the money from a successful run.
Disclaimer: I day trade, so this might be influenced by defensiveness.
The thinking patterns I’ve learned on LW haven’t really helped me to discover any new edge over the markets. Investment, or speculation, feels more like Go or blackjack as an activity. Being a rationalist doesn’t directly help me notice new trades or pick up on patterns that the analysts I read haven’t already seen.
On the other hand, the most difficult thing about dealing with financial matters is remaining calm and taking the appropriate action. LW techniques have helped me with this a lot. I believe that reading LW has made me a more consistent trader.
I’m not sure that the above was written clearly, let me try again. My proficiency as a speculator goes up and down based on my state of mind. Reading LW hasn’t made the ups higher, but its made me less likely to drop to a valley.
On a tangent, while I’m thinking about it.
Has anyone else just been baldly disbelieved if they mention that they made money in a nontraditional way? The only other time I’ve seen it happen is making money at Vegas. I’ve met people who seem to have ‘The House Always Wins’, or ‘You Can’t Beat The Market’ or ‘Sweepstakes/Lotteries Are A Waste Of Money’ as an article of faith to the point that, presented with a counterexample, they deny reality.
At my current level of investment, I probably have received substantial benefit from other skills that seem Less Wrong related that are not predictive, like not panicking, understanding risk tolerance and better understanding the math behind why diversification works.
But I suppose those aren’t particularly unique to Less Wrong even though I feel like reading the site does help me apply some of those lessons.
I would guess that to the extent that some hedge fund uses LW-style rationality techniques to train the predictive skills of their staff, they wouldn’t be public about effective techniques.
A while back I posted a comment on the open thread about the feasibility of permanent weight loss. (Basically: is it a realistic goal?) I didn’t get a response, so I’m linking it here to try again. Please respond here instead of there. Note: most likely some of my links to studies in that comment are no longer valid, but at least the citations are there if you want to look those up.
I think the substance is that there are plenty of people who change their weight permanently. On the other hand the evidence for particular interventions isn’t that good.
None of those address permanent weight loss per se. They all address the more specific problem of permanent weight loss through dietary modification.
A successful approach to weight loss would incorporate a change in diet and exercise habits along with an investigation of the ‘root cause’ of the excess weight i.e. the psychological factor that causes excessive eating (Depression? Stress? Pure habit? etc.)
I also question your implicit premise that “If it ain’t permanent it ain’t worth doing”. That sounds like a rationalization to me. For a woman who’s 25 and looking to maximize her chance of reproductive success (finding a mate), ‘just 5 years’ of weight loss would be extraordinarily superior to no weight loss. Permanent weight loss would be only marginally better.
(Barring you being a metabolic mutant. If you have tried counting calories and it didn’t work for you, then please ignore this post; weight loss is a lot more complicated than how I am about to describe it here.)
Permanent weight loss is possible and feasible; however it will probably require constant effort to maintain.
For example, count your daily caloric intake on myfitnesspal.com (my username is shokke, if you wish to use the social aspect of it too). Eat at a caloric deficit (TDEE minus ~500) until desired weight is attained, then continue counting calories and eat at maintenance (TDEE) indefinitely. If you stop counting calories you will very likely regain that weight.
This requires you to count calories for the rest of your life, or at least until you no longer care about your weight. Or we develop a better method of weight control.
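The arithmetic being described, as a rough sketch; it relies on the common but only approximate rule of thumb that ~3500 kcal of deficit corresponds to about one pound of fat, and it ignores metabolic adaptation:

    def weeks_to_goal(current_lbs, goal_lbs, daily_deficit=500, kcal_per_lb=3500):
        # Weeks of a constant daily deficit needed to lose the difference
        return (current_lbs - goal_lbs) * kcal_per_lb / (daily_deficit * 7)

    print(weeks_to_goal(200, 180))  # ~20 weeks at a 500 kcal/day deficit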
I believe there is a named cognitive bias for this concept but a preliminary search hasn’t turned anything up:
The tendency to use measures or proxies that are easily available rather than ones that most accurately measure the outcome one actually cares about.
I have this fragment of a memory of reading about some arcane set of laws or customs to do with property and land inheritance. It prevented landowners from selling their land, or splitting it up, for some reason. This had the effect of inhibiting agricultural development sometime in the feudal era or perhaps slightly after. Anyone know what I’m talking about?
(I’m aware of the opposite problem, that of estates being split up among all children (instead of primogeniture) which caused agricultural balkanization and prevented economies of scale.)
This sounds like the system that France had before the first French Revolution. That is, up until 1789; I’m not sure when it started. I wouldn’t be surprised if a similar system existed in other European countries at around the same time, but I’m not sure which. (I’ve only been reading history for a couple years, and most of it has been research for fiction I wanted to write, so my knowledge is pretty specifically focused.)
Under this system, the way property is inherited depends on the type of property. Noble propres is dealt with in the way you describe—it can’t be sold or given away, and when the owner dies, it has to be given to heirs, and it can’t be split among them very much. My notes say the amount that goes to the main heir is the piece of land that includes the main family residence plus 1/2 to 4/5 of everything else, which I think means there’s a legal minimum within that range that varies by province, but I’m not completely sure. Propres* includes lands and rights over land (land ownership is kind of weird at this time—you can own the tithe on a piece of land but not the land itself, for example) that one has inherited. Noble propres is propres that belongs to a nobleperson or is considered a noble fief.
Commoner inheritance varies a lot by region. Sometimes it’s pretty similar to noble inheritance (all or most of the propres must go to the first living heir), sometimes the family can choose one heir to inherit more than the others, sometimes an equal split is required. There’s no law against selling or giving away common (non-noble) propres, but some of the provinces that require an equal split have laws to prevent parents from using gifts during their lifetime to give one child more than the others.
I’m not sure what effect noble property law had on agricultural development. I know France’s agriculture was lagging far behind England’s during the 18th century, but I never saw it attributed to this, at least not directly. (The reasons I can remember seeing are tenant farming with short tenures, and farmers having insufficient capital to buy newer tools.) The commoner inheritance system did fragment the land holdings, as you said. The main problems I remember hearing about with that were farms becoming too small to support a person (so the farmers would also work part-time as tenant farmers or day laborers, or abandon the farm and leave), and limiting social mobility by requiring wealthy commoners to divide their wealth with each new generation.
Most of this is coming from notes I took on the book Marriage and the Family in 18th Century France by Traer. I’m not sure how much you wanted to know, so ask if there’s anything you’re curious about that I didn’t include, and I’ll see if I can dig it up. If you want to research this, my impression is that finding good history books about a specific place is much easier if you can read the language spoken there, so it might be worth checking what the property laws were in places that speak the languages you know. If you need sources in English, having access to a university library helps a lot. When looking for information on France during this time period, “Ancien Regime” and “early modern” are useful keywords.
Lease-like arrangements that are practically selling are allowed, though. The only one I can think of at the moment is called alienation—basically you sell it, except the new “owner” (or their heirs, or whoever they sell the land to) pays your family rent for the land, forever. Something similar can be done with money, as a sort of loan that is never paid off. (These are called rentes foncières and rentes constituées, respectively—in case you ever want to look up more information.) They’re technically movable property, but they’re legally counted as propres, and treated the same way as noble land.
Yeah, at least in France, land can’t make you noble, even if it’s a whole noble fief with a title attached. (Then you’re just a rich commoner who owns a title but can’t use it.) You could become noble by holding certain jobs for a long enough time (usually three generations), though. And people did buy those. (Not through bribes—the royal government sold certain official posts to raise revenues, so it was legal.)
There was also a sort of real estate boom after the revolutionary government passed some laws to make it easier for commoners to buy land, which was sort of like what you describe—all the farmers who could afford it would buy all the land they could at higher values than it was worth, because it made them feel like they were rich landowners.
Adam Smith reported that this was how the law worked in the Spanish territories in the Americas, in order to ensure the continued existence of a wealthy and powerful landed aristocracy and so maintain social stability. He theorized that this policy was the reason that the Spanish territories were so much poorer than the English territories, even though the former had extensive gold deposits and the latter did not.
Yeah, I did some more research; apparently they were called “fee tails” or “entails”. They were designed to keep large estates “in the family”, even if that ended up being a burden to future generations.
As I want to fix my sleep (cycle), I am looking for a proper full-spectrum light bulb to screw into my desk lamp. But when I shop for “full spectrum” lights, it turns out they only have three peaks and do not come anywhere near a black-body spectrum. Is there something like what I’m looking for that costs less than a small fortune, for a student? E27 socket, available in the EU.
I can ask more generally: what is the lighting situation at your desk and at your home? I aim for lighting very low in blue in the evening and as close to full daylight as possible during work. For that I have f.lux on my computers and want to put a full-spectrum light in my desk lamp. I do not know what I should do for my room; I am thinking of having a usual ‘warm’ lamp for the whole room and quite an orange light for reading late at night.
What evidence do you have that full spectrum light is beneficial? It seems you already know that it’s the blue spectrum that primarily controls the circadian rhythm.
No particular evidence, but the closer light is to natural sunlight, the better it looks. I could also argue that the closer I come to ‘natural’ conditions, that is, sun-like light, the better I should fare.
Orange goggles/glasses for late at night aren’t that bad and are very cheap. I don’t have a good solution for the full spectrum issue. MIRI is getting by with the regular full spectrum bulbs AFAIK (is there a followup on the very bright lights experiment?)
I use a bedside lamp with a full-size Edison screw (I think E27 is full size). Daylight-spectrum bulbs are readily available in all manner of fittings on eBay. Last lot we got were 6x30W (equivalent 150W) with UK bayonet fittings for £5 each (though I don’t use something that bright for my bedside lamp).
The essence of EA is that people are equal, regardless of location. In other words, you’d rather give money to poor people in far-away countries than to people in your own country if it’s more effective, even though the latter feel intuitively closer to you. People care more about their own countries’ citizens even though they may not even know them. Often your own country’s citizens are similar to you culturally and in other ways, more than people in far-away countries, and you might feel a certain bond with your own country’s citizens. There are obviously examples of this kind of thinking concretely affecting people’s actions. In the Congo Crisis (1960–1966), when the rebels started taking white hostages, there was an almost immediate military operation conducted by the United States and Belgium, and the American and European civilians of the area were quickly evacuated. Otherwise this crisis was mostly ignored by western powers, and the UN operation was much more low-key than the rescue operation.
In Effective Altruism, should how much you intuitively care about other people be a factor in how much you allocate resources to them?
Can you take this kind of thinking to its logical conclusion: you shouldn’t allocate any money or resources to people that you feel are close to you, like your family or friends because you can more effectively minimize suffering by allocating those resources to far-away people?
Note, I’m not criticizing effective altruism or actually supporting this kind of thinking. I’m just playing a devil’s advocate.
A possible counterargument: one’s family and friends are essential to one’s mental well-being and you can be a better effective altruist if you support your friends and family.
Essentially, I could do things that help other people and me, or I could do things that only help other people but I don’t get anything (except for a good feeling) from it. The latter set contains much more options, and also more diverse options, so it is pretty likely that the efficient solution for maximizing global utility is there.
I am not saying this to argue that one should choose the latter. Rather my point is that people sometimes choose the former and pretend they chose the latter, to maximize signalling of their altruism.
“I donate money to ill people, and this is completely selfless because I am healthy and expect to remain healthy.” So, why don’t you donate to ill people in poor countries instead of your neighborhood? Those people could buy greater increase in health for the same cost. “Because I care about my neighbors more. They are… uhm… my tribe.” So you also support your tribe. That’s not completely selfless. “That’s a very extreme judgement. Supporting people in my tribe is still more altruistic than many other people do, so what’s your point?”
I guess my point is, if your goal is to support your tribe, just be honest about it. Take a part of your budget and think about the most efficient way of supporting your tribe. And then take another part of your budget and spend it on effective altruism. (The proportion of these two parts, that’s your choice.) You will be helping people selflessly and supporting your tribe, probably getting more points on each scale than you are getting now.
“But I also want a recognition of my tribe for my support. They will reward me socially for helping in-tribes, but will care less about me helping out-tribes.” Oh, well. That’s even less selfless. I am not judging you here, just suggesting to make another sub-budget for maximizing your prestige within the tribe and optimize for that goal separately.
“Because that’s too complicated. Too many budgets, too much optimization.” Yeah, you have a point.
Also, if it turns out that I have three sub-budgets as you describe here (X, Y, Z) and there exist three acts (Ax, Ay, Az) which are optimal for each budget, but there exists a fourth act B which is just-barely-suboptimal in all three, it may turn out that B is the optimal thing for me to do despite not being optimal for any of the sub-budgets. So optimizing each budget separately might not be the best plan.
Generally, you are right. But in effective altruism, the axis of “helping other people” is estimated to do a hundred times more good if you use a separate budget for it.
This may be suboptimal along the other axes, though. Taking the pledge and having your name on the list could help along the “signalling philanthropy” axis.
Expanding on this, isn’t there an aspect of purchasing fuzzies in the usual form of effective altruism? I know there’s been a lot of talk of vegetarianism and animal-welfare on LW, but there’s something in it that’s related to this issue.
At least some people believe it’s been pretty conclusively proven that mammals and some avians have a subjective experience and the ability to suffer, in the same way humans have. In this way humans, mammals, and those avian species are equal—they have roughly the same capacity to suffer. Also, with over 50 billion animals used to produce food and other commodities every year, one could argue that the scope of suffering in this sphere is greater than among humans.
So let’s assume that the animals used in the livestock industry have an equal ability to suffer compared to humans. Let’s assume that the scope of suffering is greater in the livestock industry than among humans. Let’s also assume that we can reduce this suffering more easily than the suffering of humans. I don’t think it’s a stretch to say that these three assumptions could actually be true, and this post analyzed these factors in more detail. From these assumptions, we should conclude not only that we should become vegetarians, as this post argues, but also that animal welfare should be our top priority. It is our moral imperative to allocate all the resources we dedicate to buying utilons to animal welfare, until the marginal utility there is lower than for human welfare.
Again, just playing devil’s advocate. Are there reasons to help humans other than the fact that they belong to our tribe more than animals do? The counterarguments raised in this post by RobbBB are very relevant, especially 3. and 4. Maybe animals don’t actually have the subjective experience of suffering, and what we think of as suffering is only damage-avoiding and damage-signaling behavior. Maybe sapience makes true suffering possible in humans, and that’s why animals can’t truly suffer on the same level as humans.
I had this horrible picture of a future where human-utilons-maximizing altruists distribute nets against mosquitoes as the most cost-efficient tool to reduce the human suffering, and the animal-utilons-maximizing altruists sabotage the net production as the most cost-efficient tool to reduce the mosquito suffering...
That’s a worthwhile concern, but I personally wouldn’t make the distinction between animal-utilons and human-utilons. I would just try to maximize utilons for conscious beings in general. Pigs, cows, chickens and other farm animals belong in that category; mosquitoes, insects and jellyfish don’t. That’s also why I think eating insects is on par with vegetarianism: you’re not really hurting any conscious beings.
Since we’re playing the devil’s advocate here: much more important than geographical and cultural proximity to me would be how many values I share with these people I’m helping, were I ever to come in even remote contact with them or their offspring.
Would you effective altruist people donate mosquito nets to baby-eating aliens if it cost-effectively relieved their suffering? If not, where do you draw the line in value divergence? Human?
So, what’s all this about a Positivist debacle I keep hearing about? Who were the positivists, what did we have in common with them, what was different, and how and why did they fail?
Positivism states that the only authentic knowledge is that which allows verification and assumes that the only valid knowledge is scientific.[2] Enlightenment thinkers such as Henri de Saint-Simon, Pierre-Simon Laplace and Auguste Comte believed the scientific method, the circular dependence of theory and observation, must replace metaphysics in the history of thought. Sociological positivism was reformulated by Émile Durkheim as a foundation to social research.[13]
Wilhelm Dilthey, in contrast, fought strenuously against the assumption that only explanations derived from science are valid.[9] Dilthey was in part influenced by the historicism of Leopold von Ranke.[9] He restated the argument, already found in Vico, that scientific explanations do not reach the inner nature of phenomena,[9] and that it is humanistic knowledge that gives us insight into thoughts, feelings and desires.[9]
I’m no expert on the history of epistemology, but this may answer some of your questions, at least as they relate to Eliezer’s particular take on our agenda.
We consider probabilities authentic knowledge. Since we are Bayesians and not Frequentists, those probabilities are sometimes about questions which cannot be scientifically tested. Science requires repeatable verification, and our probabilities don’t stand up to that test.
For several years now I’ve lived in loud apartments, where I can often hear conversations or music late into the night.
I often solve this problem by wearing earplugs. However, I don’t want to sleep with earplugs every night, and so I’ve made a number of attempts to adjust to the noise without earplugs, either going “cold-turkey” for as long as I can stand, or by progressively increasing my exposure to night-time noise.
Despite several years of attempts, I don’t think I’ve habituated at all. What gives?
Other information that might be relevant:
I adjust fine to noise during the day, and to other stimuli at night.
I have no mental illness.
“Information-less” noise is fine (for example, traffic or the hum of an appliance). Problem noises involve voices or music, or things like video games.
Since you are already fine with white noise, you should try using white noise to drown out the music or voices. A quick internet search for white noise led me to SimplyNoise, where you can stream white noise over the internet. If not, then try a phone app.
I don’t need such a thing for sleeping, but I find SimplyNoise gives a satisfactory sound having a much steeper fall-off with frequency than white noise (flat spectrum of energy vs. frequency) or pink noise (3dB fall-off per octave), both of which sound unpleasantly harsh to me. They also have a few soundscapes (thunderstorm, river, etc.). The app is not free, but cheap, and there are also pay-what-you-want mp3 download files.
Let’s assume society decides that eating meat from animals lacking self-awareness is ethical, and anything with self-awareness is not ethical to eat, and that we have a reliable test to tell the difference. Is it ethical to deliberately breed tasty animals to lack self-awareness, either before or after their species has attained self-awareness?
My initial reaction to the latter is ‘no, it’s not ethical, because you would necessarily be using force on self-aware entities as part of the breeding process’. The first part of the question seems to lean towards ‘yes’, but this response definitely sets off an ‘ugh’ field in my mind just attempting to consider the possible implications, so I’m not confident at all in my line of reasoning.
I think any question of the form “Assume X is ethical, is X’ also ethical?” is inherently malformed. If my ethics do not follow X, then the change in my ethics which causes me to include X may be very relevant to X’.
I don’t think anyone who is a vegetarian regardless of self-awareness would be able to answer the question you are asking.
I think the big question that implies this one is “Should we eat baby humans? Why?”
I believe the answer is “No, because there is no convenient place to draw the line between baby and adult, so we should put the line at the beginning, and because other people may have strong emotional attachment to the baby.”
I think the first part of my reason is eliminated by your “reliable test.” If the test is completely reliable, that is a very good place to draw the line.
The second part is not going away. It has been evolved into us for a very long time. However, it is not clear whether people will get the same attachment to non-human babies. I think that our attachment to non-humans is much lower, and that there is not a significant difference in that attachment before and after self-awareness.
However, the question asked assumes that our ethics distinguish between creatures with and without self awareness. If that distinction is caused by us having different levels of emotional attachment to the animal depending on its self awareness, then it would change my answer.
As for the first part, I would say that it’s fairly common for an individual and a society to not have perfectly identical values or ethical rules. Should I be saying ‘morals’ for the values of society instead?
I would hope that ethical vegetarians can at least give me the reasons for their boundaries. If they’re not eating meat because they don’t want animals to suffer, they should be able to define how they draw the line where the capacity to suffer begins.
You do bring up a good point—most psychologists would agree that babies go through a period before they become truly ‘self-aware’, and I have a great deal of difficulty conceiving of a human society that would advocate ‘fresh baby meat’ as ethical. Vat-grown human meat, I can see happening eventually. Would you say the weight there is more on the side of ‘This being will, given standard development, gain self-awareness’, or on the side of ‘Other self-aware beings are strongly attached to this being and would suffer emotionally if it died’? The second one seems to be more the way things currently function—farmers remind their kids not to name the farm animals because they might end up on their plates later. But I think the first one can be more consistently applied, particularly if you have non-human (particularly non-cute) intelligences.
You could put strict statistical definitions around it if you wanted, but the general idea is, ‘infants grow up to be self-aware adults’.
This may not always be true for exotic species. Plenty of species in nature, for example, reproduce by throwing out millions of eggs / spores / what have you, of which only a small fraction grow up to be adults. Ideally, any sort of rule you’d come up with should be universal, regardless of the form of intelligence.
At some point, some computer programs would have to be considered to be people and have a right to existence. But at what stage of development would that happen?
I’ve got a few questions about Newcomb’s Paradox. I don’t know if this has already been discussed somewhere on LW or beyond (granted, I haven’t looked as intensely as I probably should have) but here goes:
If I were approached by Omega and he offered me this deal and then flew away, I would be skeptical of his ability to predict my actions. Is the reason that these other five people two-boxed and got $1,000 due to Omega accurately predicting their actions? Or is there some other explanation… like Omega not being a supersmart being and he never puts $1 million in the second box? If I had some evidence that people actually have one-boxed and gotten the $1 million then I would put more weight on the idea that he actually has $1 million to spare, and more weight on the possibility that Omega is a good/perfect predictor.
If I attempt some sort of Bayesian update on this information (the five previous people two-boxed and got $1,000), these two explanations seem to explain the fact equally well. The probability that the previous five people would two-box and get $1,000 given that Omega is a perfect predictor seems observationally equivalent to the probability given that Omega never puts the $1 million in the second box.
Then again, if Omega actually knew my reasoning process, he might actually provide me with the evidence that would make me choose to one-box over two-box.
It also seems to me that if my subjective confidence in Omega’s abilities of prediction is over 51%, then it makes more sense to one-box than two-box… if my math/intuition about this is correct. Let’s say my confidence in Omega’s abilities of prediction is at 50%. If I two-box, there are two possible outcomes: I either get only $1,000 or I get $1,001,000. Both outcomes have a 50% chance of happening given my subjective prior, so my decision theory algorithm gives 50% × $1,000 + 50% × $1,001,000. This sums to a total utility/cash of $501,000.
If I one-box, there are also two possible outcomes: I either get $1,000,000 or I lose $1,000. Both outcomes, again, have a 50% chance of happening given my subjective probability about Omega’s powers of prediction, so my decision theory algorithm gives 50% × $1,000,000 + 50% × (-$1,000). This sums to $499,500 in total utility.
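To check that arithmetic, here is a minimal sketch (written in Haskell purely for illustration; the numbers and the framing are just the ones above, with p standing for my confidence that Omega predicts correctly, and the forgone $1,000 counted as a loss when one-boxing):

    -- Expected value of each choice, as a function of p, the probability
    -- that Omega predicts my action correctly.
    evOneBox, evTwoBox :: Double -> Double
    evOneBox p = p * 1000000 + (1 - p) * (-1000)
    evTwoBox p = p * 1000    + (1 - p) * 1001000

    main :: IO ()
    main = do
      print (evOneBox 0.5, evTwoBox 0.5)    -- (499500.0, 501000.0)
      -- Break-even: evOneBox p == evTwoBox p  gives  p = 1002000 / 2001000
      print (1002000 / 2001000 :: Double)   -- ~0.5007

So on this framing the crossover sits just above 50%, which is why anything over 51% would favour one-boxing.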
Does that seem correct, or is my math/utility off somewhere?
Lastly, has something like Newcomb’s Paradox been attempted in real life? Say with five actors and one unsuspecting mark?
I had a random-ish thought about programming languages, which I’d like comments on: It seems to me that every successful programming language has a data structure that it specialises in and does better than other languages. Exaggerating somewhat, every language “is” a data structure. My suggestions:
C is pointers
Lisp is lists (no, really?)
Ruby is closures
Python is dicts
Perl is regexps
Now this list is missing some languages, for lack of my familiarity with them, and also some structures. For example, is there a language which “is” strings? And on this model, what is Java?
Well, different languages are based on different ideas. Some languages explore the computational usefulness of a single data structure, like APL with arrays or Forth with stacks. Lisp is pretty big, but yes you could say it emphasizes lists. (If you’re looking for a language that emphasizes strings, try SNOBOL or maybe Tcl?) Other languages explore other ideas, like Haskell with purity, Prolog with unification, or Smalltalk with message passing. And there are general-purpose languages that don’t try to make any particular point about computation, like C, Java, JavaScript, Perl, Python, Ruby, PHP, etc.
Pointers in C aren’t data structures—they are a low-level tool for constructing data structures. Neither closures nor regexps are “data structures” either. And Perl was historically well known for relying on hashes, the same structure you assigned to Python as dicts.
Certainly each programming language has a “native” programming style that it usually does better than other languages—but that’s a different thing.
Java is classes—a huge set of standardized classes, so for most things you want to do, you choose one of those standard classes instead of deciding “which one of the hundred libraries made for this purpose should I use in this project?”.
At least this was the case until the set of standardized classes became so huge that it often contains two or three different ways to do the same thing, and for web development external libraries are used anyway. (So we have AWT, Swing and JavaFX; java.io and java.nio; but we are still waiting for the lambda functions.)
Different languages are good at different things. For some languages it happens to be a data structure:
Lisp is lists
Tcl is strings
APL is arrays
Forth is stacks
SQL is tables
Other languages are good at something specific which isn’t a data structure (Haskell, Prolog, Smalltalk etc.) And others are general languages that don’t try to make any particular point about computation (C, Java, JavaScript, Perl, Python, Ruby etc.)
I’m not sure R fits this metaphor—the closest I can get is “R is CRAN”, but the C?AN concept is not unique to R.
Hmm… maybe R is data.frames.
Java is prepare your anus for objects.
Interesting comment by Gregory Cochran on torture not being useless as is often claimed.
This seems an insightful and true statement. We seem to like “protecting” ought by making false claims about what is.
Possibly related to the halo or overjustification effects; arguments as soldiers seems especially applicable—admitting that torture may actually work is stabbing one’s other anti-torture arguments in the back.
I read somewhere that lying takes more cognitive effort than telling the truth. So it might follow that if someone is already under a lot of stress—being tortured—then they are more likely to tell the truth.
On the other hand, telling the truth can take more effort than just saying something. Very modest levels of stress or fatigue make it harder for me to remember where, when, and with whom something happened.
I agree that it is a PC thing to say now in the US liberal circles that torture doesn’t work. The original context was different, however: torture is not necessarily more effective than other interrogation techniques, and is often worse and less reliable, so, given its high ethical cost to the interrogator, it should not be a first-line interrogation technique. This eventually morphed into the (mostly liberal) meme “torture is always bad, regardless of the situation”. This is not very surprising, lots of delicate issues end up in a silly or simplistic Schelling point, like no-spanking, zero-tolerance of drugs, no physical contact between students in school, age restrictions on sex, drinking, etc.
Could you provide evidence for this claim?
Going by the links on Wikipedia. A quote:
test
This has interesting implications for consequentialism vs. deontology. Consequentialists, at least around here, like to accuse deontologists of jumping through elaborate hoops with their rules to get the consequences they want. However, it is just as common (probably more so) for consequentialists to jump through hoops with their utility function (and even their predictions) to be able to obey the deontological rules they secretly want.
Real humans are neither consequentialists nor deontologists, so pretending to be one of these results in arguments like that.
Certainly true—I believe a lot of claims about the healthiness of vegetarianism fall into that category.
Another problem is taking something that’s true in some cases, or even frequently, and claiming that it’s universal. In the case of torture, it’s one thing to claim that torture rarely produces good information, and another to claim that it never does.
Arguments as soldiers with regard to universities divesting from fossil fuels.
The point on torture being useful seems really obvious in hindsight. Before reading this I pretty much believed it was useless. I think that belief settled in my head in the mid-2000s, arriving straight from political debates. Apparently knowing history can be useful!
Overall his comment is interesting but I think the article has more important implications, someone should post it. So I did. (^_^)
I don’t see anything insightful about the statement. It’s rather trivial to point out that there were events where torture produced valuable information. Nobody denies that point. It rather sounds like he doesn’t understand the position against which he’s arguing.
It’s not like any other kind of intelligence. This ignores the psychological effects of the torture on the person doing the torturing. Interrogators feel power over a prisoner and get information from them. That makes them pay too much attention to that information compared to other information.
And this is different from someone who, say, spends a lot of effort turning an agent, or designing a spy satellite, how?
Beating someone else up triggers primal instincts. Designing a spy satellite or using its information doesn’t.
There’s motivated reasoning involved in assessing the information you get by doing immoral things as high-value.
Pretending that there are no relevant psychological effects from the torture on the person doing the torturing just indicates unfamiliarity with the arguments for the position that torture isn’t effective.
I would add that, as far as the description of the Battle of Midway in the comment goes, threatening people with execution isn’t something that would officially count as torture in the US. Prosecutors in Texas do it all the time to get people to agree to plea bargains. It’s disgusting, but not on the same level as putting electrodes on someone’s genitals. It also doesn’t have the same effects on the people doing the threatening as allowing them to inflict physical pain does.
If you threaten someone with death unless he gives you information, you also don’t have the same problem of false information that someone will give you just to make the pain stop immediately.
As far as the other example goes, the author of the comment doesn’t even know whether torture was used, and he seems to think that there are no psychological tricks you can play to get information in a short amount of time. Again, an indication of not having read much about how interrogation works.
Here on LessWrong we have AI players who get gatekeepers to let the AI go in two hours of text-based communication. As far as I understand, Eliezer managed that feat without professional-grade training in interrogation. If you accept that’s possible in two hours, do you really think that a professional can’t get useful information from a prisoner in a few hours without using torture?
From what I heard, most of said psychological tricks rely on the person you’re interrogating not knowing that you’re unwilling to torture them.
Not reliably. This worked on about half the people.
Depending on the prisoner. There are certainly many cases of prisoners who don’t talk. If the prisoners are say religious fanatics loyal to their cause, this is certainly very hard.
Getting half your prisoners to capitulate is still pretty damn good.
Being able to read body language very well is also a road to information. You can use Barnum statements to give the subject the impression that you have more knowledge than you really have; then they aren’t doing anything wrong if they tell you what you apparently already know.
In the case in the comment, the example was an American soldier, who probably doesn’t count as a religious fanatic. The person who wrote it suggested that the fast transfer of information is evidence of torture being involved.
It was further evidence for my claim that the person who wrote the supposedly insightful comment didn’t research this topic well.
My case wasn’t that there is certain evidence that torture doesn’t work, but that the person who wrote the comment isn’t familiar with the subject matter, and as a result the comment doesn’t count as insightful.
Nothing works 100% reliably.
Similarly, basilisks would work as motivation to develop a certain kind of FAI but there’s a ban on discussing them here. Why? Isn’t it worth credibly threatening to torture people for 50 years to eventually save some large number of future people from dust specks (or worse) by more rapidly developing FAI?
It’s possible that the harm to society of knowing about and expecting torture is greater than the benefit of using torture. In that case, torturing in absolute secret seems to be the way to maximize utility. Not particularly comforting.
A) It’s not credible. B) The basilisk only “works” on a very few people, and as far as I can tell it only makes them upset and unhappy rather than making them work as hard as they can on FAI. C) Getting people on your side is pretty important. Telling people they will be tortured if they don’t get on your side is not a very good move for a small organization.
Um, the threat of torture only works if people know about the threat.
I decided to publish http://www.gwern.net/LSD%20microdosing ; summary:
Discussion elsewhere:
https://news.ycombinator.com/item?id=6565869
http://www.reddit.com/r/Nootropics/comments/1onbz3/lsd_microdosing_a_randomized_blind_selfexperiment/
AI Box Experiment Update
I recently played and won an additional game of AI Box with DEA7TH. Obviously, I played as the AI. This game was conducted over Skype.
I’m posting this in the open thread because unlike my last few AI Box Experiments, I won’t be providing a proper writeup (and I didn’t think that just posting “I won!” is enough to validate starting a new thread). I’ve been told (and convinced) by many that I was far too leaky with strategy and seriously compromised future winning chances of both myself and future AIs. The fact that one of my gatekeepers guessed my tactic(s) was the final straw. I think that I’ve already provided enough hints for aspiring AIs to win, so I’ll stop giving out information.
Sorry, folks.
This puts my current AI Box Experiment record at 2 wins and 3 losses.
I guess you used words. That seems to be all the tactical insight needed to develop an effective counter-strategy. I really don’t get how this escaping thing works on people. Is it due to people being systematically overconfident in their own stubbornness? I mean I know I couldn’t withstand torture for long. I expect even plain interrogation backed by credible threats would break me over time. Social isolation and sleep deprivation would break me too. But one hour of textual communication with a predefined and gamified objective and no negative external consequences? That seems so trivial..
Other people have expressed similar sentiments, and then played the AI Box experiment. Even the ones who didn’t lose still updated to “definitely could have lost in a similar scenario.”
Unless you have reason to believe your skepticism comes from a different place than theirs, you should update towards gatekeeping being harder than you think.
The heuristic of ignoring secretive experiments that don’t publish their details has served me well in the past.
I have played the game twice and updated in the opposite direction you claim.
In fact, my victories were rather trivial. This is despite the AIs trying really really hard.
Did you play against AIs that have won sometime in the past?
I do not honestly know. I will happily play a “hard” opponent like Eliezer or Tux. I have said this before, I estimate 99%+ chance of victory.
Unless I have already heard the information you have provided and updated on it, in which case updating again at your say-so would be the wrong move. I don’t tend to update just because someone says more words at me to assert social influence. Which is kind of the point, isn’t it? Yes, I do have reason to believe that I would not be persuaded to lose in that time.
Disagreement is of course welcome if it is expressed in the form of a wager where my winnings would be worth my time and the payoff from me to the gatekeeper is suitable to demonstrate flaws in probability estimates.
Probably you’re right, but as far as I can tell the rules of the game don’t forbid the use of ASCII art.
Just so long as it never guesses my fatal flaw (_).
The Anti-Reactionary FAQ by Yvain. Konkvistador notes in the comments he’ll have to think about a refutation, in due course.
I was surprised by the breadth of ideas he addresses. It blew my mind that he put that together in under a month.
I assume he’s been thinking about this stuff for years, given he’s known the people in the Reactionary subculture that long.
He wrote Reactionary Philosophy in an enormous, planet-sized nutshell back in March, as a precursor to a reactionary take-down essay that never seemed to materialize, other than a few bits and pieces, such as the one on how war is on the decline. This faq seems to be the takedown he was aiming for, so I imagine he’s been building it for at least the past seven months, probably longer.
(ETA: In the comments on the Anti-reactionary FAQ, Scott says it took roughly a month, so I guess it wasn’t as much of an on-going project as I predicted.)
I’m keeping this question to this thread so as not to spam political talk in the new open thread.
What does a post-scarcity society run by reactionaries look like? If state redistribution is not something that is endorsed, what happens to all the people who have no useful skills? In a reactionary utopia where there is enough production but no efficient way to distribute resources based on ability or merit, what happens to the people who have been effectively replaced by automation? Is it safe to assume that there are no contemporary Luddites among reactionaries?
I can answer that question to a certain extent, as I’ve talked to several people in reaction who have thought about it, as have I. At least once we look far into the posthuman era, it might be most easily imagined as a society of gods above and beasts below, something the ancient Greeks found little difficulty imagining and certainly didn’t feel diminished their humanity. An important difference from the posthuman fantasies often imagined is that the superiority of transhuman minds would not be papered over by fictional legal equality; there would be a hierarchy based on the common virtues the society held in regard, and there would be efforts to ensure those virtues remained the same, to prevent value drift. Much of the society would be organized around striving to enable (post)human flourishing as defined by the values of the society.
An aristocracy would prevail, indeed “rule of the best”, with at least a ceremonial Emperor at its apex. Titles of nobility were in theory awarded in ancient societies to incentivize long-term planning, to define people’s place, to formalize their unique influence as owners of land and as warriors, to define the social circle you are expected to compare yourself with, and in the expectation that people from excellent families would make good use of such privilege. Extending and indeed much improving such a concept offers fascinating possibilities, compatible with human imagination and preferences; think of the sway that nobility, let alone magical or good queens, dukes and knights, hold over even our modern imagination. Consider that in a world where aging is cured and heredity, be it genetic or otherwise, is fully understood, where minds are emulated and merge and diverge, the line between you and your ancestors/previous versions blurs. A family with centuries of diligent service, excellence, virtue, daring and achievement… I can envision such a grand noble lineage made up of essentially one person.
But this is an individual aspect of the vision. The shape of galactic civilization is often, to most, the more motivating aspect. To quote Aaron Jacob:
But there is a subgroup of reaction, including Francis St. Pol, that might lean more strongly toward raw intelligence maximization. And Nick Land embraces capitalism in all its forms tightly.
This is, I think, an example of a near-future issue. The answers I have heard are: the welfare state but with eugenics (many agree with a basic income guarantee); makework, especially relatively fulfilling makework such as crafting “handmade” items for consumption or perhaps farming; and the pod option (virtualization). The latter is indeed at least partial wireheading, but I wonder how much it actually would be if the humans living virtual lives are allowed to interact with each other in something like a very fun MMO; their social relations would still be quite real, and I think that is fundamentally all most people really care about. This option becomes especially economical if uploading minds becomes cheap. I would add the option of humane suicide, but I’m not sure how many would agree.
To a large extent yes, but enthusiasm for technology varies. Most are moderately enthusiastic about technology and believe Progressivism is holding back civilization. Nick Land is an example of an outlier in the pro-technology direction, but there are a few on the other end: those who agree with a variant of the argument Scott Alexander has rediscovered, that technology and wealth inevitably caused the change in values they find detrimental. But I don’t recall any of them arguing for a technological rollback, because none think it feasible.
From what I understood based on reading the anti-reactionary faq, Scott’s interpretation of Moldbug’s interpretation of an ideal reactionary king is that he would either arrange infrastructure such that there are always jobs available, or start wireheading the most useless members of society (though if I’m reading it right, Moldbug isn’t all that confident in that idea, either). I’d not mind a correction (as Scott points out, either option would be woefully inefficient economically).
This makes me suspect he may have much more free time than I guessed, and no longer despair of a new LW survey in the foreseeable future.
It hasn’t appeared in the “Recent on Rationality Blogs” sidebar on LW yet. How long does that normally take? 24 hours?
It seems likely that this post has been blocked from appearing there, due to its political and controversial nature.
I continue blogging on the topic of educational games: Teaching Bayesian networks by means of social scheming, or, why edugames don’t have to suck
I was thinking recently that if soylent kicks something off and ‘food replacement’ -type things become a big deal, it could have a massive side effect of putting a lot of people onto diets with heavily reduced animal and animal product content. Its possible success could inadvertently be a huge boon for animals and animal activists.
Personally, I’m somewhat sympathetic towards veganism for ethical reasons, but the combination of trivial inconvenience and the lack of effect I can have as an individual has prevented me from pursuing such a diet. Soylent would allow me to do so easily, should I want to. Similarly, there are people who have no interest in animal welfare at all. If ‘food replacements’ become big, it could mean the incidental conversion of those who might otherwise never have considered veganism or vegetarianism to a lifestyle that fits within those bounds, purely for personal cost or convenience reasons.
I anticipate artificial meat having a much bigger impact than meal-replacement products. I anticipate that demand for soylent-like meal replacement products among the technophile cluster will peak within the next three years, and will wager $75 to someone’s $100 that this is the case if someone can come up with a well-defined metric for checking this.
Note that the individual impact you can have by being a vegetarian is actually pretty big. Sure, it’s small in terms of percentage of the problem, but that’s the wrong way to measure effect. If you saw a kid tied to railroad tracks, you wouldn’t leave them there on account of all the children killed by other causes every day.
Let $X = the cost to me of being a vegetarian. I’m indifferent between donating $X to the best charity I can find or being a vegetarian. For what values of $X would you advise me to become a vegetarian, assuming that if I don’t become a vegetarian I really will donate an extra $X to, say, MIRI?
Being a vegetarian does not have a positive monetary cost, unless it makes you so unhappy that you find yourself less motivated at work and therefore earn less money or some such. Meat may be heavily subsidized in the US, but it’s still expensive compared to other foods.
I would rather pay $8,000 a year than be a vegetarian. Consequently, if my donating $8,000 to a charity would do more good for the rest of the world than my becoming a vegetarian would, it’s socially inefficient for me to become a vegetarian.
You can make a precommitment to do only one or the other, but if you become vegetarian you don’t actually lose the $8,000 and become unable to give it to MIRI. In this sense it is not a true tradeoff unless happiness and income are easily interconvertible for you.
I have a limited desire to incur costs to help sentients who are neither my friends nor family. This limited desire creates a “true tradeoff”.
I fight the hypothetical—there is no such tradeoff.
A more concrete hypothetical: Suppose that every morning when you wake up you’re presented with a button. If you press the button, an animal will be tortured for three days, but you can eat whatever you want that day. If you don’t press the button, there’s no torture, but you can’t eat meat. By the estimates in this paper, that’s essentially the choice we all make every day (taking the 3:1 ratio for a_m times l_m gives at least 1000 animal-days of suffering avoided per year of vegetarianism, which works out to roughly 3 days of torture per day of vegetarianism).
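As a rough sanity check of that conversion, here is a minimal sketch; the only input is the 1000 animal-days-per-year lower bound quoted above, taken at face value.

    -- 1000 animal-days of suffering avoided per year of vegetarianism,
    -- spread over 365 days, is about 2.7 "torture-days" per day: call it 3.
    tortureDaysPerDay :: Double
    tortureDaysPerDay = 1000 / 365

    main :: IO ()
    main = print tortureDaysPerDay   -- ~2.74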
Anyway—you should not be a vegetarian iff you would press the button every day.
This is absurd. I really, really would rather pay $8,000 a year than be a vegetarian. Do you think I’m lying or don’t understand my own preferences? (I’m an economist and so understand money and tradeoffs and I’m on a paleo diet and so understand my desire for meat.)
I would rather live in a world in which I donate $8,000 a year to MIRI and press the button to one in which I’m a vegetarian and donate nothing to charity.
There is no market for your proposed trade. In this case, using money as a proxy for utility/preference doesn’t net you any insight, because you can’t exchange vegetarianism or animal-years-of-torture for anything else. Of course you can convert to dollars if you really want to, but you have to convert both sides: how much would you have to be paid to allow an animal to be tortured for three days? (This is equivalent to the original question; we’ve just gone through some unnecessary conversions.)
Have you/they thought about other environmental implications? Processing everything down to simple nutrients to make the drink doesn’t sound very energy-efficient. It might compete with eating meat, but definitely not with veganism.
I like my meat, btw.
Personally, I haven’t really thought of it. Might be an angle worth looking at the product from, you’re right.
I haven’t really been following their progress or anything, so I don’t know, but it’s possible they’ve touched on it at some point before. You could dig around on the soylent forum or even start the topic yourself if you really felt like it. I think the creators of the product are reasonably active on there.
One of the primary ingredients of soylent is whey protein, which is produced from cow’s milk. It is not a vegan product.
Whey is a byproduct of cheesemaking, which is why it is currently relatively inexpensive. If people started consuming whey protein en masse, it would shift the economics of whey production and dairy cow breeding in potentially highly unfavorable directions for both the cows and the soylent enthusiasts (because it would become more expensive).
Sadly, there doesn’t seem to be any viable alternative to whey at this point (if there was, they’d use that, but there isn’t).
It doesn’t use whey for protein any more. Apparently the only issue for veganism (and vegetarianism) at the moment is fish oil for Omega 3s.
I didn’t know that. What does it use instead of whey?
Rice Protein, it seems.
Relevant blog posts:
soylent blog, 2013-07-24
soylent blog, 2013-08-27
link to blog
So it was whey, then it was rice protein and pea protein, now it’s just rice protein.
Their ingredient list hasn’t been finalised yet; they seem to be getting close, though. They said they’ll post it once it’s done.
Thanks for the info. While I suppose this is an improvement, I wonder about the scalability of this approach and the impact on the environment. Rice doesn’t exactly produce that much protein per acre of land. I’ll have to look at the numbers though.
I also wonder where they’re sourcing Lysine from.
I know someone who has a young child who is very likely to die in the near future. This person has (most likely) never heard of cryonics. My model of this person is very unlikely to decide to preserve their child even if they knew about it.
I don’t know if I should say something. At first I was thinking that I should because the social ramifications are negligible. After thinking about it for a while, I changed my mind and decided that possibly I was just trying to absolve myself of guilt at the cost of offending a grieving parent. I am not sure if this is just rationalization.
Advice?
Does the person have the financial means to pay for cryonics out of pocket? It probably won’t be possible to get life insurance for the child.
I am not sure. I think so.
Attempting to highlight relevant variables:
how likely your persuasion is to offend parents (which is a pdf, not binary, of course)
how much you care whether you offend parents (see previous parenthetical)
U(child lives a long time) - U(child dies as expected)
P(child lives a long time | child gets frozen)
P(child gets frozen | you try to persuade the parents)
Edited to fix formatting.
You should reconsider this assumption. I would imagine that making suggestions about what to do with someone’s soon-to-be-dead child’s body would be looked upon coldly at best and with active hostility at worst. It’s like if you suggested you knew a really good mortician; it’s just not the sort of thing you’re supposed to be saying.
There’s also the fact that, as a society, we are very keen when watching the bereaved for signs they haven’t accepted the death. To most people, cryonics looks like a sort of pseudoscientific mummification, and the idea that such a person could be revived looks delusional. It is easy to imagine that if your friend shelled out hundreds of thousands on your say-so for such a project, people might see you as preying on a mentally vulnerable person.
This is not to make a value judgement or a suggestion, just pointing out that the social consequences are quite possibly non-negligible.
If you have not signed up for cryonics yourself, you could ask this person for advice as to whether you should. If you have signed up, you could work this into a conversation. Or just find some video or article likely to influence the parent and forward it to him, perhaps an article mentioning Kim Suozzi.
The only plausible ways I can think to bring it up are:
1) Directly
2) Talk about it to someone else with him in the room
3) Convince someone else who is very close to him, but not directly dealing with the loss of their child, to consider it, and possibly bring it up for me
I think if I were to bring it up, I would take the third path.
What expert advice is worth buying? Please be fairly specific and include some conditions on when someone should consider getting such advice and focus on individuals and families versus, say, corporations.
I ask because I recently brainstormed ways that I could be spending my money to make my life better and this was one thing that I came up with and realized I essentially never bought except for visiting my doctor and dentist. Yet there are tons of other experts out there willing to give me advice for a fee: financial advisers, personal trainers, nutritionists, lawyers, auto-mechanics, home inspectors, and many more.
Therapy probably has the most impact on an individual’s life satisfaction.
Sources, please?
It depends on your needs, doesn’t it?
Specify what you want, see if you know how to get there—and if you don’t, check if someone will provide a credible roadmap for a fee...
Personal fitness folk: doing Starting Strength is three hours a week that will make everything else much better, and a personal trainer will make your form good, which is really important. If your conscientiousness is normal, tutors rock. If you can afford one, hire a tutor.
Most personal trainers will not be able to help you have awesome form in powerlifting (starting strength) lifts. You’re better off submitting videos of your form to forums devoted to such things than with the average PT.
How many people here use Anki, or other Spaced Repetition Software (SRS)?
[pollid:565]
I’m finding it pretty useful and wondering why I didn’t use it more intensively before. Some stuff I’ve been adding into Anki:
Info about data structures and algorithms (I’m reading a book on them, and think it’s among the most generally useful knowledge for a programmer)
Specific commands for tools I use a lot (git, vim, bash—stuff I used to put into a cheat sheet)
Some Japanese (seems at least half of Anki users use it to learn Japanese)
Tidbits from lukeprog’s posts on procrastination
Some Haskell (I’m not studying it intensively, but doing a few exercises now and then, and adding what I learn in Anki)
I have much more stuff I’d like to Ankify (my notes on Machine Learning, databases, on the psychology of learning; various inspirational quotes, design patterns and high-level software architecture concepts …).
Some ways I got better at using Anki:
I use far fewer pre-made decks
I control the new-cards-per-day depending on how much I care about a topic. I don’t care much about vim, so have 3 to 5 new cards per day, but go up to 20 for procrastination or
I reorder my decks according to how much I care about them (I have a few decks prefixed with zzz that I review only if the others are done; I don’t mind forgetting about those)
For Japanese, I use double-sided cards and Japanese character input for creating them (I used to manually make both-way cards)
I have various google docs for stuff I’d like to eventually put into Anki, that I then copy-paste by batch into the web interface (there are probably even more convenient ways, but so far I find that the quickest—I want to be able to work on my list of entries before it goes in Anki)
I should probably make a top-level “reminder: use Spaced Repetition” post, but I’m still going to wait a bit more to have a bit more perspective.
Any other tips/advice/spaced repetition stories?
I’ve abandoned many decks almost completely because I made cards that were too complex.
Make the cards simple and combat interference. That doesn’t mean you can’t learn complex concepts. Now that I’ve got it right, I can go through hundreds of reviews per day if I’ve fallen behind a bit, and don’t find it exhausting. If I manage to review every day, it’s because I’m doing it first in the morning.
I use a plugin/option to make the answer show automatically after 6 seconds, so it’s easy to spot cards that are formatted badly or cause interference, and take too much time.
Some general Anki tips: if you use it to learn a foreign language, use the Awesome TTS plugin. Whenever Anki displays a foreign word, it should also play the corresponding sound. Don’t try to consciously get the sound. Just let Anki play the sound in the background.
I use a plugin that adds extra buttons to new cards. I changed it in a way that gives the 6th button a timeframe of 30–60 days until the new card shows up the second time. I use that button for cards that are highly redundant.
Frozen Fields is a plugin that’s useful for creating cards, and I wouldn’t want to miss it. It allows you to prevent specific fields in the new-card dialog from being cleared when you create a new card.
Quick Colour Changing is another useful add-on. It allows you to use color more effectively to highlight aspects of cards.
I have written my more general thought about how to use Anki lately in another thread: http://lesswrong.com/r/discussion/lw/isu/advice_for_a_smart_8yearold_bored_with_school/9vlh
One of the core ideas that I developed over the last while is that you really want to make cards as easy as possible. I think the problem with most premade cards that you find online is that they just aren’t easy enough. They take too much for granted.
Take an issue such as the effect of epinephrine on the heart. It raises heart rate. Most of the decks that you find out there would ask something like: “What’s the effect of epinephrine on the heart?” That’s wrong. That’s not basic enough. It’s much simpler to ask: “epinephrine ?(lowers/raises)? heart rate”.
I think that idea also helps a lot with language learning. I think the classic idea of asking “What does good mean in French?” is problematic. If you look in the dictionary you will find multiple answers, and the card can only hold one answer. A card that just asks “good means ?(bon/mal)?” is much simpler. I have made a French Anki deck using that principle, and it’s astonishing to me how well the learning flows.
If someone wants to test the deck I’m happy to share it. I would estimate the effect to be that for a lifetime investment of 20 hours you get the 200 most common French words + ~100 additional words. For most of the verbs you will be able to recognize the three basic tenses (present, future and passé simple). I think you will know the words well enough to understand them when you read a text. If you want conversational fluency with those words I think you will need additional practice. Deck is (French/English). For those of you who want to start using Anki, I think it would be a good start.
If I were, for example, to start a Vim deck now, I would group functions. Take some knowledge like:
This makes cards:
?(w/W)? → next word by punctuation
?(w/W)? → next word by spaces
?(w/b/e)? → next word by punctuation
?(W/B/E)? → next word by spaces
I would also add:
?(q/w/e/r/t)? → next word by punctuation
?(w/s/x)? → next word by punctuation
This probably gets you to the point of being able to answer a card in ~4 seconds. Cards that aren’t hard. You can simply integrate a new deck of 500 of those cards in an hour once the deck is ready.
Using it regularly is the most important thing by far. I don’t use it anymore; the costs of starting back up seem too high (in that I try and fail to re-activate the habit). I wish I hadn’t let that happen. Don’t be me; make Anki a hardcore habit.
Why not just restart from scratch with empty decks? It should be less daunting at first...
My strategy to avoid losing the habit is having decks I care less about than others, so that when I stopped using Anki for a few weeks, I only had to catch up on the “important” decks first, which was less daunting than catching up with everything (I eventually caught up with all the decks, somewhat to my surprise).
I’m also more careful than before in what I let in—if content seems too unimportant, it gets deleted. If it’s difficult, it gets split up or rewritten. And I avoid adding too many new cards.
Continuing with your current deck should be strictly superior to starting from scratch, because you will remember a substantial portion of your cards despite being late. Anki even takes this into account in its scheduling, adjusting the difficulty of cards you remembered in that way. If motivation is a problem, Anki 2.x series includes a daily card limit beyond which it will hide your late reviews. Set this to something reasonable and pretend you don’t have any late cards. Your learning effectiveness will be reduced but still better than abandoning the deck.
I’ve previously let Anki build up a backlog of many thousand unanswered cards. I cleared it gradually over several months, using Beeminder for motivation.
True, I forgot about that option—I actually discovered it after I had cleared my backlog, and thought “hm, that could’ve been useful too...”
I think when restarting a deck after a long time it’s important to use the delete button a lot. There might be cards that you just don’t want to learn and it’s okay to delete them.
You could also gather the cards you think are really cool and move them into a new deck and then focus on learning that new deck.
When using pre-made decks the only efficient way is to follow along, i.e. if you don’t know the source book/course it’s not very good. Partial exception, vocabulary lists.
Agreed—and you can even go wrong with vocabulary lists if they’re too advanced (some German vocabulary got overwhelming for me, I just dropped everything).
Another partial exception can be technical references (learning keywords in a programming language or git commands).
People who want to eat fewer animal products usually have a set of foods that are always okay and a set of foods that are always not (which sometimes still includes some animal products, such as dairy or fish), rather than trying to eat animal products less often without completely prohibiting anything. I’ve heard that this is because people who try to eat fewer animal products usually end up with about the same diet they had when they were not trying.
I wonder whether trying to eat more of something that tends to fill the same role as animal products would be an effective way to eat fewer animal products.
I currently have a fridge full of soaking dried beans that I have to use up, and the only way I know how to serve beans is the same as the way I usually eat fish, so I predict I’ll be eating much less fish this week than I usually do (because if I get tired of rice and beans, rice and fish won’t be much of a change). I’m not sure whether my result would generalize to people who use more than five different dinner recipes, though. I should also add that my main goal is learning how to make cheap food taste good by getting more practice cooking beans—eating fewer animal products would just be a side effect.
Now that I write this, I’m wishing I’d thought to record what food I ate before filling my fridge with beans. (I did write down what I could remember.)
People who you know want to eat fewer animal products. If I just decided to eat less meat, you’d be much less likely to find out this fact about me than if I decided to become fully lacto-ovo-vegetarian.
Good point.
I don’t think that’s an accurate description of the average vegetarian. A lot of self-labeled vegetarians do eat animal products from time to time.
Most people who tell you that they try to eat only healthy food and no junk food, still eat junk food from time to time. The same goes for vegetarians eating flesh.
Additionally, eating less red meat is part of the official mantra on healthy eating. A lot of people subscribe to the idea that limiting the amount of red meat they eat is good, while not eliminating it completely.
I find this hard to believe, knowing several people who have become vegetarians and vegans, and hardly ever eating meat myself. Do you have any support for this claim? Anecdotally, one new vegan (previously a vegetarian) stopped eating pizza, which had previously been more or less a mainstay of his. My sister became a vegetarian as a kid despite actually quite liking meat at the time; not only did her eating habits change, but those of my entire family changed significantly. My parents describe it as going from thinking “What meat is for dinner?” to thinking “What is for dinner?” every night.
I think that was “people who try to eat fewer animal products without completely prohibiting anything”. It seems plausible to me.
Yes, this is what I meant.
Okay, that sounds plausible.
Prohibiting particular foods on certain days is also popular: “Meatless Mondays” or Catholic-style fasts.
I would like recommendations for a small, low-intensity course of study to improve my understanding of pure mathematics. I’m looking for something fairly easygoing, with low time-commitment, that can fit into my existing fairly heavy study schedule. My primary areas of interest are proofs, set theory and analysis, but I don’t want to solve the whole problem right now. I want a small, marginal push in the right direction.
My existing maths background is around undergrad-level, but heavily slanted towards applied methods (calculus, linear algebra), statistics and algorithms. My knowledge of pure maths is pretty fractured, not terribly coherent, and mostly exists to serve the applied areas. I am unlikely to undertake any more formal study in pure mathematics, so if I want to consolidate this, I’ll have to do it myself.
This came to my attention as I’ve recently started teaching myself Haskell. This is mostly an intellectual exercise, but at some point in the future I would like to work with provable systems. I can recognise the homology between some constructs in Haskell and mathematical objects, but others I don’t notice until they’re explicitly pointed out. I get the very strong impression that my grasp on functional programming would be a lot more powerful if I had a stronger grounding in pure maths.
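For one concrete instance of the kind of correspondence I mean, here is a minimal sketch (using only standard Prelude and Data.Monoid names, nothing exotic): Haskell’s Monoid type class is exactly the algebraist’s monoid, a set with an associative binary operation and an identity element, and the laws instances are expected to satisfy are the monoid axioms.

    import Data.Monoid (Sum(..))

    -- Monoid laws (not checked by the compiler, but expected of every instance):
    --   mempty <> x    == x                 (left identity)
    --   x <> mempty    == x                 (right identity)
    --   (x <> y) <> z  == x <> (y <> z)     (associativity)

    main :: IO ()
    main = do
      print (getSum (Sum 3 <> Sum 4 <> mempty))   -- integers under addition: 7
      print ([1, 2] <> [3] <> mempty :: [Int])    -- lists under concatenation: [1,2,3]

It’s spotting that kind of structure in less obvious places that I’d like the stronger grounding for.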
If you like Haskell’s type system I highly recommend learning category theory. This book does a good job. Category theory is pretty abstract, even for pure math. I love it.
Essentially, this kind of math is called category theory. There is this book, which is highly recommended, and fills your criteria decently well. I am currently working through this book, and I am happy to discuss things with you if you would like.
I am not sure if it is good for your background and needs, but I would like to mention The Book of Numbers. I read and understood this book in high school without any formal training in calculus. I think this book is very effective at showing people how math can be beautiful in a context that does not have many prerequisites.
Inaccessible Is Ungovernable
I upvoted this, even though the part where wealth is suggested as a filter for competence completely fails to distinguish the Bill Gateses (rich because competent) from the Paris Hiltons (rich because someone somewhere in the ancestry was competent and/or lucky). (Though it’s possible I just upvoted it because it starts out talking about accessibility and how the existence of imperfect beings kinda nukes the idea of libertarian free will, both of which I wish more people understood.)
After Conrad decided to give 97% of his fortune to charity, it appears to me that Paris will earn more money than she will inherit. Even if she is as stupid as the character she plays, she has acquired competent agents.
I don’t have much of a point, but people who win the fame tournament are probably not famous by accident.
Is disgust “conservative”? Not in a Liberal society (or likely anywhere else) by Dan Kahan
His argument against Haidt’s ideas about psychological differences between liberals and conservatives, related to his moral foundations theory, is similar to the ones Vladimir_M and Bryan Caplan made, but he upgrades it with a plausible explanation for why it might seem otherwise. The references are well worth checking out.
I recently found out a surprising fact from this paper by Scott Aaronson. P=NP does not (given current results) imply that P=BQP. That is, even if P=NP there may still be substantial speedups from quantum computing. This result was surprising to me, since most computational classes we normally think about that are a little larger than P end up equaling P if P=NP. This is due to the collapse of the polynomial hierarchy. Since we cannot show that BQP lives in the polynomial hierarchy, we can’t make that sort of argument.
Sure, but that’s just saying that P=NP is not a robust hypothesis. Conditional on P=NP, what odds do you put that P is not P^#P or PSPACE? (though maybe the first is a robust hypothesis that doesn’t cover BQP)
I’m not sure. If P=NP this means I’m drastically wrong about a lot of my estimates. Estimating how one would update conditioning on a low probability event is difficult because it means there will be something really surprising happening, so I’d have to look at how we proved that P=NP to see what the surprise ended up being. But, if that does turn out to be the case, I’m fairly confident I’d then assign a pretty high probability to P=PSPACE. On the other hand we know that of the inequalities between P, NP, PSPACE and EXP, at least one of them needs to be strict. So why should I then expect it to be strict on that end? Maybe I should then believe that PSPACE=EXP? PSPACE feels closer to P than to EXP but that’s just a rough feeling, and we’re operating under the hypothetical that we find out that a major intuition in this area is wrong.
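(For reference, the known inclusions here are P ⊆ NP ⊆ PSPACE ⊆ EXP, and the time hierarchy theorem already gives P ≠ EXP; that is why at least one of those inclusions has to be strict, even though we can’t currently say which one.)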
Apparently recent work shows that direct giving of grants in developing countries has high rates of return. This more or less confirms what Givewell has said before about microfinance.
The linked givewell posts discuss microloans, not outright grants. Link to Blattman’s paper: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2268552
Has someone attempted to make the jump from the reported data to QALY estimates or some other comparable measure?
My current guide to reading LessWrong can be found here.
I would like to know what people think about my potentially adding it to the sequences page, along with Academian’s and XiXiDu’s guides.
Just looking for feedback. Cheers.
I like the ideas of (1) providing an alternative video introduction, because some people like that stuff, and (2) having the last part of “what to do after reading LessWrong”.
I think the rationality videos should even be linked from the LW starting page. Or even better, the LW starting page should start with a link saying “if you are here for the first time, click here”, which would go to a wiki page, which would contain the links to videos (with small preview images) at the top.
Cheers—yeah, especially for my friends for whom reading a couple of those posts would be a big deal, the talks are very useful. I’ll make a top-level comment on next week’s open thread proposing the idea :)
Added: By the way, as to the ‘post LW’ section, you might’ve noticed that the last post in ‘welcome to Bayesianism’ is a critique of LessWrong as a shiny distraction rather than of actual practical use. I’m hoping the whole thing leads people to be more practically rational and involved in EA and CFAR.
Might be useful to have introduction points for people with a certain degree of preexisting knowledge of the subject, but from other sources. E.g. if I want to introduce a philosophy postgrad to LessWrong, I would want to start with a summary of LessWrong’s specific definition of ‘rationality’ and how it compares to other versions, rather than starting from scratch.
I’m sorry, I had a little difficulty parsing your comment; are you saying that my introduction would be useful for a philosophy postgrad, or that my summary is starting from scratch and the former would be something for someone to work on?
Apparently replies to myself no longer show up in my inbox.
Kudos to whoever made that happen.
LW tells people to upvote good comments and downvote bad comments. Where do I set the threshold of good/bad? Is it best for the community if I upvote only exceptionally good comments, or downvote only very bad comments, or downvote all comments that aren’t exceptionally good, or something else? Has this been studied? Is it possible to make a karma system where this question doesn’t arise?
Information theory says that you communicate the most if you send the three signals of up, down, nothing equally often. This would be a psychological disaster if everyone did it, but maybe you should.
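(With three equally likely signals, each vote carries log2(3) ≈ 1.58 bits, the maximum possible for a three-way choice; any skew toward one signal lowers the information per vote.)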
It seems to me that the total voting ought to reflect the “net reward” we want to give the poster for their action of posting, like a trainer rewarding good behavior or punishing bad. For this reason, my voting usually takes into account the current total score. I think the community already abides by this for most negatively scored posts—they usually don’t sail much below −2. For posts that I feel I really benefited from, though, I don’t really follow my own policy per se. -- I just “pay back” what I got out of it to them.
I basically only downvote if there’s some line of argument in the post that I object to. I should more often say specifically what I’m objecting to when I do this.
My opinion is it has to depend on the current score of the post. [At least under the current system, which reports, if you will, net organic responses; in a different system where responses were from solicited peer-review requests, different behavior would be warranted.]
Good questions. I don’t know. There’s some further discussion here.
This should be implemented in the system if done at all. Downvoting “undeservedly” upvoted posts will make obvious but true comments look controversial. I think inconsistently meta-gaming the system just makes it less informative.
If you don’t think something deserves the upvotes, but isn’t wrong, then simply don’t vote.
ETA: I assume you didn’t mean that downvoting to balance the votes is good, but you didn’t mention it either.
Good point. I don’t actually do that, I do the “don’t vote” policy you mentioned, but I hadn’t thought about why, or even noticed that I do it correctly. Thanks. Your point that it would make the voting look controversial is well taken.
I would be tempted to upvote something that I thought had karma that was too low. This would tend to cause it to look “controversial” when, maybe, I agreed that it deserved a negative score. Is upvoting behavior also a bad idea in this case and I should just “not vote”?
I don’t see how that’s possible without it having more information.
I don’t want to overthink this too much as I can’t help but think that these issues are artifacts of the voting system itself being a bit crude: e.g. should I be able to “vote” for a target karma score instead of just up or down? The score of the post could be the median target score.
I don’t know. I’m quite green here too. I don’t usually read heavily downvoted comments, as they’re hidden by default. Downvoted comments are less visible anyway, so any meta-gaming on them has less meaningful impact.
I might upvote a downvoted comment, if I don’t understand why it’s downvoted and want it to be more visible so that discussion would continue. It would be good to follow up with a comment to clarify that, but many times I’m too lazy :(
I think making the system more complicated would just make people go even more meta.
I think that if we could coordinate perfectly what we mean by good comments, and each comment has a score between 0 and 1, then we should all upvote a comment with a positive score with a probability equal to its score, and downvote a comment with negative score with probability equal to its negative score.
This would cause the karma assigned to a post to drift over time unboundedly, with an expectation of (the traffic that it receives) * (the average score of voters), which seems problematic to me.
Nitpick: maybe you want the score to run between −1 and 1 and voting probability to be according to the absolute score? I’m confused by your phrase “comment with negative score”.
“negative score” means the negative of the score you give. If you give −1/2, you downvote with probability 1⁄2.
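A minimal sketch of that voting rule in Python, assuming a shared consensus score in [−1, 1] as clarified above (the function name and the uniform random draw are purely illustrative, not anything the site implements):

```python
import random

def cast_vote(consensus_score):
    """Illustrative only: upvote with probability equal to a positive
    consensus score, downvote with probability equal to the magnitude
    of a negative score, otherwise abstain."""
    r = random.random()
    if consensus_score > 0 and r < consensus_score:
        return +1  # upvote
    if consensus_score < 0 and r < -consensus_score:
        return -1  # downvote
    return 0       # abstain

# A comment everyone rates at -1/2 gets downvoted by roughly half the voters:
votes = [cast_vote(-0.5) for _ in range(1000)]
print(sum(votes))  # about -500 on average
```

Note that the expected total is then (number of voters) × score, which is exactly the unbounded drift objection raised above.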
If we could coordinate perfectly, we’d delegate all the voting to one person. Can you try solving the problem with weaker assumptions?
Why are AMD and Intel so closely matched in terms of processor power?
If you separated two groups and incentivized them to develop the best processors and came back in 20 years, I wouldn’t expect both groups to have done approximately comparably. Particularly so if the one that is doing better is given more access to resources. I can think of a number of potential explanations, none of which are entirely satisfactory to me, though. Some possibilities:
There is more cross-talk between the companies than I would guess (through hiring former employees, reading patents, reverse engineering, etc.)
Outside factors matter a lot: e.g. the fab industry actually determines a lot of what they can do
Companies don’t work as hard as they can when they know that they’re slightly beating their competitors (and the converse)
Selection bias: I’m not comparing Intel to Qualcomm or to any competitors that went out of business, and companies that do worse on performance would naturally transition to other niches like low-power. Nor am I considering markets where there was a clear dominator until their patents expired.
Basic research drives some improvements and is mostly accessible to both
Though none of these are particularly compelling individually, taken together they seem pretty plausible. Am I missing anything? I know basically nothing about this industry so I wouldn’t be surprised if there was a really good reason for this.
I’m afraid I didn’t keep information about the citation, but when I was reading up on chip fabs for my essay I ran into a long article claiming that there is a very strong profit motive for companies to stack themselves into an order from most expensive & cutting-edge to cheapest & most obsolete, and that the leading firm can generally produce better or cheaper but this ‘uses up’ R&D and they want to dribble it out as slowly as possible to extract maximal consumer surplus.
There is lots of cross-talk. Note also that Intel and AMD buy tools from other companies, and so if Cymer is making the lasers that both use for patterning, then neither of them has a laser advantage.
I find it in general very hard to predict what kind of acceptance my posts will receive, based on the karma points each one gets.
While as a policy I try not to post strategically (that is, rationality quotes, pandering to the Big Karmers, etc.), but only those things I find relevant or interesting for this site, I have found no way to reliably gauge the outcome.
It is particularly bewildering to me that comments that (I hope) are insightful get downvoted to the limit of oblivion or simply ignored, while mere comments or requests for clarification are the most upvoted.
Has someone constructed a model of how the consensus works here on LW? Just curious...
Curious about specific examples.
This can have many reasons. Posting too late, when people don’t read the article. Using difficult technical arguments, so people are not sure about their correctness, so they don’t upvote.
If you click on my name, the first two comments at −2 are the ones: I was seriously trying to contribute to the discussion.
Yeah, this does not bother me much; I’m more puzzled by the “trivial comment → loads of karma” side: “How did you make those graphs” and “How do you say ‘open secret’ in English” attracted 5 karma each. “Loads” here is relative to the average number of points my posts receive.
Before, I modeled karma as a kind of power-law: all else being equal, those who have more karma will receive more karma for their comment. So I guessed that the more you align to the modus cogitandi of Big Karmers, the more karma you will receive. This doesn’t explain the situation above, though.
I don’t know about other people but when I upvote a simple question I’m saying “yeah I was wondering this too”
Upvoted because yeah I do this too.
Upvoted to reinforce explaining what your votes mean.
Upvoted because I was going to write the same thing, and upvoting the comment is what I usually do when I see that someone has already written what I was going to write.
+1 for explaining why. I’m not sure I agree with the behavior particularly, since it could give a lot of credit for something relatively obvious. I probably wouldn’t do it if the question had more than +5 already unless I was really glad.
Oh, I will give extra +1′s when the context made me think it would be hard for the person to ask the question they asked, e.g. because it challenges something they’d been assuming.
As a rule I don’t think it’s productive to worry about karma too much, and I’m going to assume you agree and that you’re asking “what am I missing, here” which is a perfectly useful question.
Before I get into your question, here’s an example that was at −2 when I encountered it, but that I see has now risen to having +5, so there’s definitely some fluidity to the outcome (you might be interested in the larger discussion on that page anyway).
So the two examples that you mention at −2 presently are 1 and 2.
Part of the problem in those examples seems to be an issue of language, but I don’t think that’s all of it. For example, you offer to clarify that when you say “natural inclination” you mean an “innate impulse [that] is strongly present almost universally in humans” and give examples of things humans seek regularly (“eating, company, sex”). From my interpretation of the other posts, when they say “natural inclination” they mean “behavior that would be observed in a group of humans (of at least modest size) unless laws or circumstances specifically prevent it”. I suspect that the downvotes could be because your meaning was sufficiently unexpected that even when you wrote to clarify what it was, they couldn’t believe that that was what you meant. And, on balance, no, that doesn’t seem right to me since you were making an honest effort to clarify terms.
For what it’s worth, here’s why I’d object to your choice of terms, and this could explain some of the downvotes, since it’s obviously much less effort to just downvote than explain. I’d object because your definition inserts an implied “and the situation is normal” into the definition. For example, in normal situations a person would rather have an ice cream than kill someone. But if the situation is that you’re holding a knife and the man in front of you has just raped your sister and boasts about doing it again soon, maybe the situation is different enough that the typical innate impulse is different. Since what’s usually of interest is behavior over a long period of time, the dependency on the situation is problematic.
As for the second comment, I don’t understand it. Maybe I’m missing context. You seem to set up an unreasonable partition of the possibilities into 3 things.
Anyway, sometimes the negative votes can tell us what we’re doing wrong, sometimes they seem to just be a consequence of saying something that’s not mainstream for the site, but I don’t want to let myself get trapped into dismissing them all that way, so I usually take a minute to think about it when it happens.
Incidentally, I think it would be a big mistake to actively try to get maximum +karma on your comments. On the benign side you’d start trying hard to be the first poster on major articles. On the more negative side you’d have the incentive to approve of the prevailing argument with clever words. To exaggerate: “Be proud that you don’t have too much that was merely popular.” That said, some of the highly voted articles, at least, clearly deserve it.
There are possible privileged situations, however. If you are in the environment of evolutionary adaptedness, living with your tribe out on the African savannah, how many days per year are you going to have an “inclination” to kill another human, vs. how many days are you going to have an “inclination” to eat, have sex and socialize. I’m guessing the difference is something like 1 vs. 360, unless tribal conflicts were much more common in that environment than I expect, and people desired to kill during those conflicts more than I expect (furthermore I would expect people to see it as an unfortunate but necessary action, which doesn’t jive with my sense of the definition of “inclination”, but that’s not critical to the point). Clearly putting them on the same level carves up human behavior in a particular way which is not obvious just from the term “natural inclination.”
That all seems fair to me. To be honest I haven’t read enough of the context to know how relevant these distinctions are to it, and I agree the term seems problematic which is all the more reason that trying to nail it down is actually useful behavior, hence MrMind’s concern, I guess.
One reason is people vote to signal simple agreement.
Not saying it would work, but there could be “warm fuzzy votes” that don’t contribute to karma at all, or contribute much less, and are shown separately. Comments could be arranged by those too if need be. It would be an interesting experiment to see how much people agree with posts that have no other value.
As for a model… obviously not a full model:
Statements that are short and that are non-controversially in line with the position that most readers would approve of and flow with the context well and get a lot of “traffic” are the most likely to have skyrocketing +1′s.
If it has a useful insight or a link to an important resource this also helps, but only if it’s lucid enough in its explanation.
I am interested in reading further on objective vs subjective Bayesianism, and possibly other models of probability. I am particularly interested in something similar to option 4 in What Are Probabilities, Anyway. Any recommendations on what I should read?
I recently memorized an 8-word passphrase generated by Diceware.
Given recent advances in password cracking, it may be a good time to start updating your accounts around the net with strong, prescriptively-generated passphrases.
Added: 8-word passphrases are overkill for most applications. 4-word passphrases are fairly secure under most circumstances, and the circumstances in which they are not may not be helped by longer passphrases. The important thing is avoiding password reuse and predictable generation mechanisms.
I find it much easier to use random-character passwords. Memorize a few, then cycle them. You’ll pretty much never have to update them. If you can’t memorize them all, use software for that.
The “dictionary attacks” sentence is a non sequitur. The number of possible eight-word Diceware passwords is within an order of magnitude of the number of possible 16-character line noise passwords.
You’re right, removed it. I’m not sure I understand why people prefer using passphrases though. Isn’t it incredibly annoying to type them over and over again?
I think the main advantage is that they’re easier to memorize.
Another is that, although they’re harder to type because they’re longer, they’re easier to type because they don’t have a bunch of punctuation and uppercase letters, which are harder to type on some smartphones (and slower to type on a regular keyboard). And while I’m at it, one more minor advantage (not relevant for people making up their own passwords) is that the average person does not know punctuation characters very well, e.g., does not know the difference between a slash and a backslash.
They may be easier to type the first few times, but after your “muscle memory” gets it even the trickiest line noise is a breeze.
That smartphone thing is a good point, though. My phone is my greatest security risk because of this problem. Probably should ditch the special characters.
Yes, no one should use line noise passwords because they are hard to type. If you want 100 bits in your password, you should not use 16 characters of line noise. But maybe you should use 22 lower case letters.
The xkcd cartoon is correct that the passwords people do use are much less secure than they look, but that is not relevant to this comparison. And lparrish’s links say that low entropy pass phrases are insecure.
But why do you want 100 bit passwords? The very xkcd cartoon you cite says that 44 bits is plenty. And even that is overkill for most purposes. Another xkcd says “The real modern danger is password reuse.” Without indicating when you should use strong passwords, I think this whole thread is just fear-mongering.
According to the Diceware FAQ, large organizations might be able to crack passphrases 7 words or less in 2030. Of course that’s different from passwords (where you have salted hashes and usually a limit on the number of tries), but I think when it comes to establishing habits / placing go-stones against large organizations deciding to invest in snooping to begin with, it is worthwhile. Also, eight words isn’t that much harder than four words (two sets of four).
One specific use I have in mind where this level of security is relevant is bitcoin brainwallets for prospective cryonics patients. If there’s only one way to gain access to a fortune, and it involves accessing the memories of a physical brain, that increases the chances that friendly parties would eventually be able to reanimate a cryonics patient. (Of course, it also means more effort needs to go into making sure physical brains of cryonics patients remain in friendly hands, since unfriendlies could scan for passphrases and discard the rest.)
I don’t understand what you mean by this. How are salting and limits properties of passwords (but not passphrases)?
What I meant is that those properties are specific to the secret part of login information used for online services, as distinct from secret information used to encrypt something directly.
Sorry, what I meant is something more like ‘encryption phrases’ and ‘challenge words’. Either context could in principle refer to a word or a phrase, actually. However, when you are encrypting secret data that needs to stay that way for the long term, such as your private PGP key, it is more important to pick something that can’t conceivably be brute forced, hence the term ‘passphrase’ usually applies to that. If someone steals your hard drive or something, your private key will only stay private for as long as the passphrase you picked is hard to guess, and they could use that to decrypt any incoming messages that used your public key.
When you are simply specifying how to gain access to an online service, it is a bit less crucial to prevent the possibility of brute forcing (so a shorter ‘password’ is sort of okay), but it is crucial for the site owner to use things like salt and collision-resistant hash functions to prevent preimage attacks, in the event that the password-hash list is stolen. (Plaintext passwords should never be stored, but unsalted hashes are also bad.)
If someone was using a randomly generated phrase of 4+ words or so for their ‘password’, salt would be more or less unnecessary due to the extremely high probability that it is unique to begin with. This makes for one less thing you have to trust the site owner for (but then, you do still have to trust that they aren’t storing plaintext, that the hash they use is collision-resistant, etc).
I’m not sure if it is possible to use salt with something like PGP. I imagine the random private key is itself sufficient to make the encrypted key as a whole unique. Even if the passphrase itself were not unique, it would not be obvious that it isn’t until after it is cracked. The important thing to make it uncrackable is that it be long and equiprobable with lots of other possibilities (which incidentally tends to make it unique). Since the problem isn’t uniqueness to begin with, but rather the importance of it never being cracked even with lots of time and brute force, salt doesn’t do a lot of good.
Bitcoin private keys are bound to the number of bits of entropy stored in the public address, which I believe is 122 or so. Since the presence of coins at a public address is public information, brute force attacks should be expected to track the cost of computing power / the value of coins stored. It seems to be pretty good security for the near term, but Douglas_Knight predicts that quantum computers will break bitcoin. (Presumably later versions will be more robust against quantum computers, or something other than bitcoin will take dominance.)
In any case, while I have been calling the phrase used for a bitcoin brainwallet a ‘passphrase’, and it is more in that category than not (being important to protect from brute force, not having a salt, and not being part of a login sequence), note that it is unlike a PGP passphrase in that it represents the seed for the key in its entirety rather than something used to encrypt the key.
Disclaimer: I’m not an expert in crypto.
Yes, there are some uses. I’m not convinced that you have any understanding of the links in your first comment, and I am certain that it was a negative contribution to this site.
If you really are doing this for such long term plans, you should be concerned about quantum computers and double your key length. That’s why NSA doesn’t use 128 bits. Added: but in the particular application of bitcoin, quantum computers break it thoroughly.
Well, that’s harsh. My main intent with the links was to show that the system for picking the words must be unpredictable, and that password reuse is harmful. I can see now that 8-word passphrases are useless if the key is too short or there’s some other vulnerability, so that choice probably gives us little more than a false sense of security.
This is news to me. However, I had heard that there are only 122 bits due to the use of RIPEMD-160 as part of the address generation mechanism.
Rudeness doesn’t help people change their minds. Please elaborate what you mean by this. Even if he’s wrong, the following discussion could be a positive contribution.
There are 7776 words in Diceware’s dictionary. Would you rather memorize 8 short words, 22 letters (a-z case insensitive), or 16 characters (a-z case sensitive, plus numerals and punctuation marks)?
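For a rough back-of-the-envelope comparison (a sketch assuming each symbol is chosen uniformly and independently; the 94-character count for “line noise” is my own rough assumption):

```python
import math

options = [
    ("8 Diceware words", 7776, 8),       # size of the Diceware word list
    ("22 lowercase letters", 26, 22),    # a-z, case-insensitive
    ("16 mixed characters", 94, 16),     # rough count of typeable ASCII symbols
]

for label, alphabet_size, length in options:
    bits = length * math.log2(alphabet_size)
    print(f"{label}: about {bits:.0f} bits of entropy")
```

All three come out around 100 bits, so the choice really is about memorability and typing effort rather than raw strength.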
If I really had to type them in myself every time I wanted to use them, 16 random characters absolutely. Repeatedly typing the 8 words compared to 16 characters probably takes more time in the long run than memorizing the random string. Memorizing random letters isn’t significantly easier in my experience than memorizing random characters.
I find myself oversensitive to negative feedback and under-responsive to positive feedback.* Does anyone have any advice/experience on training myself to overcome that?
*This seems to be a general issue in people with depression/anxiety; I think it’s something to do with how dopamine and serotonin mediate the reward system, but I’m not an expert on the subject. Curiously, sociopaths have the opposite issue, under-responding to negative feedback.
Spend more cognitive resources on dealing with positive feedback.
When someone says that you have a nice shirt, think about why they said it. Probably they wanted to make you feel good. What does that mean? They care about making you feel good. You matter to them.
Gratitude journaling is a tool with a good evidence base. At the end of every day, write down all the good feedback that you got. It doesn’t matter if it was trivial. Just write stuff down.
Meditation is also a great tool.
I wouldn’t be sure about that claim. I think sociopaths rather have different criteria for what constitutes negative feedback. I think physical pain would have the same effect on a sociopath as on a regular person.
The Feeling Good Handbook has good evidence as a treatment for depression and could help you to identify and address your automatic thoughts caused by negative feedback.
I’d like to highly recommend Computational Complexity by Christos H. Papadimitriou. Slightly dated in a fast changing field, but really high quality explanations. Takes a bit more of a logic-oriented approach than Hopcroft and Ullman in Introduction to Automata Theory, Languages, and Computation. I think this topic is extremely relevant to decision theory for bounded agents.
Thanks for the recommendation, but isn’t this sort of thing better suited for the Media thread?
I would recommend the Best Textbooks on Every Subject thread, rather. This comment (upvoted, incidentally) very nearly meets the requirements there:
Those who have been reading LessWrong in the last couple of weeks will have little difficulty recognizing the poster of the following. I’m posting this here, shorn of identities and content, as there is a broader point to make about Dark Arts.
These are, at the time of writing, his two most recent comments. I will focus on the evidential markers, and have omitted everything else. I had to skip entirely over only a single sentence of the original, and that sentence was the hypothetical answer to a rhetorical question.
Someone replied to that, and his reply was:
In every sentence, he is careful to say nothing, while appearing to say everything. His other postings are not so dense with these thin pipings of doubt, but they are a constant part of his voice.
Most of us have read or watched Tolkien. Some have read C.S. Lewis. We know this character, and we can recognise his voice anywhere. Lewis called him Professor Weston; Tolkien called him Grima Wormtongue.
I’m having difficulty recognizing the poster of the following, and searching individual phrases is only turning up this comment. While I approve of making broad points about Dark Arts, I’m worried that you’re doing so with a parable rather than an anecdote, which is a practice I disapprove of.
I’m guessing that RichardKennaway means this post and the related open thread.
I, thankfully, missed that the first time around. Worry resolved. (Also, score one for the deletion / karma system, that that didn’t show up in Google searches.)
I couldn’t figure it out, either—the good news is that someone who’s so vague has a reasonable chance of being so boring as to be forgettable.
I’m fairly certain that the user RK is referring to was deleted from the site.
EDIT: But I am wrong! He wrote a post that got deleted, and I got confused.
Not as of this writing.
corrected, thanks
I agree that being slippery and vague is usually bad, and one way to employ Dark Arts.
However, avoiding qualifiers of uncertainty and not softening one’s statements at all exposes oneself to other kinds of dark arts. Even here, it’s not reasonable to expect conversants to be mercifully impartial about everything. Someone who expects strong opposition would soften their language more than someone whose statements are noncontroversial.
There’s slippery, and there’s vague. The one that I have not named is certainly being slippery, yet is not at all vague. It is quite clear what he is insinuating, and on close inspection, clear that he is not actually saying it.
Qualifiers of uncertainty should be employed to the degree that one is actually uncertain, and vagueness to the degree that one’s ideas are vague. In diplomacy it has been remarked that what looks like a vague statement may be a precise statement of a deliberately vague idea.
If your concerns are valid, then hiding the identity of the accused doesn’t help those who are not aware of who you are talking about. We’re all grown-ups here; we can handle it.
I think the pattern is also important per se. You can meet the pattern in the future, in another place.
It’s a pattern of how to appear reasonable, cast doubt on everything, and yet never say anything tangible that could be used against you. It’s a way to suggest that other people are wrong somehow, without accusing them directly, so they can’t even defend themselves. It is not even clear if the person doing this has some specific mission, or if breeding uncertainty and suspicion is their sole mission.
And the worst thing is, it works. When it happens, expect such a person to be upvoted, and people who point at them (such as Richard) downvoted.
As Viliam_Bur says, it is the general pattern that is my subject here, not to heap further opprobrium on the one who posted what I excerpted. Goodness knows I’ve been telling him to his virtual face enough of what I think of him already.
More from Tolkien.
Here is a problem that I regularly face:
I have a hard time terminating certain subroutines in my brain. This most regularly happens when I am thinking about a strategy game or math that I am really interested in. I will continue thinking about whatever it is that is distracting me even when I try not to.
The most visible consequence of this is that it sometimes interferes with my sleep. I usually get to bed at a regular time, but if I get distracted it could take hours for me to get to sleep, even if I cut myself off from outside stimulus. It can also be a problem when I am in a class that I find less interesting than whatever math I was working on before the class.
I know there are drugs to help with sleep, but I am especially interested in a meta-thinking solution to this problem. Is there a way that I can force myself to clear my brain and get it to stop thinking about something for a while?
One idea I had is to give my brain another distracting activity that causes it to think, but has no way to actively stay in my head after the activity is finished. For example, perhaps I could solve a Sudoku or similar logic puzzle? I have not tried this yet, but I will next time I am in this situation.
Any other ideas? Is this a problem many people face?
This is pretty much what meditation is for — minus the “force”, that is.
I use certain videogames for something similar. I’ve collected a bunch of (Nintendo DS, generally) games that I can play for five minutes or so to pretty much reset my mind. Mostly it’s something I use for emotions, but the basic idea is to focus on something that takes up all of that kind of attention—that fully focuses that part of my brain which gets stuck on things.
Key to this was finding games that took all my attention while playing, but had an easy stopping point after five minutes or so of play—Game Center CX / Retro Game Challenge is my go-to, with arcade style gameplay where a win or loss comes up fairly quick.
StepMania is great for this (needs specialized hardware). It needs the mind and the body. When playing on a challenging level, I must pay full attention to the game—if my mind starts focusing on any idea, I lose immediately.
Intensive exercise—I remember P.J. Eby saying he’d use intensive exercise (in his case I think it was running across his house) as a “reset button” for the mind. It’s pretty cheap to try! (I have occasionally done that—pushups, usually—though it’s more often to get rid of angry annoyance than distractions.)
Physical pain will do it. Exercise is one option, but for me it always seems to be the bad “I am destroying my joints” kind of pain so I stop before it hurts enough to reset my thought patterns. Holding a mug of tea that’s almost but not quite hot enough to burn, and concentrating on that feeling to the exclusion of everything else, seems to work decently. A properly forceful backrub is better, though it requires a partner. And if your partner is a sadist then you begin to have many excellent options.
Addressing the sleep half: if meditation or sleep visualization exercises are hard for you, try coloring something really intricate and symmetrical. Like these. The idea is to keep your brain engaged enough to not think about the intrusive thing you were thinking about before, but calm enough to move towards sleep.
I read fiction or easy nonfiction. This distracts me from other thoughts, but isn’t engaging enough to keep me awake.
Alcohol.
I don’t have a citation, but I’ve heard that alcohol will screw with your sleep. Might want to Google if you’re thinking about going that route.
I don’t know if a citation would help—alcohol’s effect on sleep (and other things) is fairly personal. If you don’t already know, you’ll need to experiment and find out how it works for you.
In any case, alcohol is just the easiest of the hit-the-brain-below-the-cortex options. There are other alternatives, too, e.g. sex or stress.
I find reading LW helps with this.
Has anyone had any experience with http://sundayassembly.com ?
I’d love to hear some first-hand accounts. It sounds like all the things I enjoyed about going to church when I was a Christian, without the Christianity part.
If you enjoyed going to church as a Christian, and considered it enough to make this post, then you should probably just go. There is not much penalty for trying.
I go to a UU church, which looks kind of similar. (They are not all atheist, but they are all different things and agree to disagree about theology.) I don’t really enjoy the singing that much, at least not the hymns, and I still enjoy the experience as an atheist. Just don’t expect to get the same level of intelligence or rationality you get from here though. If you are looking for good philosophical discussion, that probably isn’t the place to get it.
Overview of systemic errors in science—wishful thinking, lack of replication, inept use of statistics, sloppy peer review. Probably not much new to most readers here, but it’s nice to have it all in one place. The article doesn’t address fraud very much because it may have a small effect compared to unintentionally getting things wrong.
Account of a retraction by an experiment’s author: doing the decent thing when Murphy attacks. Most painful sentence: “First, we found that one of the bacterial strains we had relied on for key experiments was mislabeled.”
Stock market investment would seem like a good way to test predictive skills; have there been any attempts to apply LW-style rationality techniques to it?
I disagree and hope that more people would update regarding this belief. There is no alpha (risk adjusted excess returns), at least not for you. Here is why:
For all intents and purposes, stock markets are efficient; even if you don’t agree, you would still have to answer the question “what degree of inefficiency is there that will allow you to extract or arbitrage gains?” Your “edge” is going to be very, very small if you even have one.
Assuming you have identified measurable inefficiencies, your trading costs will negate them.
The biggest players have access to better information, both insider and public, at faster speeds than you could ever attain and they already participate in ‘statistical arbitrage’ on a huge scale. This all makes the stock market very efficient, and very difficult for you, the individual investor to game a meaningful edge.
The assumption that one could test for significantly better predictive skills in the stock market would imply that risk-free arbitrage is common. You could just buy one stock and sell an index fund or vice versa, then apply this with the law of large numbers and voila, you are now a millionaire; but alas, this does not commonly happen.
I happen to disagree. I don’t think this statement is true.
First, there are many more financial markets than the stock market. Second, how do you know that stock markets are efficient?
That seems to be a bald assertion with no evidence to back it up, especially given that we haven’t specified what kind of trading we are talking about.
The biggest players have their own set of incentives and limitations, they are not necessarily the best at what they do, and, notably, they are not interested in trades/strategies where the payoffs are not measured in many millions of dollars.
I don’t see how that implies it. Riskless arbitrage, in any case, does not require any predictive skills given that it’s arbitrage and riskless. You test for predictive skills in the market by the ability to consistently produce alpha (properly defined and measured).
Upvoted because your reservations are probably echoed by many.
I’d like to change your mind specifically when it comes to “playing the stock market” for excess returns. My full statement is “There is no alpha (risk adjusted excess returns), at least not for you”. This reflects my belief that while alpha is certainly measurable and some entities may achieve long term alpha, for most people this will not happen and will be a waste of time and money.
First, the OP mentions the stock market; I’m not particularly picking on it. Second, for all intents and purposes for the individual, it is. Think about it this way: instead of saying whether or not the stock market is efficient, like it’s binary, let’s just ask how efficient it is. In the set of all markets, is the stock market among the most efficient markets that exist? I would see no reason why it wouldn’t be. Have you ever played poker with 9 of the best players in the world? Chances are you haven’t, because they aren’t likely to be part of your local game, but the stock market is easy to enter and anyone may participate. While you sit there analyzing your latest buy-low-and-sell-high strategy, you are playing against top-tier mathematicians and computer engineers synergistically working with each other with the backing of institutions. A lone but very smart and rational-thinking programmer isn’t likely to win. Why would you choose to make that the playground for you to test your prediction skills? There are better places, like PredictionBook.
Even dirt-cheap discount brokers charge about $5 a trade, but if you were something of a professional then you could join a prop firm and get even cheaper rates, maybe $0.005 per share. But now you have the problem of maintaining a volume of trades in order to keep that rate. If you are a buy-and-holder you would still need to diversify and balance your portfolio with transaction trades: 1. to prove you statistically did better than the market rather than just riding variance, and 2. to prevent individual-stock risk. If you have any strategy other than buy and hold, you will incur more trading costs.
Any incentives and limitations that big players have are more adverse for the individual. Strategies that are ignored by the truly big players are picked up by the countless mutual fund managers who year after year try to beat the market, yet the majority don’t. What makes an individual think they could do better?
I should rephrase: when I say arbitrage I mean statistical arbitrage. But strong stat arb might as well be just as good as riskless if you truly have an edge. Assume you have a measured alpha of a significant degree and probability. One would essentially be orchestrating a “risk-free” arbitrage by simply applying the alpha-producing strategy and simultaneously shorting S&P ETFs to create a stat arbitrage. But that doesn’t happen commonly, because free lunches go quickly and leave none for you. Strategies by nature are ephemeral; they last only until rational agents exploit them and there is nothing left. For example, there used to be a strategy where monitoring the monthly reported cash inflows to mutual funds could predict upward movement in equity markets. The idea is that with lots of cash, fund managers start to buy. This was exploited until the strategy no longer produced a measurable edge. Unless you have reason to think that you will discover a neat unexploited strategy, you shouldn’t play the stock market; just buy ETFs.
I have personal experience in this industry, and I think I know only one person who has been able to pull it off and is not lying about it. His experience is consistent with my belief that the stock market is getting more efficient: his earnings were greatest during the earlier part of his career and have been steadily declining.
That’s a remarkably low bar.
A great deal of things will not happen “for most people”. Getting academic tenure, for example. Or having a net wealth of $1m. Or having travelled to the Galapagos Islands. Etc., etc.
Yes, but that’s the basic uninformed default choice when people talk about financial markets. It’s like “What do you think about hamburgers? Oh, I think McDonalds is really yucky”. Um, there’s more than that.
If you look at what’s available for an individual American investor with, say, $5-10K to invest, she can invest in stocks or bonds or commodities (agricultural or metals or oil or precious metals or...) or currencies or spreads or derivatives—and if you start looking at getting exposure through ETFs, you can invest into pretty much anything.
The focus on the stock market is pretty much a remnant from days long past.
I don’t know. It depends on how smart and skilled he is.
He might also join forces with some smart friends. Become, y’know, one of those teams of “top tier mathematicians and computer engineers” who eat the lunch of plain-vanilla investors. But wait, if the markets are truly efficient, what are these top-tier people doing in there anyway? :-/
Because the outcomes are direct and unambiguous. Because some people like challenges. Because it’s a way to become rich quickly.
Mutual fund managers are very restricted in what they can do. Besides outright constraints (for example, they can’t go short) they are slaves to their benchmarks.
Oh, no. “Riskless” and “I think it’s as good as riskless” are very, very different things.
That doesn’t get you anywhere near “riskless”. That just makes you hedged with respect to the market, hopefully beta-hedged and not just dollar-hedged.
True, but people show a very consistent ability to come up with new ones when old ones die.
In any case, no one is arguing that you can find a trade or a strategy and then milk it forever. You only need to find a strategy that will work for long enough for you to make serious money off it. Rinse and repeat, if you can. If you can’t, you still have the money from a successful run.
Disclaimer: I day trade, so this might be influenced by defensiveness.
The thinking patterns I’ve learned on LW haven’t really helped me to discover any new edge over the markets. Investment, or speculation, feels more like Go or blackjack as an activity. Being a rationalist doesn’t directly help me notice new trades or pick up on patterns that the analysts I read haven’t already seen.
On the other hand, the most difficult thing about dealing with financial matters is remaining calm and taking the appropriate action. LW techniques have helped me with this a lot. I believe that reading LW has made me a more consistent trader.
I’m not sure that the above was written clearly, so let me try again. My proficiency as a speculator goes up and down based on my state of mind. Reading LW hasn’t made the ups higher, but it’s made me less likely to drop to a valley.
On a tangent, while I’m thinking about it.
Has anyone else just been baldly disbelieved if they mention that they made money in a nontraditional way? The only other time I’ve seen it happen is making money at Vegas. I’ve met people who seem to have ‘The House Always Wins’, or ‘You Can’t Beat The Market’ or ‘Sweepstakes/Lotteries Are A Waste Of Money’ as an article of faith to the point that, presented with a counter example, they deny reality.
At my current level of investment, I probably have received substantial benefit from other skills that seem Less Wrong related that are not predictive, like not panicking, understanding risk tolerance and better understanding the math behind why diversification works.
But I suppose those aren’t particularly unique to Less Wrong even though I feel like reading the site does help me apply some of those lessons.
I would guess that to the extent that some hedge fund uses LW-style rationality techniques to train the predictive skills of their staff, they wouldn’t be public about effective techniques.
Has anyone used fitbit or similar products for tracking activity and sleep?
I used a zeo. Is there any specific question you want to have answered?
Awhile back I posted a comment on the open thread about the feasibility of permanent weight-loss. (Basically: is it a realistic goal?) I didn’t get a response, so I’m linking it here to try again. Please respond here instead of there. Note: most likely some of my links to studies in that comment are no longer valid, but at least the citations are there if you want to look those up.
I think the substance is that there are plenty of people who change their weight permanently. On the other hand the evidence for particular interventions isn’t that good.
None of those address permanent weight loss per se. They all address the more specific problem of permanent weight loss through dietary modification.
A successful approach to weight loss would incorporate a change in diet and exercise habits along with an investigation of the ‘root cause’ of the excess weight i.e. the psychological factor that causes excessive eating (Depression? Stress? Pure habit? etc.)
I also question your implicit premise that “If it ain’t permanent it ain’t worth doing”. That sounds like a rationalization to me. For a woman who’s 25 and looking to maximize her chance of reproductive success (finding a mate), ‘just 5 years’ of weight loss would be extraordinarily superior to no weight loss. Permanent weight loss would be only marginally better.
(Barring you being a metabolic mutant. If you have tried counting calories and it didn’t work for you, then please ignore this post; weight loss is a lot more complicated than how I am about to describe it here.)
Permanent weight loss is possible and feasible; however it will probably require constant effort to maintain.
For example, count your daily caloric intake on myfitnesspal.com (my username is shokke, if you wish to use the social aspect of it too). Eat at a caloric deficit (TDEE minus ~500) until desired weight is attained, then continue counting calories and eat at maintenance (TDEE) indefinitely. If you stop counting calories you will very likely regain that weight.
This requires you to count calories for the rest of your life, or at least until you no longer care about your weight. Or we develop a better method of weight control.
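As a rough illustration of the arithmetic (a sketch only: the 3500 kcal-per-pound figure is a common rule of thumb rather than a precise physiological law, and TDEE estimates are themselves approximate):

```python
def weeks_to_goal(current_lb, goal_lb, tdee_kcal, intake_kcal):
    """Crude estimate assuming ~3500 kcal of sustained deficit
    per pound of weight lost (a rule of thumb, not exact)."""
    daily_deficit = tdee_kcal - intake_kcal
    if daily_deficit <= 0:
        raise ValueError("intake must be below maintenance to lose weight")
    pounds_per_week = daily_deficit * 7 / 3500
    return (current_lb - goal_lb) / pounds_per_week

# A 500 kcal/day deficit works out to roughly 1 lb per week:
print(weeks_to_goal(200, 180, tdee_kcal=2500, intake_kcal=2000))  # ~20 weeks
```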
Is there a lesswrong group on myfitnesspal? Can we make one?
Edit
I’ve just made one
Upvoted for action.
I believe there is a named cognitive bias for this concept, but a preliminary search hasn’t turned anything up: the tendency to use measures or proxies that are easily available rather than the ones that most accurately measure the cared-about outcome.
Anyone know what it might be called?
http://en.wikipedia.org/wiki/Attribute_substitution ?
gwern to the rescue again! I had not seen this article before, thank you.
LW article: the Substitution Principle.
Calling all history buffs:
I have this fragment of a memory of reading about some arcane set of laws or customs to do with property and land inheritance. It prevented landowners from selling their land, or splitting it up, for some reason. This had the effect of inhibiting agricultural development sometime in the feudal era or perhaps slightly after. Anyone know what I’m talking about?
(I’m aware of the opposite problem, that of estates being split up among all children (instead of primogeniture) which caused agricultural balkanization and prevented economies of scale.)
This sounds like the system that France had before the first French Revolution. That is, up until 1789; I’m not sure when it started. I wouldn’t be surprised if a similar system existed in other European countries at around the same time, but I’m not sure which. (I’ve only been reading history for a couple years, and most of it has been research for fiction I wanted to write, so my knowledge is pretty specifically focused.)
Under this system, the way property is inherited depends on the type of property. Noble propes is dealt with in the way you describe—it can’t be sold or given away, and when the owner dies, it has to be given to heirs, and it can’t be split among them very much. My notes say the amount that goes to the main heir is the piece of land that includes the main family residence plus 1/2 to 4/5 of everything else, which I think means there’s a legal minimum within that range that varies by province, but I’m not completely sure. Propes* includes lands and rights over land (land ownership is kind of weird at this time—you can own the tithe on a piece of land but not the land itself, for example) that one has inherited. Noble propes is propes that belongs to a nobleperson or is considered a noble fief.
Commoner inheritance varies a lot by region. Sometimes it’s pretty similar to noble inheritance (all or most of propes must go to the first living heir), sometimes the family can choose one heir to inherit more than the others, sometimes an equal split is required. There’s no law against selling or giving away common (non-noble) propes, but some of the provinces that require an equal split have laws to prevent parents from using gifts during their lifetime to give one child more than the others.
I’m not sure what effect noble property law had on agricultural development. I know France’s agriculture was lagging far behind England’s during the 18th century, but I never saw it attributed to this, at least not directly. (The reasons I can remember seeing are tenant farming with short tenures, and farmers having insufficient capital to buy newer tools.) The commoner inheritance system did fragment the land holdings, as you said. The main problems I remember hearing about with that were farms becoming too small to support a person (so the farmers would also work part-time as tenant farmers or day laborers, or abandon the farm and leave), and limiting social mobility by requiring wealthy commoners to divide their wealth with each new generation.
Most of this is coming from notes I took on the book Marriage and the Family in 18th Century France by Traer. I’m not sure how much you wanted to know, so ask if there’s anything you’re curious about that I didn’t include, and I’ll see if I can dig it up. If you want to research this, my impression is that finding good history books about a specific place is much easier if you can read the language spoken there, so it might be worth checking what the property laws were in places that speak the languages you know. If you need sources in English, having access to a university library helps a lot. When looking for information on France during this time period, “Ancien Regime” and “early modern” are useful keywords.
Lease-like arrangements that are practically selling are allowed, though. The only one I can think of at the moment is called alienation—basically you sell it, except the new “owners” (or their heirs or whoever they sell the land to) pay your family rent for the land, forever. Something similar can be done with money, as a sort of loan that is never paid off. (These are called rentes foncières and rentes constituées, respectively—in case you ever want to look up more information.) They’re technically movable property, but they’re legally counted as propes, and treated the same way as noble land.
I always wondered why people didn’t just buy a square inch of land if that’s all it took to be noble.
Yeah, at least in France, land can’t make you noble, even if it’s a whole noble fief with a title attached. (Then you’re just a rich commoner who owns a title but can’t use it.) You could become noble by holding certain jobs for a long enough time (usually three generations), though. And people did buy those. (Not through bribes—the royal government sold certain official posts to raise revenues, so it was legal.)
There was also a sort of real estate boom after the revolutionary government passed some laws to make it easier for commoners to buy land, which was sort of like what you describe—all the farmers who could afford it would buy all the land they could at higher values than it was worth, because it made them feel like they were rich landowners.
Adam Smith reported that this was how the law worked in the Spanish territories in the Americas, in order to ensure the continued existence of a wealthy and powerful landed aristocracy and so maintain social stability. He theorized that this policy was the reason that the Spanish territories were so much poorer than the English territories, even though the former had extensive gold deposits and the latter did not.
Yeah, I did some more research; apparently they were called “fee tails” or “entails”. They were designed to keep large estates “in the family”, even if that ended up being a burden to future generations.
As I want to fix my sleep cycle, I am looking for a proper full-spectrum bulb to screw into my desk lamp. But when I shop for “full spectrum” lights, it turns out their spectra only have three peaks and do not come anywhere near a black body. Is there something a student like me can afford, i.e. for less than a small fortune? E27 socket, available in the EU.
I can ask more generally: what is the lighting situation at your desk and at your home? I aim for lighting very low in blue in the evening and as close to full daylight as possible during work. For that I have f.lux on my computers and want to put a full-spectrum bulb in my desk lamp. I do not know what I should do for my room; I am thinking of having a usual ‘warm’ lamp for the whole room and a quite orange light for reading late at night.
Hope I made myself clear.
What evidence do you have that full spectrum light is beneficial? It seems you already know that it’s the blue spectrum that primarily controls the circadian rhythm.
No particular evidence, but the closer light is to natural sunlight, the better it looks. I could also argue that the closer I come to ‘natural’ conditions (that is, very sun-like light), the better I should fare.
Orange goggles/glasses for late at night aren’t that bad and are very cheap. I don’t have a good solution for the full spectrum issue. MIRI is getting by with the regular full spectrum bulbs AFAIK (is there a followup on the very bright lights experiment?)
I use a bedside lamp with a full-size Edison screw (I think E27 is full size). Daylight-spectrum bulbs are readily available in all manner of fittings on eBay. The last lot we got were six 30W bulbs (150W equivalent) with UK bayonet fittings for £5 each (though I don’t use something that bright for my bedside lamp).
I have a question about Effective Altruism:
The essence of EA is that people are equal, regardless of location. In other words, you’d rather give money to poor people in faraway countries than to people in your own country if it’s more effective, even though the latter feel intuitively closer to you. People care more about their own country’s citizens even though they may not know them personally. Often your own country’s citizens are more similar to you, culturally and in other ways, than people in faraway countries, and you might feel a certain bond with them. There are obviously examples of this kind of thinking concretely affecting people’s actions. In the Congo Crisis (1960–1966), when the rebels started taking white hostages, there was an almost immediate military operation conducted by the United States and Belgium, and the American and European civilians in the area were quickly evacuated. Otherwise this crisis was mostly ignored by Western powers, and the UN operation was much more low-key than the rescue operation.
In Effective Altruism, should how much you intuitively care about other people be a factor in how much you allocate resources to them?
Can you take this kind of thinking to its logical conclusion: you shouldn’t allocate any money or resources to people you feel are close to you, like your family or friends, because you can more effectively minimize suffering by allocating those resources to faraway people?
Note, I’m not criticizing effective altruism or actually supporting this kind of thinking. I’m just playing a devil’s advocate.
A possible counterargument: one’s family and friends are essential to one’s mental well-being and you can be a better effective altruist if you support your friends and family.
Maybe it is a problem of purchasing fuzzies and utilons together, and also being hypocritical about it.
Essentially, I could do things that help other people and me, or I could do things that only help other people, where I don’t get anything (except a good feeling) from it. The latter set contains many more options, and also more diverse options, so it is pretty likely that the efficient solution for maximizing global utility is there.
I am not saying this to argue that one should choose the latter. Rather my point is that people sometimes choose the former and pretend they chose the latter, to maximize signalling of their altruism.
“I donate money to ill people, and this is completely selfless because I am healthy and expect to remain healthy.” So, why don’t you donate to ill people in poor countries instead of your neighborhood? Those people could buy a greater increase in health for the same cost. “Because I care about my neighbors more. They are… uhm… my tribe.” So you also support your tribe. That’s not completely selfless. “That’s a very extreme judgement. Supporting people in my tribe is still more altruistic than what many other people do, so what’s your point?”
I guess my point is, if your goal is to support your tribe, just be honest about it. Take a part of your budget and think about the most efficient way of supporting your tribe. And then take another part of your budget and spend it on effective altruism. (The proportion of these two parts, that’s your choice.) You will be helping people selflessly and supporting your tribe, probably getting more points on each scale than you are getting now.
“But I also want a recognition of my tribe for my support. They will reward me socially for helping in-tribes, but will care less about me helping out-tribes.” Oh, well. That’s even less selfless. I am not judging you here, just suggesting to make another sub-budget for maximizing your prestige within the tribe and optimize for that goal separately.
“Because that’s too complicated. Too many budgets, too much optimization.” Yeah, you have a point.
Also, if it turns out that I have three sub-budgets as you describe here (X, Y, Z) and there exist three acts (Ax, Ay, Az) which are optimal for each budget, but there exists a fourth act B which is just-barely-suboptimal in all three, it may turn out that B is the optimal thing for me to do despite not being optimal for any of the sub-budgets. So optimizing each budget separately might not be the best plan.
Then again, it might.
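A toy sketch of that caveat, with invented numbers: suppose I can perform only one act, each of Ax, Ay, Az scores 10 on exactly one budget and 0 on the others, while B scores 9 on all three.

```python
# Hypothetical scores of each act on the three sub-budgets (X, Y, Z).
acts = {
    "Ax": (10, 0, 0),
    "Ay": (0, 10, 0),
    "Az": (0, 0, 10),
    "B":  (9, 9, 9),
}

# Optimising each budget separately never picks B...
best_per_budget = [max(acts, key=lambda a: acts[a][i]) for i in range(3)]
# ...but if only one act can be performed, B maximises the combined total (27 vs 10).
best_overall = max(acts, key=lambda a: sum(acts[a]))
print(best_per_budget, best_overall)  # ['Ax', 'Ay', 'Az'] B
```

Whether real charitable options ever look like B is, of course, an empirical question.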
Generally, you are right. But in effective altruism, the axis “helping other people” is estimated to do a hundred times more good if you use a separate budget for it.
This may be suboptimal for the other axes, though. Taking the pledge and having your name on the list could help along the “signalling philanthropy” axis.
Fair point.
Expanding on this, isn’t there an aspect of purchasing fuzzies in the usual form of effective altruism? I know there’s been a lot of talk of vegetarianism and animal-welfare on LW, but there’s something in it that’s related to this issue.
At least some people believe it’s been pretty conclusively proven that mammals and some avians have a subjective experience and the ability to suffer, in the same way humans do. In this way humans, mammals, and those avian species are equal—they have roughly the same capacity to suffer. Also, with over 50 billion animals used to produce food and other commodities every year, one could argue that the scope of suffering in this sphere is greater than among humans.
So let’s assume that the animals used in the livestock industry have an ability to suffer equal to that of humans. Let’s assume that the scope of suffering is greater in the livestock industry than among humans. Let’s also assume that we can reduce this suffering more easily than the suffering of humans. I don’t think it’s a stretch to say that these three assumptions could actually be true, and this post analyzed these factors in more detail. From these assumptions, we should conclude not only that we should become vegetarians, as this post argues, but also that animal welfare should be our top priority. It is our moral imperative to allocate all the resources we dedicate to buying utilons to animal welfare, until its marginal utility drops below that of human welfare.
Again, just playing a devil’s advocate. Are there reasons to help humans other than the fact that they belong to our tribe more than animals do? The counterarguments raised in this post by RobbBB are very relevant, especially 3. and 4. Maybe animals don’t actually have the subjective experience of suffering, and what we think of as suffering is only damage-avoiding and damage-signaling behavior. Maybe sapience makes true suffering possible in humans, and that’s why animals can’t truly suffer on the same level as humans.
I had this horrible picture of a future where human-utilons-maximizing altruists distribute nets against mosquitoes as the most cost-efficient tool to reduce the human suffering, and the animal-utilons-maximizing altruists sabotage the net production as the most cost-efficient tool to reduce the mosquito suffering...
That’s a worthwhile concern, but I personally wouldn’t make the distinction between animal-utilons and human-utilons. I would just try to maximize utilons for conscious beings in general. Pigs, cows, chickens and other farm animals belong in that category; mosquitoes, insects and jellyfish don’t. That’s also why I think eating insects is on par with vegetarianism: you’re not really hurting any conscious beings.
Since we’re playing the devil’s advocate here: much more important than geographical and cultural proximity to me would be how many values I share with these people I’m helping, were I ever to come in even remote contact with them or their offspring.
Would you effective altruist people donate mosquito nets to baby-eating aliens if it cost-effectively relieved their suffering? If not, where do you draw the line in value divergence? At humans?
LWers may appreciate this Onion-style satire: “Another Empty, Lifeless Planet Found”.
So, what’s all this about a Positivist debacle I keep hearing about? Who were the positivists, what did we have in common with them, what was different, and how and why did they fail?
Sounds exactly like us...
I’m no expert on the history of epistemology, but this may answer some of your questions, at least as they relate to Eliezer’s particular take on our agenda.
We consider probabilities authentic knowledge. Since we are Bayesians and not frequentists, those probabilities are sometimes about questions which cannot be scientifically tested. Science requires repeatable verification, and our probabilities don’t stand up to that test.
I assume this was downvoted for inaccuracy. If so, I would like to know what you think is wrong, please.
How can I learn to sleep in a noisy environment?
For several years now I’ve lived in loud apartments, where I can often hear conversations or music late into the night.
I often solve this problem by wearing earplugs. However, I don’t want to sleep with earplugs every night, so I’ve made a number of attempts to adjust to the noise without them, either by going “cold turkey” for as long as I can stand it, or by progressively increasing my exposure to night-time noise.
Despite several years of attempts, I don’t think I’ve habituated at all. What gives?
Other information that might be relevant:
I adjust fine to noise during the day, and to other stimuli at night.
I have no mental illness.
“Information-less” noise is fine (for example, traffic or the hum of an appliance). Problem noises involve voices or music, or things like video games.
Since you are already fine with white noise, you should try using white noise to drown out the music or voices. A quick internet search for white noise led me to SimplyNoise, where you can stream white noise over the internet. If not, then try a phone app.
I don’t need such a thing for sleeping, but I find SimplyNoise gives a satisfactory sound with a much steeper fall-off with frequency than white noise (a flat spectrum of energy vs. frequency) or pink noise (a 3 dB fall-off per octave), both of which sound unpleasantly harsh to me. They also have a few soundscapes (thunderstorm, river, etc.). The app is not free, but cheap, and there are also pay-what-you-want mp3 download files.
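If anyone wants to generate these noise colours locally instead of streaming, here is a minimal sketch (assuming the numpy and soundfile packages are installed; the filenames and the 10-second length are arbitrary). It shapes white noise in the frequency domain to a chosen spectral slope: 0 dB/octave is white, −3 dB/octave is pink, and −6 dB/octave is the steeper “brown” style fall-off mentioned above.

```python
import numpy as np
import soundfile as sf  # pip install soundfile

def coloured_noise(n_samples, slope_db_per_octave, rate=44100):
    """White noise reshaped so its power spectrum falls off at the given dB-per-octave slope."""
    spectrum = np.fft.rfft(np.random.randn(n_samples))
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / rate)
    freqs[0] = freqs[1]  # avoid 0**negative at DC
    # A power slope of s dB/octave is roughly power ~ f**(s/3), i.e. amplitude ~ f**(s/6).
    shaped = spectrum * freqs ** (slope_db_per_octave / 6.0)
    signal = np.fft.irfft(shaped, n=n_samples)
    return signal / np.max(np.abs(signal))  # normalise to [-1, 1]

rate = 44100
sf.write("white.wav", coloured_noise(10 * rate, 0.0), rate)
sf.write("pink.wav",  coloured_noise(10 * rate, -3.0), rate)
sf.write("brown.wav", coloured_noise(10 * rate, -6.0), rate)
```

Loop the resulting files in any audio player; the −6 dB version should sound noticeably softer in the high frequencies.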
I’ve been on a sci-fi audio kick lately and was wondering: are there any good sites other than sffaudio?
Escape Pod is good.
Let’s assume society decides that eating meat from animals lacking self-awareness is ethical, that anything with self-awareness is not ethical to eat, and that we have a reliable test to tell the difference. Is it ethical to deliberately breed tasty animals to lack self-awareness, either before or after their species has developed self-awareness?
My initial reaction to the latter is ‘no, it’s not ethical, because you would necessarily be using force on self-aware entities as part of the breeding process’. The first part of the question seems to lean towards ‘yes’, but this response definitely sets off an ‘ugh’ field in my mind just attempting to consider the possible implications, so I’m not confident at all in my line of reasoning.
Thoughts from others?
I think any question of the form “Assume X is ethical, is X’ also ethical?” is inherently malformed. If my ethics do not follow X, then the change in my ethics which causes me to include X may be very relevant to X’.
I don’t think anyone who is a vegetarian regardless of self-awareness would be able to answer the question you are asking.
I think the big question that implies this one is “Should we eat baby humans? Why?”
I believe the answer is “No, because there is no convenient place to draw the line between baby and adult, so we should put the line at the beginning, and because other people may have strong emotional attachment to the baby.”
I think the first part of my reason is eliminated by your “reliable test.” If the test is completely reliable, that is a very good place to draw the line.
The second part is not going away. It evolved in us over a very long time; however, it is not clear whether people will develop the same attachment to non-human babies. I think that our attachment to non-humans is much lower, and that there is not a significant difference between that attachment before and after self-awareness.
However, the question asked assumes that our ethics distinguish between creatures with and without self awareness. If that distinction is caused by us having different levels of emotional attachment to the animal depending on its self awareness, then it would change my answer.
As for the first part, I would say that it’s fairly common for an individual and a society to not have perfectly identical values or ethical rules. Should I be saying ‘morals’ for the values of society instead?
I would hope that ethical vegetarians can at least give me the reasons for their boundaries. If they’re not eating meat because they don’t want animals to suffer, they should be able to define how they draw the line where the capacity to suffer begins.
You do bring up a good point—most psychologists would agree that babies go through a period before they become truly ‘self-aware’, and I have a great deal of difficulty conceiving of a human society that would advocate ‘fresh baby meat’ as ethical. Vat-grown human meat, I can see happening eventually. Would you say the weight there is more on the side of ‘this being will, given standard development, gain self-awareness’, or on the side of ‘other self-aware beings are strongly attached to this being and would suffer emotionally if it died’? The second one seems to be more the way things currently function—farmers remind their kids not to name the farm animals because they might end up on their plates later. But I think the first one can be more consistently applied, particularly if you have non-human (and especially non-cute) intelligences.
‘This being will, given standard development, gain self-awareness’ is a common reason that I missed.
I am partially confused by it, because this notion of “standard development” is not easily defined, like “default” in negotiations.
You could put strict statistical definitions around it if you wanted, but the general idea is, ‘infants grow up to be self-aware adults’.
This may not always be true for exotic species. Plenty of species in nature, for example, reproduce by throwing out millions of eggs, spores, or what have you, of which only a small fraction grow up to be adults. Ideally, any sort of rule you’d come up with should be universal, regardless of the form of intelligence.
At some point, some computer programs would have to be considered to be people and have a right to existence. But at what stage of development would that happen?
I’ve got a few questions about Newcomb’s Paradox. I don’t know if this has already been discussed somewhere on LW or beyond (granted, I haven’t looked as intensely as I probably should have) but here goes:
If I were approached by Omega and he offered me this deal and then flew away, I would be skeptical of his ability to predict my actions. Is the reason that these other five people two-boxed and got $1,000 that Omega accurately predicted their actions? Or is there some other explanation… like Omega not being a supersmart being and never putting $1 million in the second box? If I had some evidence that people had actually one-boxed and gotten the $1 million, then I would put more weight on the idea that he actually has $1 million to spare, and more weight on the possibility that Omega is a good/perfect predictor.
If I attempt some sort of Bayesian update on this information (the five previous people two-boxed and got $1,000), the two explanations seem to account for it equally well. The probability of Omega putting the $1,000 in the previous five people’s boxes given that he’s a perfect predictor seems to be observationally equivalent to the probability that he simply never puts $1 million in the second box.
Then again, if Omega actually knew my reasoning process, he might actually provide me with the evidence that would make me choose to one-box over two-box.
It also seems to me that if my subjective confidence in Omega’s abilities of prediction is over 51%, then it makes more sense to one-box than to two-box… if my math/intuition about this is correct. Let’s say my confidence in Omega’s powers of prediction is at 50%. If I two-box, there are two possible outcomes: I either get only $1,000 or I get $1,001,000. Both outcomes have a 50% chance of happening given my subjective prior, so the expected value is 50% × $1,000 + 50% × $1,001,000. This sums to $501,000 in total utility/cash.
If I one-box, there are also two possible outcomes: I either get $1,000,000 or I lose $1,000. Both outcomes, again, have a 50% chance of happening given my subjective probability about Omega’s powers of prediction, so the expected value is 50% × $1,000,000 + 50% × (-$1,000). This sums to $499,500 in total utility.
Does that seem correct, or is my math/utility off somewhere?
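To make my own numbers easier to check, here is the comparison written out as a rough sketch (I use absolute payoffs for both choices, so the one-box-and-Omega-wrong outcome is $0 rather than -$1,000, and p is my subjective probability that Omega predicts my choice correctly):

```python
def ev_two_box(p):
    # Omega right (he predicted two-boxing): $1,000; Omega wrong: $1,001,000
    return p * 1_000 + (1 - p) * 1_001_000

def ev_one_box(p):
    # Omega right (he predicted one-boxing): $1,000,000; Omega wrong: $0
    return p * 1_000_000

for p in (0.50, 0.5005, 0.51):
    print(p, ev_two_box(p), ev_one_box(p))
# p = 0.50   -> 501,000 vs 500,000 (two-boxing ahead)
# p = 0.5005 -> 500,500 vs 500,500 (break-even: 1,001,000 / 2,000,000)
# p = 0.51   -> 491,000 vs 510,000 (one-boxing ahead)
```

So under this way of setting it up, the crossover is just above 50%, which seems roughly consistent with the “over 51%” intuition.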
Lastly, has something like Newcomb’s Paradox been attempted in real life? Say with five actors and one unsuspecting mark?
I had a random-ish thought about programming languages, which I’d like comments on: It seems to me that every successful programming language has a data structure that it specialises in and does better than other languages. Exaggerating somewhat, every language “is” a data structure. My suggestions:
C is pointers
Lisp is lists (no, really?)
Ruby is closures
Python is dicts
Perl is regexps
Now this list is missing some languages, for lack of my familiarity with them, and also some structures. For example, is there a language which “is” strings? And on this model, what is Java?
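As a small aside on the “Python is dicts” entry, here is a toy illustration (nothing rigorous, just showing how dicts turn up even when you never write one explicitly):

```python
def greet(**kwargs):          # keyword arguments arrive as a dict
    return kwargs

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

p = Point(1, 2)
print(greet(name="Ada", lang="Python"))  # {'name': 'Ada', 'lang': 'Python'}
print(p.__dict__)                        # {'x': 1, 'y': 2} -- attributes live in a dict
print(type(vars()))                      # <class 'dict'> -- so does the module namespace
```

Whether that makes Python “be” dicts any more than Perl “is” hashes is debatable, of course.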
Well, different languages are based on different ideas. Some languages explore the computational usefulness of a single data structure, like APL with arrays or Forth with stacks. Lisp is pretty big, but yes you could say it emphasizes lists. (If you’re looking for a language that emphasizes strings, try SNOBOL or maybe Tcl?) Other languages explore other ideas, like Haskell with purity, Prolog with unification, or Smalltalk with message passing. And there are general-purpose languages that don’t try to make any particular point about computation, like C, Java, JavaScript, Perl, Python, Ruby, PHP, etc.
I don’t think this idea works.
Pointers in C aren’t data structures—they are a low-level tool for constructing data structures. Neither closures nor regexps are “data structures”. And Perl was historically well-known for relying on hashes, which you assigned to Python as dicts.
Certainly each programming language has a “native” programming style that it usually does better than other languages—but that’s a different thing.
Java is classes—a huge set of standardized classes, so for most things you want to do, you choose one of those standard classes instead of deciding “which one of the hundred libraries made for this purpose should I use in this project?”.
At least, this was the case until the set of standardized classes became so huge that it often contains two or three different ways to do the same thing, and for web development external libraries are used anyway. (So we have AWT, Swing and JavaFX; java.io and java.nio; but we are still waiting for lambda functions.)
Different languages are good at different things. For some languages it happens to be a data structure:
Lisp is lists
Tcl is strings
APL is arrays
Forth is stacks
SQL is tables
Other languages are good at something specific which isn’t a data structure (Haskell, Prolog, Smalltalk etc.) And others are general languages that don’t try to make any particular point about computation (C, Java, JavaScript, Perl, Python, Ruby etc.)
I’m not sure R fits this metaphor—the closest I can get is “R is CRAN”, but the C?AN concept (CPAN, CRAN, CTAN, …) is not unique to R. Hmm… maybe R is data.frames. Java is prepare your anus for objects.