Open thread, Mar. 23 - Mar. 31, 2015
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
[Cross-Posted from the “Welcome to LW” thread] I’m a long-time user of LW. My old account has ~1000 karma. I’m making this account because I would like it to be tied to my real identity.
Here is my blog/personal-workflowy-wiki. I’d like to have 20 karma, so that I can make cross-posts from here to the LW Discussion.
I’m working on a rationality power tool. Specifically, it’s an open-source workflowy with revision control and general graph structure. I want to use it as a wiki to map out various topics of interest on LW. If anybody is interested in working on (or using) rationality power tools, please PM me, as I’ve spent a lot of time thinking about them, and can also introduce you to some other people who are interested in this area.
EDIT: Added the missing links.
EDIT: First cross-post: Personal Notes On Productivity (A categorization of various resources)
Yep, I’m interested!
Rationality: From AI to Zombies is a nice name, but unfortunately, the “from A… to Z...” aspect sometimes does not survive translation. Specifically, in Slovak, the translation of “artificial intelligence” does not start with “A”. (At least the “zombies” part is okay.)
Are there other words that could be used instead of “AI” in this context? (Please check in the Google Translate link whether they start with “A” after translation.) Thanks!
“Od automatov po zombie” ?
Not an ideal variant, but I am afraid there are not many possibilities here…
If automatov means a combination of “automation” and “machine” as its etymology and translation seem to indicate, that would be a very good choice.
Other names we considered included “Rationality: From A to Z” (note there are 26 sequences in the book, lettered from A to Z), “From Atoms (atómy) to Zombies”, and “From Algorithms (algoritmy) to Zombies.” It sounds like “automatov” below would work better, but I don’t speak Slovak so I can’t comment on these options’ connotations/associations.
Thanks. So the good choices seem to be “atoms”, “automatons”, and “algorithms”.
What is your opinion, LessWrong hive mind? Which one of these three words feels most related to the Sequences?
[pollid:842]
“Algorithms” feels more related to the Sequences, but may not be the strategic choice. I don’t expect non-math/CS types to go “yay, a book about algorithms!”.
I am more concerned about the lack of specific algorithms in the book. If I remember correctly, there is no pseudocode anywhere. It is only metaphorically that the whole book is about human thinking algorithms, etc. So using the word “algorithm” in the title feels like a false promise.
EDIT: Okay, the hive mind has spoken, and I accept the “algorithms”. Thanks to everyone who voted!
I never thought of that, but that’s a great question. We have a similar problem in Croatian, where AI would be translated “Umjetna Inteligencija” (UI). I think we can also use the suggested title “From Algorithms to Zombies” once someone decides to make a Croatian/Serbian/Bosnian translation.
An automaton seems too similar to a zombie to give the “diverse mix of things” impression a “from A to Z” title should give off. I’d think the book was about mindlessness, and presumably how rational people can avoid behaving like the mindless automata and zombies that make up the common rabble, which isn’t exactly how I would characterise the sequences, even if EY does sometimes talk about being a PC instead of an NPC.
Funny how I too was thinking about translating that book, into a language used in the same broader region. I think I will end up proposing to EY that it be a derivative work made with his permission rather than a direct translation.
For example, I don’t want to use the term “rational”, due to its Straw Vulcan connotations. I would rather use terms that could be roughly translated back as “according to reason” or “reasonable”. I think my audience would consider AI and Zombies far too geeky, so I would title the book “living according to reason” or “making reasonable decisions”. I would also rip out the examples taken from science and instead use the more everyday-life examples gleaned from the threads on LW.
I too would recommend that you not attempt a direct translation. It would not go down well. Write a simplified and culturally more compatible book with different terminology. Avoid using the word “rational”.
I mean, if I told a peasant, “Look, this is not a rational way to go about planning your agriculture,” he would tell me to fuck off, you intellectual city snob. If I instead said the equivalent of “this is not a reasonable or sensible or clever way,” he would be more likely to listen.
Rationality (the term) carries too many “Enlightenmentist” connotations in this region; it feels like enlightened absolutism, from King Joe II to the Soviets. Part of the issue is that rationality has always been a foreign import in this region, and it rubs people’s pride the wrong way. However, people like the idea of being “shrewd”, outsmarting others, or solving problems the clever way: finding backdoors, cheat codes, low-hanging fruit. You just need to present it as something a vulpine, “pfiffig” (crafty), street-smart shepherd kid would do (like in The Simple Truth), rather than something a high-brow academic would do, which feels too foreign and too haughty in this region. People here tend to associate the term “rational” with Marcos Sophisticus from The Simple Truth essay.
If you try a direct translation, you will essentially be trying to jump over multiple levels of development / progress. That book was more or less written for Silicon Valley. Even before the book, those people were already different (smarter? geekier?) than the people we are used to. So I think it is better to write a more accessible version of it.
Thanks, but the book is already translated. :D
I just want to add the recent changes (which include the change of the title; that’s why I am asking here)… and then perhaps convert it to TeX if I get some help with producing the same layout as the original. Then MIRI could sell it along with the original.
(And of course a German translation would be more useful, but I am not qualified to do one.)
Zendo, the game of inductive logic, has been discussed many times on Less Wrong. To make things easier for new players, I made a web application that generates Koans of several difficulty levels. You can find it here.
Nice!
It would also be cool to write a program to play Zendo, as master. Perhaps using character-strings rather than Icehouse pieces. That would probably train induction faster than playing against humans, since there’s less social overhead. Though I’m doubtful that that training would generalize to things that aren’t Zendo. If one did this, it would probably also be best to get a larger set of possible predicates, and maybe construct rules recursively as logic sentences, so it’s harder to just learn the program’s rule-distribution.
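Here is a minimal sketch of what such a master might look like in Python, over character-string koans. The predicate pool and rule construction are illustrative assumptions only, and far too small to stop a player from just learning the rule distribution:

```python
import random

# Minimal sketch of a Zendo master over character-string koans.
# The predicate pool and rule construction are illustrative assumptions.
PREDICATES = {
    "contains a vowel": lambda s: any(c in "aeiou" for c in s),
    "has even length": lambda s: len(s) % 2 == 0,
    "starts and ends with the same letter": lambda s: s[:1] == s[-1:],
    "contains a doubled letter": lambda s: any(a == b for a, b in zip(s, s[1:])),
}

def random_rule():
    """Pick a secret rule: a single predicate or a conjunction of two."""
    names = random.sample(list(PREDICATES), k=random.choice([1, 2]))
    preds = [PREDICATES[n] for n in names]
    return " and ".join(names), (lambda s: all(p(s) for p in preds))

def play():
    description, rule = random_rule()
    print("I have chosen a secret rule. Submit koans; type 'quit' to give up.")
    while True:
        koan = input("koan> ").strip()
        if koan == "quit":
            print("The rule was:", description)
            return
        print("has the Buddha-nature" if rule(koan) else "does not")

if __name__ == "__main__":
    play()
```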
How do you rank the difficulty of Koans? My intuition does a very good job by now, but cashing it out has always resulted in obviously wrong corner cases.
“A Koan has the Buddha-nature if and only if all the pieces are ungrounded, except for blue pieces.” is unclear to me. I am not sure whether blue pieces must be grounded or may be grounded.
Nice job!
The method I used is very close to this one, but what you say is true. There are edge-cases that aren’t quite right. For instance, the most difficult “medium” Koans seem more difficult than the least difficult “hard” Koans.
Would “A Koan has the Buddha-nature if and only if all the non-blue pieces are ungrounded, and all the blue pieces are grounded” be less ambiguous?
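To make the comparison concrete, here are both readings written out as predicates (a sketch; each piece is modeled as a hypothetical (color, grounded) pair):

```python
# Each piece is modeled as a hypothetical (color, grounded) pair.

def reworded_rule(pieces):
    # "All non-blue pieces are ungrounded, and all blue pieces are grounded."
    return all(
        grounded if color == "blue" else not grounded
        for color, grounded in pieces
    )

def weaker_reading(pieces):
    # The other reading of the original wording: blue pieces MAY be grounded,
    # i.e. only the non-blue pieces are constrained at all.
    return all(not grounded for color, grounded in pieces if color != "blue")
```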
Thanks.
One thing I’ve been wondering about deep neural networks: to what extent are neural networks novel and non-obvious? To what extent has evolution invented and thus taught us something very important to know for AI? (I realize this counterfactual is hard to evaluate.)
That is, imagine a world like ours but in which, for some reason, no one had ever been sufficiently interested in neurons & the brain to make the basic findings about neural network architecture and its power, as Pitts & McCulloch did. Would anyone reinvent them or any isomorphic algorithm, or discover superior statistical/machine-learning methods?
For example, Ilya comments elsewhere that he doesn’t think much of neural networks inasmuch as they’re relatively simple, ‘just’ a bunch of logistic regressions wired together in layers and adjusted to reduce error. True enough—for all the subtleties, even a big ImageNet-winning neural network is not that complex to implement; you don’t have to be a genius to create some neural nets.
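To make the “stacked logistic regressions” point concrete, here is a minimal numpy sketch of a two-layer network with untrained random weights; no claim that this matches any particular ImageNet architecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A two-layer feedforward network: each layer is literally a bank of
# logistic regressions applied to the previous layer's outputs.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(32, 784)), np.zeros(32)  # hidden layer
W2, b2 = rng.normal(size=(10, 32)), np.zeros(10)   # output layer

def forward(x):
    h = sigmoid(W1 @ x + b1)     # 32 logistic regressions over the input
    return sigmoid(W2 @ h + b2)  # 10 logistic regressions over the hidden units

x = rng.normal(size=784)  # stand-in for a flattened 28x28 image
print(forward(x))
```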
Yet, offhand, I’m having a hard time thinking of any non-neural-network algorithms which operate like a neural network in putting together a lot of little things in layers and achieving high performance. That’s not like any of your usual regressions or tests; multi-level models aren’t very close; random forests, bagging, and factor analysis may be universal or consistent, but are ‘flat’...
Nor do I see many instances of people proposing new methods which turn out to just be a convolutional network with nodes and hidden layers renamed. (A contrast here would be Turing’s halting theorem: it seems like you can’t throw a stick among language or system papers without hitting a system complicated enough to be Turing-complete and hence undecidable, and like there was a small cottage industry post-Turing of showing that yet another system could be turned into a Turing machine or a result could be interpreted as proving something well-known about Turing machines.) There don’t seem to be ‘multiple inventions’ here, as if the paradigm were non-obvious without the biological inspiration.
So if humanity had had no biological neural networks to steal the general idea and as proof of feasibility, would machine learning & AI be far behind where they are now?
This 2007 talk by Yann LeCun, Who is Afraid of Non-Convex Loss Functions? seems very relevant to your question. I’m far from an ML expert, but here’s my understanding from that talk and various other sources. Basically there is no theoretical reason to think that deep neural nets can be trained for any interesting AI task, because they are not convex so there’s no guarantee that when you try to optimize the weights you won’t get stuck in local minima or flat spots. People tried to use DNNs anyway and suffered from those problems in practice as well, so the field almost gave it up entirely and limited itself to convex methods (such as SVM and logistic regression) which don’t have these optimization problems but do have other limitations. It eventually turned out that if you apply various tricks, good enough local optima can be found for DNNs for certain types of AI problems. (Far from “you don’t have to be a genius to create some neural nets”, those tricks weren’t easy to find otherwise it wouldn’t have taken so long!)
Without biological neural networks as inspiration and proof of feasibility, I guess people probably still would have had the idea to put things in layers and try to reduce error, but would have given up more completely when they hit the optimization problems, and nobody would have found those tricks until much later when they exhausted other approaches and came back to deep nets.
I don’t think we would be that far behind.
NNs had lost favor in the AI community after 1969 (Minsky’s paper) and have only become popular again in the last decade. See http://en.wikipedia.org/wiki/Artificial_neural_network
The only crossover that comes to mind for me is the vision deep learning ‘discovering’ edge detection. There also is some interest in sparse NN activation.
Yes, I’m familiar with the history. But how far would we be without the neural network work done since ~2001? The non-neural-network competitors on Imagenet like SVM are nowhere near human levels of performance, Watson required neural networks, Stanley won the DARPA Grand Challenge without neural networks because it had so many sensors but real self-driving cars will have to use neural networks, neural networks are why Google Translate has gone from roughly Babelfish levels (hysterically bad) to remarkably good, voice recognition has gone from mostly hypothetical to routine on smartphones...
What major AI achievements have SVMs or random forests racked up over the past decade comparable to any of that?
NNs’ connection to biology is very thin. Artificial neurons don’t look or act like regular neurons at all. But as a coined term to sell your research idea, it’s great.
NNs are popular now for their deep learning properties and ability to learn features from unlabeled data (like edge detection).
Comparing NNs to SVMs isn’t really fair. You use the tool best for the job. If you have lots of labeled data, you are more likely to use an SVM. It just depends on what problem you are being asked to solve. And of course you might feed an NN’s output into an SVM or vice versa.
As for major achievements—NNs are leading for now because 1) most of the world’s data is unlabeled and 2) automated feature discovery (deep learning) is better than paying people to craft features.
I am well aware of that. Nevertheless, as a historical fact, they were inspired by real neurons, they do operate more like real neurons than do, say, SVMs or random forests, and this is the background to my original question.
ImageNet is a lot of labeled data, to give one example.
There is a difference between explaining, and explaining away. You seem to think you are doing the latter, while you’re really just doing the former.
SVM training is O(n^3) in the number of examples; if you have lots of data you shouldn’t use SVMs.
What year do you put the change in google translate? It didn’t switch to neural nets until 2012, right? Did anyone notice the change? My memory is that it was dramatically better than babelfish in 2007, let alone 2010.
Good question… I know that Google Translate began as a pretty bad outsourced translator (SYSTRAN) because I had a lot of trouble figuring out when Translate first came out for my Google survival analysis, and it began being upgraded and expanded almost constantly from ~2002 onwards. The 2007 switch was supposedly from the company SYSTRAN to an internal system, but what does that mean? SYSTRAN is a proprietary company which could be using anything it wants internally, and admits it’s a hybrid system. The 2006 beta just calls it statistics and machine learning, with no details about what this means. Google Scholar’s no help here either—hits are swamped by research papers mentioning Translate, and a few more recent hits about the neural networks used in various recent Google mobile-oriented services like speech or image recognition.
So… I have no idea. Highly unlikely to predate their internal translator in 2006, anyway, but could be your 2012 date.
Here is a 2007 paper that I found when I was writing the above. I don’t remember how I found it, or why I think it representative, though.
How many components go into “neural nets”?
At the very least, there are networks of artificial neurons. You seem to accept Ilya’s dismissal of the artificial neuron as too simple to credit, but take the networks as the biologically inspired part. I view those components exactly opposite.
Networks of simple components come up everywhere. There were circuits of electrical components a century ago. A parsed computer program is a network of simple components. Many people doing genetic programming (inspired by biology, but not neurology) work with such trees or networks. Selfridge’s Pandemonium (1958) advocated features built of features, but I think it was inspired by introspective psychology, not neuroscience.
Whereas the common artificial neuron seems crazy to me. It doesn’t matter how simple it is, if it is unmotivated. What seems crazy to me is the biologically inspired idea of a discrete output. Why have a threshold or probabilistic firing in the middle of the network? Of course, you want something like that at the very end of a discrimination task, so maybe you’d think of recycling it into the middle, but not me. I have heard it described as a kind of regularization, so maybe people would have come up with it by thinking about regularization. Or maybe it could be replaced with other regularizations. And a lot of methods have been adapted to real outputs, so maybe the discrete outputs didn’t matter.
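A toy sketch of why the hard threshold is awkward in the middle of a gradient-trained network (an illustration of the point above, not a claim about any particular architecture): the step function’s derivative is zero almost everywhere, so backpropagation gets no signal through it, while the sigmoid is a smooth relaxation with a usable gradient:

```python
import numpy as np

def step(z):
    # Biologically inspired discrete firing: derivative is 0 almost
    # everywhere, so gradient-based training gets no signal through it.
    return (z > 0).astype(float)

def sigmoid(z):
    # Smooth relaxation of the step: nonzero derivative everywhere.
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(z):
    s = sigmoid(z)
    return s * (1 - s)

z = np.linspace(-4, 4, 9)
print(step(z))          # hard 0/1 outputs
print(sigmoid_grad(z))  # usable gradients for backprop
```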
So that’s the “neural” part and the “network” part, but there are a lot more algorithms that go into recent work. For example, Boltzmann machines are named as if they come from physics, but supposedly they were invented by a neuroscientist because they can be trained in a local way that is biologically realistic. (Except I think it’s only RBMs that have that property, so the neuroscientist failed in the short term, or the story is complete nonsense.) Markov random fields did come out of physics and maybe they could have led to everything else.
Camping vs Cryonics
Assuming that a cryonicist a) has a limited budget; b) believes that going solo hiking, canoeing, and camping have salutary effects on mental health; and c) believes that camping provides one of the best available ratios of improved long-term mental functioning to dollars spent...
… then what measures could said cryonicist take to minimize the odds of ending up not just dead, but warm-and-dead? And, secondarily, how much would each such measure cost, and how much would it reduce that risk?
Example 1: A PLB (Personal Locator Beacon) costs around $300, and uses satellites to signal search-and-rescue teams to start looking in roughly a one-mile area. Requires someone alive to push the button, and that the PLB can be placed right-side-up. Benefits are increased if, e.g., a pen-type flare launcher can more precisely identify your location to searchers.
Example 2: A backup cell phone can cost $20, and at least one provider offers service for $10 for the SIM chip and $20 per year if no calls are made. Requires limiting trips to areas within range of cell towers.
Finding a friend to go hiking, canoeing, and camping with might be an effective thing to do. Also, there are probably locations that offer wilderness first aid courses in your city. Getting basic safety stuff right is probably lower-effort than taking a wilderness first aid course, though.
Schizoid personality disorder means that the solo part is a large part of what provides the relaxation and decompression.
I’m in south Ontario, so the usual trainers are St. John’s Ambulance. Looking it up, there’s a two-day course in a nearby city next month for $200 that could be worth it.
There are a couple of versions of the “Ten Essentials” that I keep in mind.
I’ve just ordered the course manual—it’s 1/20th the cost of the course, and since I’m trying to educate myself rather than acquire the certification, it seems well worth the cost.
What about analysis of how to lower the risks of the hiking, camping, and canoeing that you want to do? Or have you pushed that aspect as far as it’s likely to go?
I’m open to any suggestions on that front.
For example, my existing kit already includes bear bags to keep tempting food away from the campsite, anti-bear pepper spray, a “bear banger” shot for the flaregun… and I’ve just added those “QuikClot” rapid-coagulation sponges to the first-aid kit.
I was thinking about choosing terrain and temperature and such.
Do you know specifics about the risks of camping?
That’s a fairly open-ended question, and I’m not sure how to answer. One version or another of the “SAS Survival Handbook” has been in my library for a couple of decades, which seems to offer a good overall framework.
It’s open-ended because it’s an area that I don’t know much about. When I thought about how a person could end up dead from camping, the first thing I imagined was twisting an ankle after a stumble on a steep hill, and not being able to get back to civilization. Add more trouble, and I imagine a fall down a steep hill, maybe with a concussion. (Yes, I have issues about falling.) However, I don’t know if those are the biggest risks.
Possibly of interest: Deep Survival: Who Lives, Who Dies, and Why.
I currently have The Unthinkable: Who Survives When Disaster Strikes (http://www.amazon.ca/The-Unthinkable-Survives-Disaster-Strikes/dp/0307352900) in my to-read pile, and that looks like a good companion piece.
Most forms of SOS signaling require you to be alive to push the button (there are some exceptions, e.g. some marine beacons automatically activate if you fall into water), but I found you another rationalization for a smartwatch :-) I haven’t seen an actual app, but it should exist (or be trivially easy to program): monitor your pulse and if it drops to zero, start screaming its head off via email, SMS, FB messages, dial 911, call the Coast Guard, etc. etc.
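A minimal sketch of what such an app’s main loop might look like; read_pulse() and send_alerts() are hypothetical stand-ins for whatever sensor and messaging APIs a real smartwatch exposes, and the grace period is there to absorb brief sensor dropouts:

```python
import random
import time

GRACE_PERIOD = 60  # seconds without a pulse before alerting

def read_pulse():
    # Hypothetical stand-in: a real app would query the watch's
    # heart-rate sensor here.
    return random.choice([0, 62, 64, 66])

def send_alerts(message):
    # Hypothetical stand-in: email, SMS, or a designated emergency
    # contact who can check in before calling out the cavalry.
    print("ALERT:", message)

def monitor():
    no_pulse_since = None
    while True:
        if read_pulse() > 0:
            no_pulse_since = None       # pulse seen: reset the timer
        elif no_pulse_since is None:
            no_pulse_since = time.time()  # start the grace period
        elif time.time() - no_pulse_since > GRACE_PERIOD:
            send_alerts("No pulse detected for over a minute.")
            return
        time.sleep(5)
```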
So now your watchstrap gets snagged on a tree branch and falls off without your noticing—and then it dials 911, calls the coast guard, etc. That could make you pretty unpopular.
One possibility: Designate one or more emergency contacts in case a pulse measurement drops to zero, who can text back to see if you’re alright, if the battery’s died, or whatnot; and who can /then/ decide to call out the cavalry.
I think smartwatches are smart enough to notice when they’re not on your wrist any more.
P.S. Even without smartwatches, I would be greatly surprised if there is no remote-monitoring medical device which you strap onto yourself and which alerts someone if it thinks you’re in trouble. The market for live-alone elderly people is huge.
Or you simply take it off.
I don’t recall yet finding a phone-watch with a reliable pulse sensor. That may be because it’s a feature I wasn’t looking for, but it’s also possible that such a product doesn’t yet exist.
You can probably just use one of the fitness bracelets (Fitbit style) and sync them to your phone. I don’t know how reliable they are, but one of their explicit purposes is recording your heart rate during exercise.
Losing the heart rate connection for a few seconds during exercising isn’t a big deal.
The old tech with chest straps certainly loses the signal from time to time. I’m not sure about the newer tech, but I doubt you can get cheap tech that doesn’t from time to time lose track of the pulse.
Which provider is this?
http://www.speakout7eleven.ca/prepaid-cell-phone-rates … though it looks like I misremembered, and the least-expensive option is $25 instead of $20.
If you mean the cellphone, I got the low price from http://www.dx.com/s/850+1900?category=511&PriceSort=up .
Instrumental rationality is about reaching goals. Are there any methods for finding goals? There is a military term, “target-rich environment”. I think you live in a goal-rich environment if it is risky both ways: if you can both fail hard and win big. If your environment does not look very goal-rich, what are some good ways to “mine” goals out of it? More broadly: fighting boredom / tedium.
IIRC, “target-rich environment” was originally a euphemism for being surrounded by the enemy.
By analogy, a “goal-rich environment” might be one in which you are very critical of everything — no matter what you look at, you can see a way in which it sucks and should be improved — and a “goal-poor environment” is one in which pretty much everything is okay with you.
I don’t know if you are extremely optimistic or I am misunderstanding something, but much, much more common is the case where you feel entirely powerless to change the things you are critical of, because they are set up that way by bigger, more powerful people or some other similar cause, or simply because you don’t think you are the kind of person who can tackle big things. Low self-esteem is the most common cause of feeling goal-poor. BTW, the analogy stands: the most common response to being surrounded is not shooting in 360 degrees but surrendering.
I was inverting the connotation of the expression — in the same way that “target-rich environment” has been inverted from being a euphemism for a bad situation (being surrounded by people who want to kill you) into an expression for a good situation (having lots of opportunities to choose from).
If your environment does not look very goal-rich, you have an opinion on how it is different from a goal-rich environment, i.e. you have a model of a goal-rich environment. Find a decent real-world match for that environment, and move there.
Or find other people with risky-both-ways goals, and work for them if you like either the goals or the people, preferably both.
But I’m not sure “risky both ways” is a good metric to look for. A life of crime fits this criterion, while violating a couple of other criteria that I assume you hold implicitly.
Does the harm of smoking scale linearly? I went from about 15 cigarettes a day to about 3, without any effort; at some point it just made me feel sick and nauseous if I wasn’t having one with/after coffee. Consider the problem 80% solved? Stupid question, but why do people talk about the harm of smoking in general, instead of weighting it by dose? Because most people cannot stop before reaching a pack a day? I too was on my way to that, if not for the new nausea effect. One study suggests linearity for one effect, at least for men, but probably nobody has a firm idea of whether all of the effects are linear overall. But why shouldn’t they be? At this point the only reason not to stop completely is that coffee generates a strong smoke craving.
See this Hanson post. Quote “MRFIT randomized multifactor trial of 8000 smokers. After 6 years one half quit 49% (vs. 29%), and after 16 years had an insignificant 6% lower mortality (11% less heart disease, and −15% less lung cancer).”
Thank you
The search term “pack years” may be helpful to you. Dose is definitely considered important.
Incidence of some cancers is described fairly well by a multistage model of carcinogenesis, which posits that a cell has to go through multiple pre-cancerous stages before it becomes a cancer. Suppose the model is true. If smoking accelerates the transition at multiple steps on the path to carcinogenesis, then smoking’s effect on cumulative cancer incidence can be super-linear.
Not my clever idea, sadly, I got it from another paper.
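A toy numerical illustration of the super-linearity, assuming incidence is roughly proportional to the product of the transition rates and that smoking multiplies m of the k rates by a dose factor (not a calibrated model, just arithmetic):

```python
# Toy multistage model: cancer requires k sequential transitions;
# smoking multiplies the rate of m of those transitions by a dose
# factor d. If incidence is roughly proportional to the product of
# the rates, it scales like d**m -- super-linear in dose when m > 1.

def relative_incidence(d, k=5, m=2):
    rates = [d] * m + [1.0] * (k - m)  # baseline rates normalized to 1
    product = 1.0
    for r in rates:
        product *= r
    return product

for dose in [1, 2, 3]:  # dose relative to some reference level
    print(dose, relative_incidence(dose))  # 1 -> 1, 2 -> 4, 3 -> 9
```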
Low-hanging fruit: buying military surplus gear? I assume it goes through extensive durability and quality testing before being approved, so piggybacking on top of that and buying e.g. surplus officer shoes to wear in the office with a suit or half-suit may be a good idea? Did anyone try anything like this? Military gear is supposed to be durable, and in the case of officers’ dress uniforms, even elegant-looking. I could never find a coat that would look good with a suit, be suitable for −10°C, and not cost an arm and a leg, but perhaps I should be looking at what e.g. Norwegian officers wear? Is this a good idea?
Not a good idea. Military gear is typically chosen because it is cheap and the contractor can comply with government procurement rules. Military officers who are allowed to bend the rules (e.g. special forces) are known to procure their own equipment and accessory clothing (e.g. boots, vests, sunglasses) because the commercial stuff is better quality. Indeed, the commercial stuff is often made or endorsed by retired special forces who built a better mousetrap because the existing ones suck so badly.
http://lesswrong.com/lw/lxh/open_thread_mar_23_apr_05_2015/c6as
Is that a useful way of finding good products?
Endorsement? I suppose it could be. In my case I actually know some ex-Marines and Navy SEALs, so I always just asked them what brands they recommend. But looking back, endorsement by people with actual combat experience seems to correlate well with quality.
One of Murphy’s Laws of Combat Operations: “Remember, your weapon was made by the lowest bidder.”
I find it surprising. Perhaps we can say military gear is better than cheap knock-offs but worse than brand-name designer stuff.
Since I am used to knock-offs, they look good to me. I used to use a Bundeswehr sleeping bag, the kind with arms, to sleep on friends’ couches after parties, and it worked. I got some US Marine t-shirts as well; they were definitely made from a more skin-friendly material than the Fruit of the Loom type knock-offs. Probably undies from the same material would be better than the ones from the China-Mart. Once I used a TESCO Value tent for camping; it leaked, so I put a kind of truck tarp on it, which locked in the steam, creating a cold sauna, and I threw the whole thing into a bin afterward. I guess compared to these kinds of crap, military grade must be better. Once I bought a Russian bayonet, the kind that doubles as a wire cutter; that was a bad idea, as it not only lacked an edge but its casing itself was an edge-killer. However, some friends endorsed Bundeswehr bayonets.
That’s, um, not a test of a sleeping bag X-) Random coats, blankets, and drapes would work as well.
There is no particular magic in textiles. The basic choice is between cotton and a variety of synthetics, each of which has its own advantages.
Generally speaking, “camping” military equipment is better than pure trash (like your Tesco Value tent) because, well, pretty much everything is better than pure trash. But it’s worse than actual, proper camping equipment. Probably cheaper, too, so you pick your price-quality trade-off, as always.
I have a Bundeswehr jacket. I got it for €6, it works well enough, and it hasn’t started falling apart yet even though I’ve had it for over a year—which is about as long as I expect cheap clothes to last.
I’ve heard that actual military gear is usually better than fake military gear. But it may be worse than commercial gear.
In the end, it’s just another type of product. Read the reviews.
If you want good dress shoes at a low price, buy some gently used Allen Edmonds or Loake 1880 off eBay. They’ll run you about $100. Just make sure you take them to a cobbler when the heel starts to wear, rather than throwing them out, to get your money’s worth.
My tentative advice is to allow for the possibility that military surplus gear is good quality, but check reviews of specific items, both to find out about whether those items are worth buying and to get a feel for whether military surplus (which country? which branch of the military? which sort of items?) are good value.
We may be talking about medium-hanging fruit here.
The Feds go subpoena-fishing on Reddit: http://www.wired.com/2015/03/dhs-reddit-dark-web-drug-forum/
Notable parts (emphasis mine):
“Intentional Weight Loss and All-Cause Mortality: A Meta-Analysis of Randomized Clinical Trials”:
This user is spamming the forum: http://lesswrong.com/lw/lxe/personal_notes_on_productivity_a_categorization/c6i5?context=3
I don’t know what standard protocol is for things like this. So, I thought I’d try to let somebody know.
Thanks, the user is banned now.
I guess the standard protocol would be sending a private message to moderator.
This seems relevant to LW.
Title: Sidetracked by trolleys: Why sacrificial moral dilemmas tell us little (or nothing) about utilitarian judgment
Abstract:
In the Sequences I find Correspondence Bias scary. It is the usual Fundamental Attribution Error / Just World Hypothesis issue, and what I find scary is that Eliezer casually assumes we know the motives of our own actions better than the motives of other people’s actions. That we judge ourselves more accurately than we judge others. And the scary part is that it is such an easy assumption.
Yet IMHO it is not so. Nemo iudex in causa sua (“no one should be a judge in his own cause”). We tend to judge ourselves way too leniently; actually, how we judge others is more accurate than how we judge ourselves, because the bias that generates excuses for our own behavior is stronger than the bias that generates blame for the behavior of others.
The scary thing here is how easily a self-critical Rationalist can give in to the mood of the era: the mood of the current era is certainly about “not beating yourself up”, i.e. letting yourself get away with excuses far too easily.
150 years ago the nemo iudex in causa sua principle was more expected, and people thought roughly the way I explained it: you judge others more accurately than yourself; you should be judging others, not yourself; and you should let others judge you, because this way you will all be stricter, and that is a good thing because, more or less, we all suck.
Of course, the article is more about predicting behavior rather than assigning blame: the idea is that unusual circumstances predict unusual behavior better than unusual personalities. While my point is more about moral blame.
What follows from all this? If we are both right and people with usual personalities do blameworthy things in unusual circumstances, it simply means usual personalities are not good enough. We cannot simply accept that if we are doing whatever people with usual personalities do in that situation, then that is good enough. There is probably a strong bias working towards “if I am usual then I am okay”, because the evolutionary origin of morality is probably being good enough not to be thrown out of a tribe or hunting band as an uncooperative member, so the evolved algorithm says being usual is good enough.
Yet, I think the only solution here is accepting usual is still bad. Reason 143 why I am a religion-friendly atheist: we need a bit of a we-are-all-sinners attitude. People with usual personalities will do morally blameworthy things in unusual situations and the solution is not to excuse them—this is what I am railing against here, the excusatory tone of the article—but to find the usual personality blameworthy.
Look at people in prison. Discount the obvious bad apples, about 25%; the rest are basically you, except in worse situations, with more temptation, or with willpower broken down by a series of bad situations. The same heated argument that made me only throw a glass against the wall may have made someone-like-me, whose willpower had been broken down by circumstances, stick a knife into someone. Either all those people are basically innocent and not deserving of punishment (meaning almost any inhumane act can be excused by pleading unusually tough circumstances), or I am, in a sense, a “sinner” who has simply had the good luck so far of not being tempted enough.
This is fairly ironic. If unusual circumstances predict unusual behavior better than unusual personalities do, and if you think the general system of blame and punishment today is still roughly correct, or want a better one, then you must accept the position of something like an atheist Augustine, claiming we are all “sinners”.
An interesting analysis of, basically, how sure you are that you own your bitcoins. Summary:
I seem to have gained a downvote on at least the first two pages of my comment history, dropping from 83% positive to 69% positive in the space of a day. I remember mass-downvoting was a big issue here a while back, did we find a way to fix it or did we just ban one or two prolific mass-downvoters?
I’m not angry and seeking revenge and/or compensation, after all I’m still positive for this month, but I’m curious what if anything to do about it. And a bit annoyed that my curiosity about who I’ve offended and how isn’t going to be satisfied because they decided on a downvote campaign instead of replying to whichever comment set them off.
I think two prolific mass-downvoters were banned, and they were almost certainly the same person. (Eugine_Nier, and then Azathoth123 who was generally reckoned to be Eugine_Nier redivivus.) There have been other reports of mass-downvoting since then, though not on the same lavish scale; I think (but this may just be confirmation bias) that the ones I’ve heard of have, like the earlier ones, shown some sign of being triggered by the victims’ expressing non-neoreactionary views on gender, race, etc. I wouldn’t be terribly surprised to learn that the individual behind the Eugine_Nier and Azathoth123 accounts is still around and up to the same tricks as before, just a little less vigorously.
Having said all of which, I don’t see anything in your recent comment history that seems like it would annoy a Eugine-like mass-downvoter particularly. Nor in fact anything that seems like it would annoy anyone very much. So maybe there’s someone around who just thinks it’s fun to hit people with lots of downvotes for no reason at all.
More confirmation: I’m also a victim, and it started during this thread.
I’ve been tempted to donate some small amount to a progressive cause just to annoy whoever it is, but that would seem to incentivise false-flag attacks.
I’ve lost around 60 points in the past couple months despite hardly commenting at all (haven’t spent much time here recently). I did get a smaller swarm of downvotes in September for this comment, in which I was grumpy at Azathoth123 on the subject of gender.
One more has been banned, and it looks like a new one has turned up. I’m waiting for tech support to give me the name. I’ll pass your complaint on to them.
Can you, in interests of transparency, let the community know who the new banned individual is?
I can, but I’m not sure whether it would be a good idea.
Would people with experience of moderating care to weigh in on this?
Without focusing on the first problem, is there a path to get better at naming things?
Do you have experiences to share where you think you improved on the skill?
Exercises to recommend?
Books and articles to recommend?
I have usually seen that quotation in the modified form: “only two hard things: cache invalidation, naming things, and off-by-one errors”. (It appears that this modification was introduced by someone called Leon Bambrick.)
I like the modified version because (1) it’s funny and (2) off-by-one errors are indeed a common source of trouble (though, I think, in a rather different way from cache invalidation and naming things). I do wish Karlton had said “software development” rather than “computer science”, though.
But that joke distracts from the original joke that caching is subsumed by “naming things.”
At least one of us is confused. It never occurred to me that the original comment was intended as a joke (except in so far as it’s a deliberate drastic oversimplification), and I don’t think I understand what you mean about caching being subsumed by naming (especially as the alleged hard problem is not caching but cache invalidation, which seems to me to have very little to do with naming).
I’m probably missing something here; could you explain your interpretation of the original comment a bit more? (With of course the understanding that explaining jokes tends to ruin them.)
I don’t agree with Douglas_Knight’s claim about the intent of the quote, but a cache is a kind of (application of a) key-value data structure. Keys are names. What information is in the names affects how long the cache entries remain correct and useful.
(Correct: the value is still the right answer for the key. Useful: the entry will not be unused in the future, i.e. is not garbage in the sense of garbage-collection.)
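A toy illustration of the point: what goes into the key (the “name”) trades an invalidation problem for a garbage-collection problem. The template-rendering scenario here is hypothetical:

```python
# Toy illustration: what information goes into the cache key (the "name")
# determines when entries stop being correct or useful.

cache = {}

def expensive_render(text):
    return text.upper()  # stand-in for real work

def render(template_id, template_text):
    # Keying on template_id alone would return stale output after the
    # template changes; folding a hash of the content into the key makes
    # stale entries unreachable (solving correctness) but lets dead
    # entries pile up (turning invalidation into garbage collection).
    key = (template_id, hash(template_text))
    if key not in cache:
        cache[key] = expensive_render(template_text)
    return cache[key]
```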
I agree that a cache can be thought of as involving names, and it’s a good point (which I hadn’t considered in this context) that you sometimes have some scope to choose how much information goes into the keys, and hence to make different tradeoffs between cache size, how long things are valid, etc. Even so, it seems pretty strange to think of that as being about naming.
Well, as iceman mentioned on a different subthread, a content-addressable store (key = hash of value) is fairly clearly a sort of naming scheme. But the thing about the names in a content-addressable store is that unlike meaningful names, they say nothing about why this value is worth naming; only that someone has bothered to compute it in the past. Therefore a content-addressable store either grows without bound, or has a policy for deleting entries. In that way, it is like a cache.
For example, Git (the version control system) uses a content-addressable store, and has a policy that objects are kept only if they are referenced (transitively through other objects) by the human-managed arbitrary mutable namespace of “refs” (HEAD, branches, tags, reflog).
Tahoe-LAFS, a distributed filesystem which is partially content-addressable but in any case uses high-entropy names, requires that clients periodically “renew the lease” on files they are interested in keeping, which they do by recursive traversal from whatever roots the user chooses.
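A toy content-addressable store with git-style retention, to make the “kept only if referenced” policy concrete (the SHA-1 choice and the object encoding are illustrative assumptions, not Git’s actual formats):

```python
import hashlib

# Toy content-addressable store: objects are named by the hash of their
# content, and an object survives collection only if some mutable ref
# reaches it, directly or through other objects.
store = {}   # hash -> (payload, list of child hashes)
refs = {}    # human-managed mutable names -> hashes

def put(payload, children=()):
    key = hashlib.sha1(repr((payload, tuple(children))).encode()).hexdigest()
    store[key] = (payload, list(children))
    return key

def collect():
    live, stack = set(), list(refs.values())
    while stack:
        h = stack.pop()
        if h not in live and h in store:
            live.add(h)
            stack.extend(store[h][1])
    for h in list(store):
        if h not in live:
            del store[h]

blob = put("file contents")
tree = put("a directory", [blob])
refs["HEAD"] = tree
collect()           # blob and tree survive via HEAD
del refs["HEAD"]
collect()           # now everything is unreachable and is deleted
print(store)        # {}
```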
Why do you believe that the problem of naming doesn’t fall into computer science? Because people in that field find the question too low-status to work on?
Nothing to do with status (did I actually say something that suggested a status link?), and my claim isn’t that computer science doesn’t have a problem with naming things (everything has a problem with naming things) but that when Karlton said “computer science” he probably meant “software development”.
[EDITED to remove a remark that was maybe unproductively cynical.]
The question isn’t whether computer science has a problem with naming things but whether naming information structures is a computer science problem.
It’s not a problem of algorithms, but it is a problem of how to relate to information. Given how central names are to human reasoning and human intelligence, caring about names seems relevant to building artificial intelligence.
When I read your post, my initial thought was of Kernighan and Pike’s The Practice of Programming. Fortunately, I had to spend some time looking it up because I’d forgotten the name of the book; when I did, I was somewhat disappointed.
The first chapter is on programming style, but very little of it is about naming things, as is relevant to your question. About half of that chapter is inaccurate or useless if you’re using a language other than C, which you probably are.
Nevertheless, if you have the opportunity to read that 28-page chapter, I recommend it.
The end of that chapter makes the following reading recommendations related to programming style:
Kernighan, Plauger The Elements of Programming Style
Maguire Writing Solid Code
McConnell Code Complete
van der Linden Expert C Programming: Deep C Secrets
...and Strunk, White The Elements of Style
Code Complete has a section on this. But we don’t have a precise understanding of what a “good name” is, for the same reason that we don’t have a precise understanding of what a “good song” is: the goodness of a name is measured by its effect on its reader.
So I think the high-level principle, if you want to do a good job naming things in your program, is to model your intended reader as precisely as you can. What do they know about the problem domain? What programming conventions are they familiar with? Why are they reading your program—what matters to them? These concerns will inform your formatting and commenting style as well.
When you draw these distinctions you will exclude some people. That’s normal. You shouldn’t feel bad about that, any more than Thomas Mann felt bad that Chinese speakers had to learn German before they could read Der Zauberberg. If your work is influential enough, someone will translate or annotate it. And unlike a novel, most programs are read only by a small circle anyway.
If you want concrete advice instead of philosophy, this c2 page includes some useful tips.
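A hypothetical before/after, to make the “model your intended reader” point concrete:

```python
# Before: names that assume the reader shares the author's head.
def proc(d, t):
    return [x for x in d if x[2] > t]

# After: names chosen for a reader who knows the problem domain
# (order processing, say) but not this particular script.
def orders_above_threshold(orders, amount_threshold):
    return [order for order in orders if order[2] > amount_threshold]
```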
I’m not sure whether I buy that argument. It would be quite possible to go out and study naming in the real world and study problems that arise and what goes well.
Yes, I agree. That’s why I like the analogy to composition: most of the songs you might write, if you were sampling at random from song-space, are terrible. So we don’t sample randomly: our search through song-space is guided by our own reactions and a great body of accumulated theory and lore. But despite that, the consensus on which songs are the best, and on how to write them, is very loose.
(Actually it’s worse, I think composition is somewhat anti-inductive, but that’s outside the scope of this thread)
My experience is that naming is similar. There are some concrete tricks you can learn—do read the C2 wiki if you don’t already—and there’s a little bit of theory, some of which I tried to share insofar as I understand it. But naming is communication, communication requires empathy, and empathy is a two-place word: you can’t have empathy in the abstract, you can only have empathy for someone.
It might help to see a concrete example of this tension. I don’t endorse everything in this essay. But it’s a long-form example of a man grappling with the problem I’ve tried to describe.
To speak to the second hard thing, naming things: I’m a big fan of content-addressable everything. Addressing all content by hash_function() has major advantages. This may require another naming layer to give human-recognizable names to hashes, but I think this still goes a long way towards making things better.
You might find Joe Armstrong’s The Mess We’re In interesting, and provides some simple strawman algorithms for deduplication, though they probably aren’t sophisticated enough to run in practice.
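For flavor, here is the generic hash-based deduplication strawman (not Armstrong’s actual algorithms, just the basic idea): identical content hashes to the same name, so a second copy costs nothing but a reference.

```python
import hashlib

# Generic hash-based deduplication strawman: identical content gets an
# identical name, so it is stored exactly once.
store = {}

def put(content: bytes) -> str:
    name = hashlib.sha256(content).hexdigest()
    store.setdefault(name, content)  # no-op if already present
    return name

a = put(b"the same file")
b = put(b"the same file")
assert a == b and len(store) == 1
```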
(My roommate walked in while I was watching that lecture when I had headphones on, and just saw the final conclusion slide:
We’ve made a mess
We need to reverse entropy
Quantum mechanics sets limits to the ultimate speed of computation
We need Math
Abolish names and places
Build the condenser
Make low-power computers—no net environmental damage
And just did that smile and nod thing. The above makes it sound like Armstrong is a crank, but it all makes sense in context, and I’ve deliberately copied just this last slide without any other context to try to get you to watch it. If you like theoretical computer science, I highly recommend watching the lecture.)
It also requires (different) attention to versioning. That is, if you have arbitrary names, you can change the referent of the name to a new version, but you can’t do that with a hash. You can’t use just-a-hash in any case where you might want to upgrade/substitute the part but not the whole.
Conversely, er, contrapositively, if you need referents to not change ever, hashes are great.
I know a lot of you probably aren’t all that interested in mainstream television, but I’ve noticed something in the 8th series of Doctor Who which might be somewhat relevant here. It seems the new Twelfth Doctor has a sort of Shut Up and Multiply utilitarian attitude. There have been several instances in the 8th series where he is faced with something like the fat man variation of the trolley problem and actually pushes the metaphorical fat man, even in situations that are less clear cut than the original problem. This might represent a step in the right direction for mainstream cultural acceptance of, or at least just exposure to, quantitative moral reasoning instead of emotional reasoning. (Granted, the Doctor is far from a rational protagonist in many ways, but still, every little bit helps.)
I’d argue the opposite. The writer is so opposed to the idea of moral reasoning that he thinks that no normal human being would ever use it. However, he’s trying to make the Doctor look alien. Something that nobody would ever do, but has a plausible-sounding justification, is ideal to show that the Doctor is an alien.
Also, this explains why the show is so inconsistent on such things. The right thing to do when the moon is a giant egg and hatching has a chance of destroying the Earth is to kill it. It’s one life against (billions * probability of the world being destroyed), which is at least one life against millions. The Doctor decided that what we should do (after giving a fake “free” choice to Clara) is to not kill the fat man^H^H^Hmoon, and instead take the risk of everyone dying. When you throw in things to make the Doctor look alien, you can just as easily throw in a too-sentimental act as a too-utilitarian act.
In fact, the Doctor often acts as if he’s in a TV show and is aware that million to one chances work nine times out of ten. You often see the Doctor say “I’m not going to doom innocents to save a greater number” and something saves everyone anyway, but you never see the Doctor say “I’m not going to doom innocents to save a greater number” and discover that since he didn’t doom the innocents, the greater number died.
(The Doctor does often accept and even act callous about inevitable death, but that’s different from the case where he or a protagonist personally has to cause the death.)
I find that undermines a lot of enjoyment for me. A Hard Choice is presented, the Doctor does something that seems deontologically virtuous but consequentially absurd, and then deus ex machina the consequences of the Hard Choice are wiped away.
Perhaps he knows he is living in a just universe where moral realism proves deontology correct, and ignoring consequentialism leads to the best consequences. Depending on the writer.
Well, if you want to write a fictional scenario in which deontology proves better than consequentialism, you kinda have to make the consequences of the deontological decision better than those of the consequentialist one. I agree that it’s ironic, though, to be justifying deontology on consequentialist grounds (it saved more lives in the end, ha!).
Perhaps he just manipulates Clara into being a person who cares a lot about the living beings she happens to interact with, but who can still make uncomfortable choices. This would be useful for him, since she is supposed to save his lives over and over somewhere in time. He could easily “cheat” and look at what consequences a given choice would have, since he has a time machine and a lot of spare time.
Or his basic values are alien to some of us.
His basic values are intended to be alien to us, but what they actually are is alien to the writers. Of course the values of lots of actual human beings are alien to the writers too.
That may be true! But I think the writers have to agree on some basic values of his.
I don’t think they do. As Gondolinian pointed out above, the Doctor has been known to kill the fat man on the trolley (sort of—I can think of situations where he lets them go to their doom, but not where he personally pulls the trigger). But the Doctor has also been known to refuse to kill the fat man on the trolley (as in Kill the Moon). I don’t think the writers agree on anything more than “he does something weird or extreme that people like myself wouldn’t do”, and they’re not consistent in which weird or extreme things those are.
I don’t think so, but yeah, who knows? That’s the beauty of this show, it is weird. Btw, when I said the writers, what I really meant was the people who are currently working on the script and the ones who are involved in cross-season plots.
I was reading gwern’s essay on spaced repetition, and I read this note:
I felt very unsure about what a ‘forced-choice paradigm’ was, but I guessed the correct definition.
Calling things “an art, not a science” has always been a pet peeve of mine. And I’ve heard people say things like “there’s no best way to do it”. In particular, I’m taking a Responsive CSS course on Udacity and the guy said these things (if you listen closely, you can hear me cringe).
And then there’s the idea that art is inherently intuitive, whereas science isn’t. I want to focus on the “art is inherently intuitive and not about breaking things down into components like science” part. My thought is that the people who say this are confusing their map for the territory. They may not know how to deduce what the perfect painting would look like, but that doesn’t mean it’s not possible.
I know that there are different versions of these beliefs, and that I may be misunderstanding them. If so, please correct me. Anyway, what do you guys think?
To me it’s shorthand for “It took years of practice to get good and I can’t explain the process I use. The problem appears too complex for a scientific approach to be worth the effort. I certainly can’t think of any polynomial-time algorithm and my experience has led me to doubt that one exists.”
It’s a caution for beginners. The expert is basically saying that getting good at it will take a lot of work and finding a good systematic approach is likely to be a dead end.
One can certainly brute-force the issue by creating every possible painting and comparing them. Whether this can be done efficiently is another matter altogether.
One charitable interpretation is “it’s something you learn by doing, not something you learn by reading”.
“Art” has a bit of a double meaning, there’s the “something that’s pretty/pleasing/aesthetic/original/creative”, but there’s also the “craft” meaning, as in “the art of XXX”.
Two reactions to this:
1) If someone says something can’t be broken into component parts, a more charitable reading is that they think that trying to do so is a waste of time and less likely to bring progress than just a lot of practice. Even if it’s possible in theory, that doesn’t mean it’s actually a good idea, so warning people against it can be totally reasonable, and isn’t “confusing their map for the territory”.
2) HOWEVER, in the case of art, most forms of art I can think of (drawing, painting, storytelling, animation, etc.) most definitely CAN be broken into component pieces, and often those component pieces can be broken into component pieces too, and so on. Just check out the right section in any library.
You can’t learn to draw by reading a book, but a good book on drawing can tell you what individual skills you should practice, and how to do so.
Art is about relying on intuitive pattern matching and not following strict rules.
Computers can follow strict rules that you program into them but are bad at creative pattern matching.
A correct breaking-down may very well result in 10,000 rules with complex interactions between them. The human brain has a lot more than 10,000 neurons that are active during a particular decision process.
In LW jargon I’d phrase it as: “It’s a system one thing, not a system two thing.” I think this is what most people mean when they say “it’s an art, not a science.” When something is considered an art and not a science, it’s something that can’t be done well by “just” following a set of instructions. Keep in mind that in the popular view (when it’s positive), science is just strictly adhering to the scientific method (form hypothesis --> test hypothesis --> adjust hypothesis), and that this is something anyone can do. The difficulty of each individual step is ignored, or seen as something that can be learned without much trouble. The pop-culture view of art is “mysterious process.”
And you’re probably right that you can distill (some) parts of art into “rules” about good art. I recently heard a radio interview with a professional photographer and he could explain why each photo was good or bad by adhering to a set of simple principles and he could explain those principles. I do think, however, that part of what makes him a great photographer is that those simple principles have become part of his system one thinking.
I think that’s a great way to phrase it!
I’m a bit skeptical that this is true. I sense that the majority of people don’t actually believe that art is reducible.
Yes, you might very well be right. What I meant to say is that I think “system one, not system two” is the general sort of idea that people want to convey, not that it was the exact same thing.
“an art, not a science” =
we cannot explain it explicitly well, yet (charitable interpretation)
we do not bother to measure our outcomes (uncharitable interpretation)
Consider the following two statements:
1) Things fall downwards, and anyone who disagrees is objectively wrong.
2) Mozart is better than Iron Maiden, and anyone who disagrees is objectively wrong.
I think there are problems with the second statement, and not just that people will stop inviting you to parties. You’re trying to impose your opinions on other people.
There are approaches you could take to determine which art is objectively better. For designing a website, the answer is which gets more hits, or sells more products. But as far as art goes… you could take a democratic vote over which artist is better. You could, in a thought experiment, extend the vote to a sum over the whole of mindspace. You could weight by IQ.
But I don’t want my tastes in art to be determined by a democratic vote of what everyone else thinks. At the end of the day, the people who say “there’s no best way to do it” are not able to work out what the most popular website ever would look like, and saying that the currently most popular site is objectively best would stifle innovation.
They’re not trying to say philosophically universally correct points. They’re not theologians trying to determine whether God could create the perfect painting. They’re trying to teach humans.
I think the most direct way to do it would be to measure people’s brain activity. I’d be skeptical of self reporting. I think people would be swayed a lot by what they think they should like. I’m pretty sure I’ve read some research on this but can’t recall what it was :/
It wouldn’t be. In this case “best way” would be defined as something along the lines of maximizing total happiness.
Measuring brain activity is certainly possible but brings all sorts of challenges, rendering this kind of approach far less direct than it may seem on the surface.
However, your thoughts about expectations are very much supported by an fMRI study (Kirk et al., Neuroimage 2009, direct link to PDF) that manipulated participants’ expectations about the artistic images they were shown.
Paintings were randomly assigned to the two expectation conditions for each participant, and ratings of aesthetic quality were obtained for each painting as well. Instructions modulated brain activity: the same artistic images evoked different responses in prefrontal/orbitofrontal cortex depending on participants’ expectations, and the same images received higher aesthetic ratings if the participant believed they came from a prestigious museum.
The authors also correlated brain activity with the aesthetic ratings given to each image (collapsed over context). No regions were reliably modulated by aesthetic ratings, which should have been the case if there were some regions responsive to “artistic quality” detached from context.
Of course, plenty of potential problems with this study, perhaps the most salient to me being the use of entirely abstract artistic images and artistically naive participants. But it does highlight some of the challenges related to revealing information about aesthetics from brain activity.
Still, rather than having one universal best art, the best approach is to have different pieces of art for different people. There may be a single best piece of art for a given person, but I imagine they would still want variety.
I suppose you could try to define away all subjectivity by appealing to utilitarianism, and maybe a post-singularity optimiser that can simulate all brains to gauge their reactions could create a perfect piece of art by those standards. It’s still useless for discussing matters in the present.
I’m interested in optimizing my diet for cognition and general health. Most discussions of diet on LessWrong tend to focus on obesity, and are not of interest to me. Are there any good resources that summarize the evidence with the focus on cognition and general health? I can find some specific things like this SSC post and this study. These are not complete, however, and I don’t want to do a big search on my own. I find diet to be a minefield, in that I’ve seen loads of evidence pointing in every direction and it’s hard for me to know what is good and what is not.
A lot of information on the topic is controversial. What isn’t seems to be:
1) Eat lots of vegetables. 2) Don’t eat in a way that produces sudden spikes in the blood glucose level. That means complex carbohydrates are usually better than simple sugars.
Being mindful of your own state and how it’s affected by food yields returns. Personally, I quit certain foods that I used to eat regularly after I paid attention to how I felt after eating them.
It seems like diet is a good case where it might be better to satisfice than optimize: it’s clearer that some things are bad than that other things are optimal.
Hi, anyone in Ithaca? I’d like to start some sort of Effective Altruism meet-up/club. The lovely people at the EA forum suggested asking here.
Missed me by a year, just graduated from uni last year and am now in Texas. Best of luck though! :)
Has anyone heard of this massive UIA (Union of International Associations) online databases/wiki of human values, world problems, and proposed solutions? I really wish it had random access so I can freely sample it. My first thought when I stumbled across this was (paraphrasing) “An FAI would surely have a lot of subgoals to juggle.”
I’ve spent the last few months following a new diet/exercise plan. I notice that my past failures came down to using food as a way to regulate my mood and deal with stress. Exercise mollifies this to a great extent; however, I find that I regularly experience temporary spurts of depression lasting a few hours, and in those times I find it difficult to maintain discipline. Is there a good way to guard against this sort of thing?
Might want to type your diet into cron-o-meter and see if you’ve screwed up any nutrients. Have you had similar feelings before? If so, you may want to see a doctor. If this has not happened to you before, it sounds dangerous to me, and I would likely pull a white flag on the diet.
How is your calorie balance, and how low in carbs is your diet?
Very tentatively offered—maybe you actually need to use food to regulate some of your moods. Possibly you could use a few more carbs.
One thing that might help, from my experience, is to remove from your surroundings any food that could tempt you. I myself keep only fruit, milk, and cereal in my kitchen and basically nothing else. While I could easily go to the supermarket or order food, the fact that I would need to take some additional action is enough for me to avoid doing so. You can use laziness to your advantage.
The word depression, in its medical meaning, doesn’t describe a state that lasts hours. Maybe you mean an emotion like sadness?
I don’t think I’d class it under sadness. Despair might be the right word. Once it hits, I find it very hard to function and when it passes I feel grateful I don’t keep loaded weapons in the house.
How do people who want to live forever (or something like that) renormalize their everyday approach to life and relationships? I mean, children and parents drift apart with each passing day (normally), and if you imagine living for at least a hundred years, how do you keep yourself interested in old relationships? It seems this would take a lot more effort.
You imply that you need to. Why do you think so?
Mostly, out of habit. I have a hard time imagining a world where commitment ‘until death do us part’ (not necessarily marriage) is no longer a staple of my existence. It was just a duty which we were taught.
Years ago when I still intended to go to college, I had to find an unconventional route because I had no money or scholarships, etc. I found out that there are three regionally accredited, distance education colleges in the U.S. that have unusually low or no credit residency requirements (usually expressed as a number of credit hours or percentage of total credit hours that must be completed at the institution conferring the degree). This is extremely useful because you can either throw together disparate credits from other accredited institutions or mostly take a bunch of credit exams at a much lower cost per credit hour and at a much quicker (or not) pace than in traditional education. The institutions are: Charter Oak State College, Excelsior College, and Thomas Edison State College.
I thought this would be pretty relevant since the average LW user is fairly young and almost half are non-U.S. citizens (I know at the very least that Charter Oak State College will admit non-U.S. citizens with some additional requirements; that means you can get a U.S. degree without a travel visa). It’s also not always clear whether or not getting a degree is worth it and financial cost could tip the balance. My estimate back in 2013 for a B.S. in Psychology was ~US$12,000 without financial aid and including exam fees.
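For a sense of how that figure comes together, here is a minimal sketch of the arithmetic; every per-credit price below is a made-up placeholder for illustration, not an actual fee:

```python
# Rough cost comparison for a ~120-credit-hour B.S.
# All per-credit prices are illustrative assumptions, not real fees.

TOTAL_CREDITS = 120

exam_cost_per_credit = 100      # assumed cost per credit via credit-by-exam
tuition_cost_per_credit = 400   # assumed cost per credit via traditional courses

exam_route = TOTAL_CREDITS * exam_cost_per_credit            # 12,000
traditional_route = TOTAL_CREDITS * tuition_cost_per_credit  # 48,000

print(f"Credit-by-exam route:   ~${exam_route:,}")
print(f"Traditional coursework: ~${traditional_route:,}")
```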
I’m reading Rationality and I’m paraphrasing/summarizing the posts so that I understand and internalize them better. I got to Focus Your Uncertainty, and it felt a bit more opaque than the others, what with all of the sarcasm and italics. I compared my summary with the wiki summary, but I felt like the wiki summary was sort of like a dictionary definition that uses its own word. I’d appreciate it if someone could give me feedback on my summary:
(I don’t know anything about chemistry in high schools outside Ukraine, so maybe this is redundant): it often seemed to me, in school and after (and still does, reading up on mass-spectrometry analysis), that many of the problems we had to solve could easily have been moved to the arithmetic curriculum in primary school. We studied inorganic chemistry for 2 years (I think), then 2 years of organic chemistry. Many times we had to calculate the mass of a given compound, or the concentrations of solutions, or other things like that, which was necessary, of course, but still seemed misplaced and too simple. And yet we made mistakes and forgot numbers and so on. I mean, can’t people studying multiplication already be given problems like ‘A carbon atom weighs 12 units, an oxygen atom weighs 16 units. How much does a molecule with 1 C and 2 O weigh?’ instead of only stories about boys buying apples and cakes? By the time they get to the explanation of why the molecule consists of exactly such a combination of elements, they will have some idea of how heavy it is. And it would free up time for actual chemistry. I remember being very surprised to learn that there are classes of compounds built on phosphorus-nitrogen bonds (the problem of what valence is) and alloys where ions of two metals form separate grids (I used to think that atoms’ radii would prohibit that).
Mainly I object to the school-induced view of atoms and groups of atoms as curried functions (SO4, depending on context, means ‘add 2’, ‘minus 2’, …), mostly because in MS analysis these functions aren’t reliable any more :))
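The kind of problem I have in mind is literally a one-liner. A minimal sketch (atomic masses rounded to whole units; the formula representation is just one convenient choice):

```python
# Molecular mass from rounded atomic masses -- the sort of arithmetic
# that could slot into a primary-school multiplication curriculum.
atomic_mass = {"H": 1, "C": 12, "N": 14, "O": 16}

def molecular_mass(formula):
    """formula maps element symbol -> atom count, e.g. CO2 is {"C": 1, "O": 2}."""
    return sum(atomic_mass[element] * count for element, count in formula.items())

print(molecular_mass({"C": 1, "O": 2}))  # CO2 -> 44
print(molecular_mass({"H": 2, "O": 1}))  # H2O -> 18
```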
The problem isn’t the arithmetic though, the problem is connecting the arithmetic to the type of abstraction in question, which is surprisingly difficult.
But is that the right thing to be surprisingly difficult in chemistry class? I mean, we learned to disregard the nature of the ions floating in a solution and care only about whether those ions can or cannot bind to each other. Great! You can probably explain things about sets, intersections, etc. to people who are used to such operations, but how is that chemistry? How does it control our anticipations about compounds (beyond a restricted range of interactions)? Analytical chemistry is not taught in high school (it wasn’t in ours, anyway).
Hmm, I’m not sure. From your initial comment it sounded like there was more actual chemistry going on, but now I’m wondering if the amount of actual chemistry really is less than there was when I was in school.
I am not sure either; it was some time ago, and there was some extracurricular activity going on, but… yes, I think so.
Do people who passionately argue for buying a home instead of renting violate the Efficient Market Hypothesis? If buying is so much better, why don’t we see a lot of people making money from landlording, by buying on a mortgage and renting the place out? Actually, I would think that if you live somewhere where you do see that, buying may be a good idea. If you personally rent from a landlord and you have a good idea that you and another family just paid for the landlord’s holiday cruise, you may want to stop doing that. But if you live somewhere where renting from co-ops, councils, etc. is more frequent, and you see nobody profiting off landlording, this suggests buying and re-renting is not really an option, and you probably don’t win much by buying.
Let’s call it the Diamond Houses Fallacy. A Diamond House is one that is indestructible and whose value can only go up. Everything paid on the mortgage for a Diamond House beyond the interest goes towards equity, which is better than throwing it away in rent. However, real houses depreciate and need renovation, and you must also price in a certain risk of the neighborhood losing value and turning into something of a slum. If your country allows depreciating a house over 25 or 40 years for tax purposes, it may mean a 25- or 40-year mortgage would buy nothing, in the accounting sense: of course the house is worth something, but you may have spent more on renovation.
But the real point here is that if you trust in efficient markets, you need not speculate on whether rent covers renovation or is eaten up by depreciation. All you need to do is look around and see whether people are making money by arbitraging it or not.
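To make the fallacy concrete, here is a minimal sketch of the annual arithmetic; every figure is a made-up placeholder, not market data:

```python
# Toy annual rent-vs-buy comparison. All figures are illustrative assumptions.
house_price = 200_000
annual_rent = 12_000

mortgage_rate = 0.05        # assumed interest on the outstanding principal
depreciation_years = 40     # straight-line write-off, as in the tax example above
annual_renovation = 2_000   # assumed upkeep needed to offset wear

interest = house_price * mortgage_rate           # 10,000
depreciation = house_price / depreciation_years  #  5,000
owning_cost = interest + depreciation + annual_renovation

print(f"Owning: ~${owning_cost:,.0f}/yr vs renting: ${annual_rent:,}/yr")
# With these numbers owning costs more per year; the "Diamond House"
# intuition only works if depreciation and renovation are ignored.
```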
There are transaction costs involved in landlording.
People take better care of a house if they own the house.
We do.
“Efficient markets” are an idealized abstraction. Putting trust in such seems to be… not wise :-)
Sorry, I meant it in the sense that in some places you see it and in some places you don’t, and this should inform your choice.
The efficient-markets hypothesis is like every hypothesis: it has various probabilities of working under various conditions. Usually we need a fluid market, something that is not like De Beers diamonds but more like a market with a million homes in a city and hundreds of thousands of speculating homeowners; a market where it is easy to be a vendor and thus collusion and rigging are unlikely, and where the barrier to entry is low. Housing is really close to such an ideal market: you won’t start a mutual fund tomorrow, nor open a Michelin-star restaurant, but you could become a landlord really easily. Suppose you don’t want to profit in such a market, just to prevent others from profiting off you, i.e. to save that kind of money, which is the renter’s dilemma. Then the algorithm is really simple: 1. is anyone profiting off me? 2. are there a lot of people profiting off people like me?
Renters who rent from private landlords can make an efficient decision just by inviting the guy over for a beer or five. You have no job? You rent out ten apartments and basically do that for a living? You have a nice car and holidays in nice places? Well, fuck you, I am off to buy a house :)
I think the question of whether someone may be leeching off you need not be a theoretical one. This is what I mean by efficient markets: if you don’t see a leech, yet nevertheless see an improvement opportunity, be aware of hidden costs.
Well, as Marx could have told you, you need the initial capital to start :-D
But yes, it’s a common way of earning money. Typically you buy a house in dire need of TLC (because it’s cheap), fix it yourself, and then rent it. There are a lot of people who are landlords and rent is their primary source of income.
It’s not all roses, of course—it’s just a business and like any business it has its own failure modes.
But as Mises would tell you, capital is not necessary, as you can borrow money and still pocket the entrepreneurial profit and the managerial salary, having to pay only interest. Now whom to believe :-D
I am joking, of course. Capital being difficult or expensive to acquire is one of the primary problems of real-world market economies. In an ideal market, simply presenting a good business plan, even without any sort of collateral, would get other people’s money thrown at it, which requires perfect trust and perfect trustworthiness. Markets where only people who already have capital can engage in entrepreneurship end up with nasty outcomes: not only high inequality but also crappy entrepreneurs the customers must put up with.
Well, Silicon Valley functions more or less like this. Hedge funds can look like this, too.
That’s basically a counterfactual—businesses have been able to borrow money for a very long time in human history :-)
In reality the situation is a mix: it’s not quite true that “A bank will only lend money to someone who can prove he doesn’t need it”, but it’s also not quite true that a solid business plan is all a bank needs to lend you money. Banks are sufficiently rational to want to have positive-expected-returns loans—they will lend money if they think the loan will be good.
I also suspect that entrepreneurship is noticeably easier in the US than in Europe.
The explanation for this market inefficiency, as for so many others, is the government. There are massive tax benefits to owner-occupied housing, like the non-taxation of imputed rent. This means that the value of a house to a homeowner exceeds its value to a landlord. This, plus the liquidity constraints of the marginal homebuyer, means that the marginal house is worth more to the marginal homebuyer than he is able to pay for it.
As for whether people are arbitraging this or not? Yes, millions of middle-class homebuyers are arb’ing it, saving themselves a huge amount of money.
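A minimal sketch of the imputed-rent point (the numbers are arbitrary assumptions): a landlord pays income tax on rent received, while an owner-occupier effectively pays rent to themselves tax-free, so the same house yields more after tax to its occupier:

```python
# Why non-taxation of imputed rent favors owner-occupation.
# Both numbers below are arbitrary illustrative assumptions.
annual_rent = 12_000
income_tax_rate = 0.30

landlord_after_tax = annual_rent * (1 - income_tax_rate)  # 8,400.0
owner_occupier_value = annual_rent                        # 12,000, untaxed

print(landlord_after_tax, owner_occupier_value)
# The same house is worth more (after tax) to its occupier than to a
# landlord, which is one reason the marginal house ends up owner-occupied.
```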
I thought it was just US-specific stuff; then I realized that the non-taxation of imputed rent applies everywhere the landlord is supposed to pay income tax on the rent: except where the rent is cash under the table, except where the landlord is an offshore company with tax shenanigans, and except where you rent from a non-profit co-op (this third case is actually ours).
But this gives a useful heuristic: if anyone pays tax on your rent (I think our co-op doesn’t, but I need to verify that), that is an argument for ownership.
I have also noticed that a weird side effect of taxation is that it encourages barter and DIY. This is not very efficient. It is value in general, and not cash movements, that ought to be taxed, but of course that is both hard and useless, as governments need cash to finance services. I wonder what non-market-distorting tax ideas exist.
I think our co-op, or non-profit organizations in general, if they are tax-free or tax-reduced, are good ways to deal with these distorting effects. If we ever decide to buy a property, we will probably look into a credit-union mortgage rather than a for-profit bank, not necessarily because profits are evil but because (probably; I need to find out) non-profits are untaxed or taxed less.
Let’s say we have the capability to create living creatures, and some bored scientist makes one that is relatively intelligent (enough to be considered a person in a meaningful, human sense), capable of language, requiring little sustenance, capable of reproduction, and completely and utterly happy except under the most terrible circumstances. Would the utilitarian view of the situation be to convert all usable resources into habitats for these critters? Would the moral thing be to give the world over to them because they’re better at not making each other’s lives terrible / are happier for the same amount of resources?
I’m pretty new around here so please forgive the sheer newbishness of my question.
What exactly do you mean by “utterly happy”? What’s the empirical test to measure whether those creatures are “utterly happy”?
The more I interact with people at the high end of that spectrum the less I think that humans optimize towards happiness.
Child-like joyousness was what I was envisioning. The short story Paprika (http://escapepod.org/2014/05/30/ep448-paprika/) ends with all humans long dead; the only remaining human creations are talking squirrels that, at least according to the story’s brief description, live completely carefree and joyous lives. That doesn’t seem feasible to me for living creatures with resource needs, but what if they lived in silico rather than in meat? That would still have resource needs, but possibly far fewer.
I think I am unconvinced that the way humans… “work”, for lack of a better term, is the optimal one. As you said, people at the high end of the intelligence spectrum don’t seem to be happy quite as often (not to suggest that I think people should be dumber; being both intelligent and ecstatically happy most of the time, without the use of life-shortening drugs, would be nice). I was curious about what has been discussed on the subject already, but I guess I chose the wrong way to ask.
I’m not speaking about the high end of the intelligence spectrum but about “fun”, and how certain people feel like they experience too much of it.
The idea of being ecstatically happy most of the time might sound good in theory but I don’t believe that’s what people actually prefer.
See discussions of utility monsters. Don’t assume that many people here support pure utilitarianism.
Thanks for the link, and sorry for the presumption. The question occurred to me and this was the first place I thought to ask.
Some possible reasons for saying no:
1) We aren’t as good at being happy, but we might be better at improving the scope for future happiness (e.g., maybe space colonization some day). This one becomes harder to justify if these critters are actually our intellectual superiors.
2) Other things matter (to us) besides happiness, and these critters’ lives don’t provide those things as well as ours do. This answer may be irrelevant if you’re only interested in utilitarian arguments.
My moral values are approximately utilitarian, but I would say that what I care about when thinking in utilitarian terms isn’t happiness as such but those things of which happiness is a measure. (This doesn’t require me not to care about happiness; happiness is in fact one of the things that makes us happy. If I were suddenly made much less happy about everything then that fact itself would be a source of unhappiness for me from then on.)
It may be illuminating in this connection to read “Not for the sake of pleasure alone”.
We’re heading into the last few hours to make predictions on the outcome of the latest round of the Amanda Knox/Raffaele Sollecito case. I’ve made mine. See also here (and, for that matter, the post and other comments).
The main sources of uncertainty are: the general unpredictability of Italian Supreme Court decisions (as demonstrated by the nigh-inexplicable—at least on naïve theories of how the system should work—overturning of the acquittal by the Supreme Court two years ago); the fact that the panel hearing the case tomorrow won’t be the same one that heard it the first time; the fact that a juror in last year’s retrial has recently expressed doubts about the case in the Italian press; and the fact that Knox and Sollecito do, in fact, have a pretty good case (even if it was stronger at the previous levels of trial).
Of course, these factors aren’t independent by any means, and I think they are dominated by the inertia of the previous verdicts. But, I don’t dare put my confidence at more than about 60%.
Hey, the Supreme Court annulled the conviction. Any thoughts? I’m sure this has come as a (pleasant) surprise to you.
I guess we’ll know better when they publish their reasoning in 90 days.
I’m planning to get a BS and then an MS in computer science. To get the BS I have to take a certain number of course units, much more than is actually needed to fulfill the BS’s requirements, and I’m not entirely sure what to fill those extra units with.
Which of these is more impressive?
1) A 2nd major in economics.
2) A 2nd major in management engineering.
3) Two minors in any 2 of: statistics, economics, management engineering, mathematics.
Major in Economics
Minor in Maths/Stats
Minor in Econ
Major in Management Engineering
Minor in Management Engineering
I don’t know exactly what management engineering is, but it sounds like a made-up subject.
Thanks!
Management engineering is things like optimization, stochastic modelling, and organization theory. I agree it might sound like a way of weaseling out of taking difficult courses.
The main question is who you are trying to impress—what are your goals beyond getting an MS in computer science? or stated otherwise, why MS in computer science?
My perspective from academia: a second major signifies a certain level of dedication to a subject—which can work for you or against you depending on the next step you plan to take. Discipline prejudices are very real, and in some cases exist for good reasons. For example, management engineering can be very beneficial if you are heading toward a technical management role (in certain kinds of companies) but can work against you if this is not your intent.
And minors: I pretty much ignore them, looking instead at the portfolio of classes a student has taken and, much more importantly, why. It matters far more to me that a student had some coherent plan for the courses they took than whether those courses conform to a university’s designation of “minor”. So: the more coherent the better (but no need to apologise for optional classes that are obviously just for fun or personal interest).
Thanks for the feedback! I’m intending to go into industry, not academia, but this is still helpful.
Question on infinities
If the universe is finite, then I am stuck with some arbitrary number of elementary particles. I don’t like the arbitrariness of it. So I think: if the universe were infinite, it wouldn’t have this problem. But then I remember there are countable and uncountable infinities. If I remember correctly, you can take the power set of an infinite set and get a set with larger cardinality. So will I be stuck at some arbitrary cardinality? Is the number of cardinalities countable? If so, could an infinite universe of countably infinite cardinality solve my arbitrariness problem?
edit: carnality → cardinality (thanks g_peppers; people searching for “infinite carnality” would be disappointed by this post)
Since elementary particles can come and go, what’s really conserved is some arbitrary amount of energy. Infinities won’t save you from arbitrariness here, because energy is locally conserved too, and our energy density is (thank goodness) definitely not infinite.
You’re right that there is no greatest cardinal number. The number of ordinals is greater than any ordinal; I’m not sure whether that’s true for cardinal numbers.
You can sorta get around the arbitrariness by postulating the mathematical universe hypothesis: that all mathematical objects are real.
“Discrete Euclidean space” Z^n would be countably infinite, and the usual continuous Euclidean space R^n would have the cardinality of the continuum, but I’m not sure what a world whose space is more infinite than the continuum would look like.
It is also true that the number of cardinals is greater than any cardinal, leading to Cantor’s Paradox.
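For reference, the standard results being invoked here, stated compactly (this is textbook set theory, nothing specific to this thread):

```latex
% Cantor's theorem: every set is strictly smaller than its power set,
% so there is no largest cardinal.
\forall S : \; |S| < |\mathcal{P}(S)|

% The infinite cardinals (alephs) are indexed by the ordinals and form a
% proper class; assuming they formed a set yields Cantor's paradox.
\aleph_0 < \aleph_1 < \aleph_2 < \cdots < \aleph_\omega < \aleph_{\omega+1} < \cdots
```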
Eh, not really. You’re still bounded by the finite cosmological horizon. Unless of course you have access to super-luminal travel.
Exactly.
It depends. If you use “subsets” as a generative ontological procedure, you would still be stuck by the finite time of the operation. If you consider “subset” instead as a conceptual relation, not some concrete process, you’re not stuck in any cardinal.
No. Once you postulate a countable cardinal, you get for free ordinals like “omega plus one”, “omega plus two”, etc. And since the uncountable cardinals are indexed by ordinals, you also get for free more than omega uncountable cardinals.
An inaccessible cardinal is the next size for which you need a new axiom. Indeed, “inaccessible” is exactly the quantity of cardinals generated by the process above.
Thanks to the accelerating expansion of the universe, the reachable universe (the part of the universe that intersects our future light cone) is definitely finite.
Total blunder: I created this thread spanning two weeks instead of just one. Would you prefer me to create another one with the dates of the present week?
This is probably a good time to say thank you for bothering/remembering to create these threads at all. :)
I don’t think its comment volume is so huge as to require that. But let’s ask The People:
[pollid:844]
I’ve decided on a compromise and opened a new thread starting tomorrow (so that people who have already commented here today are not split between threads).
Is the free-rider problem a real problem? Just in case anybody is interested in the topic… here’s my latest blog entry… In Which Our Anarchist Hero Jeffrey Tucker Proves The Point Of Taxation.