Open thread, Jan. 12 - Jan. 18, 2015
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Image recognition, courtesy of the deep learning revolution & Moore’s Law for GPUs, seems near reaching human parity. The latest paper is “Deep Image: Scaling up Image Recognition”, Wu et al 2015 (Baidu):
For another comparison, Table 3 on pg. 9 shows past performance. In 2012, the best performer reached 16.42%; 2013 knocked it down to 11.74%, and 2014 to 6.66% or 5.98%, depending on how much of a stickler you want to be, leaving ~0.8% to go.
EDIT: Google may have already beaten 5.98% with a 5.5% (and thus halved the remaining difference to 0.4%), according to a commenter on HN, “smhx”:
On the other hand… Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images
From the abstract:
I’m not sure what those or earlier results mean, practically speaking. And the increased use of data augmentation may mean that the newer neural networks don’t show that behavior, pace those papers showing it’s useful to add the adversarial examples to the training sets.
It seems like the workaround for that is to fuzz the images slightly before feeding them to the neural net?
‘Fuzzing’ and other forms of modification (I think the general term is ‘data augmentation’, and there can be quite a few different ways to modify images to increase your sample size—the paper I discuss in the grandparent spends two pages or so listing all the methods it uses) aren’t a fix.
In this case, they say they are using AlexNet which already does some data augmentation (pg5-6).
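For concreteness, here is a minimal sketch (NumPy, not taken from any of the papers discussed) of the kind of modifications “data augmentation” refers to: random crops, flips, and additive noise, i.e. the “fuzzing” from the parent comment:

```python
# Minimal sketch of common image augmentations: random crop, horizontal
# flip, and additive Gaussian noise ("fuzzing"). Not from the papers
# discussed; just an illustration of the kind of transforms meant.
import numpy as np

def augment(image, rng=None):
    """image: HxWxC uint8 array. Returns a randomly perturbed copy."""
    rng = rng or np.random.default_rng()
    h, w, _ = image.shape
    ch, cw = int(h * 0.9), int(w * 0.9)          # crop to 90% of each side
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    out = image[top:top + ch, left:left + cw].astype(np.float32)
    if rng.random() < 0.5:                        # random horizontal flip
        out = out[:, ::-1]
    out = out + rng.normal(0, 5.0, out.shape)     # slight "fuzzing"
    return np.clip(out, 0, 255).astype(np.uint8)
```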
Further, if you treat the adversarial examples as another data augmentation trick and train the networks with the old examples, you can still generate more adversarial examples.
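Why retraining on old adversarial examples doesn’t close the hole: the standard constructions only need gradients of whatever the current model is. Here is a minimal sketch of the fast-gradient-sign recipe from the roughly contemporaneous literature (Goodfellow et al. 2014), shown on a toy logistic classifier rather than a deep network, so it is an illustration of the idea and not either paper’s actual method:

```python
# Fast-gradient-sign construction of an adversarial input, sketched on a
# toy logistic classifier sigmoid(w.x + b). The same recipe applies to any
# differentiable model: push the input a small step in the direction that
# increases the loss for the true label.
import numpy as np

def adversarial_example(x, y, w, b, eps=0.05):
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # predicted P(class = 1)
    grad_x = (p - y) * w                      # d(cross-entropy loss)/dx
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

rng = np.random.default_rng(0)
w, b = rng.normal(size=784), 0.0              # stand-in weights (e.g. for a 28x28 "image")
x = rng.random(784)                           # stand-in input
x_adv = adversarial_example(x, y=1.0, w=w, b=b)   # nudged toward misclassification
```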
Huh. That’s surprising. So what are humans doing differently? Are we doing anything differently? Should we wonder if someone given total knowledge of my optical processing could show me a picture that I was convinced was a lion even though it was essentially random?
Those rather are the questions, aren’t they? My thought when the original paper showed up on HN was that we can’t do anything remotely similar to constructing adversarial examples for a human visual cortex, and we already know of a lot of visual illusions (I’m particularly thinking of the Magic Eye autostereograms)… “Perhaps there are thoughts we cannot think”.
Hard to see how we could test it without solving AI, though.
I don’t think we’d need to solve AI to test this. If we could get a detailed enough understanding of how the visual cortex functions it might be doable. Alternatively, we could try it on a very basic uploaded mouse or similar creature. On the other hand, if we can upload mice then we’re pretty close to uploading people, and if we can upload people we’ve got AI.
I’m not sure if NNs already do this, but perhaps using augmentation on the runtime input might help? Similar to how humans can look at things in different lights or at different angles if needed.
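A minimal sketch of what that runtime augmentation could look like: average the model’s predictions over several perturbed copies of the input instead of classifying the raw input once. `model_predict` here is a stand-in for whatever classifier is being defended, and whether this actually blunts adversarial examples is the open question above, not a claim:

```python
# Test-time augmentation: classify several noisy copies of the input and
# average the predicted class probabilities. `model_predict` is a stand-in
# for an arbitrary classifier returning a probability vector.
import numpy as np

def predict_with_runtime_augmentation(model_predict, image, n=16, noise=5.0, rng=None):
    rng = rng or np.random.default_rng()
    copies = [np.clip(image + rng.normal(0, noise, image.shape), 0, 255)
              for _ in range(n)]
    probs = np.stack([model_predict(c) for c in copies])   # shape (n, num_classes)
    return probs.mean(axis=0)                               # averaged prediction
```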
To update: the latest version of the Baidu paper now claims to have gone from the 5.98% above to 4.58%.
EDIT: on 2 June, a notification (Reddit discussion) was posted; apparently the Baidu team made far more than the usual number of submissions to test how their neural network was performing on the held-out ImageNet sample. This is problematic because it means that some amount of their performance gain is probably due to overfitting (tweak a setting, submit, see if performance improves, repeat). The Google team is not accused of doing this, so probably the true state-of-the-art error rate is somewhere between the 3rd Baidu version and the last Google rate.
That is shocking and somewhat disturbing.
Human performance on image-recognition surpassed by MSR? “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification”, He et al 2015 (Reddit; emphasis added):
(Surprised it wasn’t a Baidu team who won.) I suppose now we’ll need even harder problem sets for deep learning… Maybe video? Doesn’t seem like a lot of work on that yet compared to static image recognition.
The record has apparently been broken again: “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift” (HN, Reddit), Ioffe & Szegedy 2015:
On the human-level accuracy rate:
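A minimal sketch, not from the paper, of what the batch normalization transform in the title does: normalize each feature by its mini-batch mean and variance, then apply a learned scale and shift.

```python
# Batch normalization, per feature, over a mini-batch of activations.
# gamma (scale) and beta (shift) are learned parameters.
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """x: (batch_size, num_features) activations."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```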
People often talk about clusters of ideas. A common context here is the various different contrarian clusters. But ideas can often cluster for historical reasons that don’t have a coherent reason to connect. That’s well known. What may be less well known is that there are examples where one idea in a cluster can be discredited and as a result other, correct ideas in the same cluster can fall into disrepute. I recently encountered an example while reading Cobb and Goldwhite’s “Creations of Fire” which is a history of chemistry.
In the early 1800s Berthollet had hypothesized (with a fair bit of experimental evidence) that one could make the same compound with different ratios of substances. He also hypothesized what would later come to be known as the law of mass action. When the first claim was shown to be wrong, the law of mass action was also rejected, and would not become accepted again for about 50 years.
The upshot seems to be that we should be careful not to reject ideas just because they come from the same source as other ideas to which we’ve assigned low probabilities.
At one point there was a significant amount of discussion regarding Modafinil—this seems to have died down in the past year or so. I’m curious whether any significant updating has occurred since then (based either on research or experience).
As far as I know, no important news or research has come out about modafinil. Things have been quiet lately, even on the black-markets.
I’m taking a seminar course on a computational approach to emotions: there’s a very interesting selection of papers linked from that course page that I figure people on LW might be interested in.
The professor’s previous similar course covered consciousness, and also had a selection of very interesting papers.
Random thought: revealing something personal about yourself is a very powerful “dark art”. People will feel strong pressure to reciprocate.
I confess that I’ve sort of used it before. I.e., if I want to get information out of someone, I might reveal something personal about myself (I’m comfortable talking about a lot of things, so oftentimes it really isn’t even that personal).
I can’t recall ever having had bad intentions though. I recall using it to get a friend to open up about something that I think would be beneficial for them, but that is difficult for them to do.
The real trick to both use and deflect this is to have some piece of information about yourself that sounds very personal but that you would be fine sharing with everyone. I use autism disclosure this way, not only for this purpose, but also so that when people who I have met try to think of examples of autism, they don’t just think of fictional evidence.
Also, this shows up in HPMoR’s chapter 7, titled Reciprocation.
It’s not just pressure to reciprocate—revealing something very personal is an extremely strong signal of honesty. (Edit: And also confidence)
And while I didn’t do this intentionally per se, I do remember the first conversation I had with my girlfriend involved me telling her about the time I failed out of my program in university. That worked out pretty well, I’d say.
I think that depends on whether the personal detail in question helps or hinders bonding.
There are many personal things people (strangers mostly) could tell me about themselves that would put me off rather than get me to reciprocate, and probably I’ve awakened such reactions in other people in the past as well.
Confess something embarrassing or awkward enough, and wave your success goodbye—just when you thought you were improving social skills by consciously applying social strategies...
Tip: an unflattering but ordinary and relatable experience is best for this. Internet meme images and funny pics are full of those.
Whenever you discover a social “dark art”, look for a countermeasure.
Of course, in most cases this isn’t a “dark art” at all: it can just be a signal that you’re okay talking about X or moving the conversation in the direction of X, without explicitly requesting to talk about X, because an explicit request would require an explicit refusal in the case where they truly didn’t want to talk about X. Whereas if you use the ambiguous signal, you’re giving them the option of an ambiguous refusal (often by reciprocating with a superficially equal but actually trivial “yeah me too” disclosure). I think this holds for the case of “difficult” issues between friends, as well as things like flirting (ambiguous introduction of a sexual topic), and moving to informal topics from a formal context.
Languages create selection effects that influence our perceptions of other nations.
Most notably, the prevalence of English as a second language means that more people outside of the Anglo-sphere have access to a wide range of Anglo-sphere media and conversation partners, whereas countries that mostly speak English will have a more filtered selection of international sources. For example, there are more people in Slovakia who can read major US newspapers than people in the US who can read major Slovakian newspapers.
A second class of effects occurs on the scale of individuals. Second-language use may stem from a direct ancestral link, as in the case of immigration. Second-language use is sometimes related to higher levels of education. Finally, individual interest can influence the choice of acquired second languages.
I’m curious how this model seems to people living in non-English countries.
As a monolingual, it does seem clear that I’m getting a very filtered sampling of the residents of foreign countries, even relative to all the normal filtering that happens in communication. I frequently catch myself thinking “How can country X be so dysfunctional? All the people I’ve ever met from X are highly-skilled immigrants and people who choose to hang out on the same English-language science and philosophy forums as me!”. The dysfunction of an English-speaking country never puzzles me, since I’ve met far too many of the residents :)
Interesting. So the educational filter should make people in Slovakia appear smarter to Americans (if they notice this country at all) simply because the worst stupidity won’t get translated, and the lowest-class people will not travel to the USA. You will not be regularly exposed to things like this.
On the other hand, this effect is probably much smaller than the noise created by random American journalists or bloggers writing made-up stuff about Slovakia, or depictions of “Slovakia” in movies (example here, or shortly here). If for whatever reason a popular writer decided that Slovakia is e.g. inhabited by vampires, there is pretty much nothing we could do about it.
Maybe the right question to ask yourself when you meet a smart immigrant is: “Why did they have to leave their country?” Probably not polite to ask them, but you should assume there was a reason. And if the answer seems to be “poverty”, well, poverty is usually caused by something, so unless the country is just one huge empty desert, there are other things wrong there, too.
It’s also clearly not caused by laziness in this case.
Since no one answered on the stupid questions thread:
Why did LessWrong split off from Overcoming Bias?
Does anyone know?
Avoiding trivial inconveniences that effectively discourage wider participation?
If that effect came as a surprise, it couldn’t have been the reason for the split.
My impression from the outset was that Eliezer and Robin were posting very different sorts of stuff, not having much to do with each other. It was two blogs shoehorned into one. The question for me is not why did they split, but why were they ever together?
Hm. That sounds like something of an ‘insider bias’ where you see differences that are less obvious to the casual reader.
So what was the thought process? What led them to go from one arrangement to the other?
I don’t think either of them ever said. It was a bit like when a band splits up because of “artistic differences”. :)
Really? Was there “juicy drama”? Did Robin ask Eliezer to leave?
That being the case, why was it decided to use the Reddit codebase? Why the Main and the Discussion? If it was just that Robin and Eliezer had differing thrusts, why didn’t Eliezer just start his own personal blog?
I am as much in the dark as you about these details. I’ve been reading since the beginning, but I’ve never been involved in the internal affairs (and probably wouldn’t be commenting if I had been).
On the question of a discussion forum vs. a blog, Eliezer’s intentions for LW were always for it to be not only a method of “raising the sanity waterline”, but also a method of recruiting people to the cause of rationality, and specifically people capable of working on AGI. Hence a format more suited for carrying on discussions than a blog with comments on articles entirely written by the owner and other people the owner has granted special dispensation to. Perhaps the fact that OB was and still is precisely the latter made it less suitable for what Eliezer wanted to do with it.
How did you find OB?
I don’t remember. I wasn’t previously aware of Eliezer or Robin.
This tells you nothing about Richard’s history, but here’s one datapoint from another old-timer: I think I first encountered OB via Tyler Cowen’s Marginal Revolution blog, and in particular this post from 2007-08-25. (I remember being struck by Tyler’s suggestion that OB, despite its name, was actually exemplifying bias in a good way.) Weak evidence in favour of this being right is that my first comment seems to have been in 2007-09 and that this one in 2007-10 says I’ve been reading “for a month or thereabouts”.
[EDITED to add: yes, different username.]
I’ve recently been diagnosed with ADHD (predominantly inattentive). Does anyone here share this, and if so, what resources or books on the topic would you recommend?
Hey...
I’m new here. Hi.
I was recently re-reading the original blogs (e-reader form and all that), and noticed a comment by Eliezer, something to the effect of “Someone should really write ‘The simple mathematics of everything’”.
I would like to write that thing.
I’m currently starting my PhD in mathematics, with several relevant side interests (physics, computing, evolutionary biology, storytelling), and the intention of teaching/lecturing one day.
Now… If someone’s already got this project sorted out (it has been a few years), great… however I notice that the wiki originally started for it is looking a little sad, (diffusion of responsibility perhaps).
So… if the project HAS NOT been sorted out yet, then I’d be interested in taking a crack at it: It’ll be good writing/teaching practice for me, and give me an excuse to read up on the subjects I HAVEN’T got yet, and it’ll hopefully end up being a useful resource for other people by the time I’m finished (and hopefully even when I’m under way)
I was hoping I could get a few questions answered while I’m here: 1) Has “the simple mathematics of everything” already been taken care of? If so, where? 2) Does anyone know what wiki/blog formats might be useful (and free maybe?) and ABLE TO SUPPORT EQUATIONS. 3) Any other comments/advice/whatever?
Cheers, Babblefish.
I think I have noticed a frequent failure pattern when people try writing about complicated stuff. It goes like this:
Article #1: in which I describe the wide range of stuff I plan to handle in this series of articles
Article #2: introduction
Article #3: even more introduction, since the introduction from the previous article didn’t seem enough
Article #4: reaction to some comments in the previous articles
Article #5: explaining some misunderstandings in comments in the previous articles
Article #6 …I am already burned out, so this never gets written
Instead, this is what seems like a successful pattern:
Article #1: if this is the only article I will write, what part of the stuff could I explain
Article #2: if this is the only article I will write for the audience of article #1, what else could I explain
Article #3: if this is the only article I will write for the audience of articles #1+2, what else could I explain...
Seems to me that Eliezer followed the latter pattern when writing Sequences. There is no part saying “this will make sense to you only after you read the following chapters I haven’t written yet”. But there are parts heavily linking the previous articles, when they advance the concepts already explained. The outline can be posted after the articles were written, like this.
I understand the temptation of posting the outline first, but that’s a huge promise you shouldn’t make unless you are really confident you can fulfill it. Before deciding that you are, read about the planning fallacy, etc. On the other hand, with incremental writing you have complete freedom, and you can also stop at any moment without regrets. Even if you know you are going to write about A, B, C, and you feel pretty certain you can do it, I would still recommend starting with A1 instead of the introduction.
I’m not sure that Eliezer outlined the posts in order; he did mention at some point wanting to explain X, but realizing that in order to explain X he needed to explain W, and in order to explain W...
Agreed. One of the ways I’ve worked around this is to not post the start of a sequence until it’s mostly done (I have the second post to this sequence fully finished, and the third post ~2/3rds finished). I’m not sure I’d recommend it; if you find the shame of leaving something unfinished motivating, it’s probably better to post the early stuff early. (I let that particular sequence sit for months without editing it.)
Good luck. ‘The simple mathematics of everything’ is not an easy task. Maybe not even doable. But it’s a noble goal.
Since you asked, my advice is to not work on ill-posed problems. More concretely, ask your advisor for advice on developing a good nose for problems to work on. Where are you starting?
I started writing one of those back in 2005 when my MMath finished. After writing over 1000 pages of loosely-packed LaTeX I discovered ProofWiki which had only just started up. Been writing for it ever since. But I still have that original LaTeX and can at a pinch generate the PDF again (although it’s seriously iffy in places).
In the meantime if you want to join ProofWiki (google it) then if you can handle the iron-rigid rules for contribution, you’d be more than welcome.
When you say “started writing one of those”, do you mean a blog in general, or a “simple mathematics of everything” in particular? 1000 pages is a pretty decent contribution. What happened to all those pages?
I’ve encountered proof wiki before- its certainly a useful resource, but perhaps not precisely what I am working towards.
Welcome to this website! It’s common for new users to introduce themselves on the Welcome thread.
Unfortunately, while there’s already a wiki, it only has 3 pages.
Welcome Thread- thanks, will go visit.
And yes, I did find that wiki, noticed it was sad and decided… that while the wiki format is nice, I’m not sure if it’s precisely what is needed here.
(request for guidance from software engineers)
I’m a recent grad who’s spent the last six years formally studying mathematics and informally learning programming. I have experience writing code for school projects and I did a brief but very successful math-related internship that involved coding. I was a high-performing student in mathematics and I always thought I was good at coding too, especially back in high school when I did programming contests and impressive-for-my-age personal projects.
A couple months ago I decided to look for a full-time programming job and got hired fairly soon, but since then it’s been a disaster. I’m at a fast-moving startup where I need to learn a whole constellation of codebase components, languages, technologies, and third-party libraries/frameworks but I’m given no dedicated time to do so. I was immediately assigned a list of bugs to fix and without context and understanding of the relevant background knowledge I frantically debug/google/ask for help until somehow I discover the subtle cause of the bug. I’ve already come under performance pressure three times, and things aren’t necessarily looking up. Other new hires from various backgrounds seem to be doing just fine. All this despite my being a good coder and a smart person even by LW standards. I did well in the job interview.
When I was studying and working in academia, I found that the best way to be productive at something (say, graph theory research) is to gradually transition from learning background to producing output. Thoroughly learning background in an area is an investment with great returns since it gives me context and a “top-down” view that allows me to quickly answer questions, place new knowledge into an already dense semantic web, and easily gauge the difficulty of a task. I could attempt to go into more details but the core is: Based on my experience, “hitting the ground running” by prioritizing quick output and only learning background knowledge as necessary per task is inefficient and I’m bad at it.
At the moment my only strong technology skills are the understanding of the syntax and semantics of a couple of programming languages.
Am I at the wrong company? Am I in the wrong profession—should I go back to academia, spend four years getting a PhD, and work in more mathy positions? Thanks!
I would say it’s a combination of being at the wrong company, and our education system being inadequate to the task.
There are many skills that are required in order to write complex software. You need to know how to organize your code in a maintainable and comprehensible way (Design Patterns, build/package systems, abstraction layers, even simple stuff like UML). You need to know how to find bugs in your own code as well as in code written by other people (using debuggers, reading stack traces, writing logs, applying basic deductive reasoning). When you get stuck, you need to know how to get help efficiently (reading documentation, understanding the jargon, knowing exactly which questions to ask and whom to ask them).
None of these skills are considered “sexy”; and, in fact, most scientists and mathematicians that I’ve worked with in the past don’t even recognize them as skills at all. Their attitude usually is, “don’t bother me with your bureaucratic design pattern bullshit, I wrote a 3000-line method that calculates an MDS plot and it works, what more do you want”. But the problem is that, without such skills, you will never be able to create anything more than a quick one-off script that performs one specific calculation and then quits.
My advice would be as follows.
Firstly, figure out what you actually want to do. Do you want to invent algorithms for other people to implement, or do you want to write software yourself? There’s nothing wrong with either choice, but you need to consciously make the choice to begin with.
Secondly, if you do want to learn software engineering, find some people at your company who are already experienced software engineers. Ask them for a list of books or online tutorials to read (most likely, they’ll recommend the Design Patterns book, so you might as well start with that). After reading (or, let’s be realistic here, skimming) the books, ask them to sit down with you for a couple of hours in order to review your code—even, and especially, the code that actually works. Listen to their input, and refactor your code according to their recommendations. When you have a bug, make sure you’ve tried everything you could think of, and then ask them to sit down with you and walk you through the steps of diagnosing it.
Thirdly, if there are no such people at your current company, or if they flat-out refuse to help you… then find a better company :-(
This is very much from the outside, but how sure are you that the other new hires are doing just fine? Could they (or some of them) be struggling like you are?
Thanks for the response. It’s hard to say exactly, but I can see their work logs and I hear them getting congratulated from time to time.
When I started as a programmer I joined a graduate program at a big company. I was also fortunate enough that one of the consultants working there was able to act as an informal mentor in how things are done in “real world” programming (including dealing with all those technologies, frameworks, etc). You might find it easier to get up to speed with a bigger, slower-moving company with a more long-term view than a frantic startup.
One framing that might be useful: Part of being a professional software engineer is learning new things constantly, whether it’s new languages, new frameworks, new codebases, etc. But this itself is a skill that can be learned and practiced. In addition to having the attitude of relentless resourcefulness, there are many small tricks that can be picked up: for example, grepping for a bit of text from the UI to quickly find the code that defines it, using Google’s site: operator to make it easier to do targeted documentation searches, using your language’s debugger to solve bugs, having an editor with lots of plugins installed that make you more efficient, etc. This sentence: “I was immediately assigned a list of bugs to fix and without context and understanding of the relevant background knowledge I frantically debug/google/ask for help until somehow I discover the subtle cause of the bug” sounds like a pretty good description of what I found it was like to work as a software engineer at a startup, minus the word “frantic”. I spent a lot of time learning just enough about the code I was working with to solve the problem I needed to, searching on google for just enough documentation to accomplish what I needed to accomplish, etc. Even the best software developers are doing keyword searches on Google and their codebase constantly as part of their development flow; if you find yourself doing this you should not consider it indicative of a problem.
Based on what you describe as your programming background, it sounds like you don’t have much experience with this modality of software development. Probably lots of other recent hires have experience as interns or teaching themselves stuff for independent projects, which helped them learn the skill of just-in-time knowledge acquisition. I might try working at a less demanding company just so you could feel less stressed out and give yourself the opportunity to gradually ramp up to this development style. If you work at a bigger, slower-moving company that nevertheless is using fairly up-to-date technologies and you spend 15 minutes every morning working to improve your tools & efficiency, my guess is you’ll be in a much better place after a year or so.
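A toy version of the “grep for a bit of UI text” trick mentioned above, just to show how little it requires. In practice you would simply run `grep -rn "Forgot your password?" .` or use your editor’s project-wide search; the string and file extensions below are made up for illustration:

```python
# Walk a codebase and print every line containing a string seen in the UI,
# to locate the code that produces it. A toy stand-in for `grep -rn`.
import os

def find_in_codebase(root, needle, exts=(".py", ".js", ".html")):
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                with open(path, errors="ignore") as f:
                    for lineno, line in enumerate(f, 1):
                        if needle in line:
                            print(f"{path}:{lineno}: {line.rstrip()}")

find_in_codebase(".", "Forgot your password?")   # hypothetical UI string
```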
Seems to me that “good job” is a 2-place word—which working environments make which people productive? I have experiences very similar to what you described (dozen new components, zero time to learn, expected high productivity immediately, other people seem to cope well and receive performance bonuses). But I also have experiences of high productivity in business situations, where I received performance bonuses. Sometimes both experiences in the same company, just in a different project or with a different boss.
My “natural” approach is to do some learning first: I try to make sure that I understand the specification, that it is complete and unambiguous. For example I will first sketch the dialogs on paper and ask my boss or customer whether this is what they meant (because redrawing a sketch is much easier, faster, cheaper, and less frustrating than changing the code of an already developed and tested program). I think about all the additional work which was not mentioned but may be logically necessary, and I ask about it in advance, offering the least complicated solution. (“You want to have a list of users with passwords. What happens if someone forgets their password? Perhaps there could be an administrator who can change passwords, and as an almost free bonus, they could also block and unblock users. Or maybe users could provide their e-mails, and then change their passwords using e-mail verification, but that would be more complicated, and some users will call your support anyway, so unless you have thousands of users I believe this is not necessary, and can still be included later.”) I also research the framework I am supposed to use, and make a few simple prototypes with the functionality I expect to need in the project. (Thus, if the framework has some serious problems, I am likely to find out in advance, when there is still a chance to use a different technology, or at least to avoid the problematic parts of the framework. The fact that I am not under pressure yet helps me notice more details and context.) Then, when I am ready (which is never perfect, but that’s not the point), I implement the solution, step by step. At each moment I know where I am in the project and how much needs to be done yet. (This knowledge usually also makes my boss happy, assuming they believe my analysis.) I do a part, and I test it. When I am ready, there are usually very few bugs. So the time “wasted” analysing the problem and doing the prototypes is later offset by less bug fixing and less remakes. And there is generally less stress.
As an example of my work (probably the only example available freely online), I did a cycle route map for Trnava region in one month; and that included learning the Google Maps framework, which I had never used before. A short tour: Choosing “Zoznam cyklotrás” displays a list of cycle routes; clicking on one of them displays it on the map. The “Zobrazenie cesty k bodom mimo trasy” checkbox allows you to click anywhere on the map to display the shortest path from that place to the route. There is also a possibility to find a cycle route according to your criteria, and to display “places of interest” within a given distance from the route; your results can be printed or exported to PDF. This is the part visible to the user. The administrator part allows importing the cycle routes from GPS log files, editing and annotating the route points (e.g. “crossing a road”, “crossing a railway”); and importing the “places of interest” from an Excel file. -- I believe I was rather productive here, but of course I am open to feedback. (I am not working for that company anymore, so I can’t use the feedback to improve the product.)
However, most managers insist on a completely different approach to work, which other programmers seem to handle somehow (some enjoy it, others complain but cope anyway), but it makes me suffer and less productive. I would describe it as chaotic, short-sighted, penny-wise and pound-foolish, treating experts as replaceable, and accumulating the technical debt until the whole thing collapses under its own weight. Learning the new technology or debating the details of the project is considered an unproductive waste of time; the important thing is keeping the programmers busy (even if it means doing something merely to tear it down later). Maybe it’s because the managers cannot recognize productivity, but can recognize silence and typing. And I really hate the idea that people are completely replaceable and should be randomly switched between different parts of the project or even random projects; in real life it often means the code is sloppy and undocumented, and there is no time or incentive to fix it, because if you spend your time refactoring the code, it makes you seem less productive, and the benefits will be enjoyed by someone else when you are moved to the next piece of spaghetti code. (Somehow collective code ownership is the part of agile programming most accepted by self-labeled enlightened managers. The other parts like unit testing or pair programming are obviously just a waste of time. I have to ask Robin Hanson whether it is a coincidence that the managers embrace exactly the one part which allows them to reduce the perceived status of expert programmers.) Well, to be fair, sometimes there are also external constraints, such as the customer insisting on doing things the stupid way; but I believe most of this comes from inside of the companies.
Most of my productive work was done when I worked on a project alone, and once when I was the leader of a small project. My approach as the leader was asking people what their individual strengths were and what part of the project they would prefer to do, and dividing the work accordingly. Then I paid attention to having clean APIs between the team members, good documentation of the data, and sketches of the dialogs. And then we just coded, and sometimes debated. Three programmers, myself included; the other two were part-time working students. For me it was the first Java EE project done from scratch, and I didn’t have experience with many of its parts. Yet we completed it in three months, and received bonuses.
Programming with other people, working with large codebases and working with multiple libraries and frameworks are basically all software engineering realities that education gives minimal training for. If you can view the job as a learning experience, it’s probably a good one even though it is frustrating on multiple levels right now. If you’re scrupulous about needing to pull the same weight as the other members and are thinking about switching jobs because of this, you could just talk about it to your manager. They might concede that yeah, you’re not a good fit, and then you can go find a less miserable place to work in, or say that they think you’re actually doing fine, in which case you can go back to considering the learning experience perspective.
Can’t advise on the taking the time off to do a PhD instead, but I don’t think you should give up on programming just yet. Like others said, there are many companies, and bigger companies have more resources to spend for training and mentoring. Also, the current mode of mixing together a bunch of frameworks developed in the last few years nobody really understands and rushing to the market with a minimum viable product chock-full of technical debt is probably just an artifact of the web as an application platform still being a reasonably new thing and people rushing to figure out all the simple things they can do with it. If the technology stabilizes, there’s going to be more opportunity for mastering long-lived technologies.
On the other hand, becoming a specific technology expert in programming is a gamble. Technologies just plain up die sometimes. Math domain expertise is probably a lot more durable, but it’s probably also trickier to get a nice math job than it is to get a nice programming job.
Looks like a wrong company. Try looking for something more structured. It’s good to learn the proper requirements/design/coding/testing techniques before throwing them out the window.
In the vein of asking personal questions of Less Wrong, I need career advice. Or advice on finding useful career advice.
I’m an undergraduate student, my course is “Mathematics & Theoretical Physics”, BSc, but I’m already convinced I don’t want to try to be a career scientist. Long-term, my career goals are to retire early (I’ve felt comfortable enough on what I live on as a student that the MrMoneyMustache approach seems eminently doable), with the actual terminal values involved being enjoyment and lack of stress, so becoming a quant also seems like a bad choice what with having to get a PhD first. Teaching just sounds horrible to me.
What this leaves me with is the much broader range of careers that are either mathematical or sciencey enough that I could use the degree for them, or the jobs and graduate programs that just ask for a degree and don’t care what kind. I have too many choices, every particular one I look at seems okay but not great, I have no idea how to even begin narrowing them down or ordering them.
Which marketable skills do you have or would be willing to acquire?
The concept of a “marketable skill” as it’s been presented to me in most career advice I’ve seen seems to refer to a personal virtue that you make a flimsy claim to possessing to make it more likely you’ll get the job. I prefer to just think in terms of qualifications, because it doesn’t put me in a spiral of “I can’t just lie about it, I don’t have any of these virtues they say to say you have, I’ll never get a job”. But at least in terms of actual skills, apart from those I’m presumably working on through the degree, I’m also learning Japanese in my spare time; I’ve been learning for a bit over a year, and at the current rate it would take, I think, 2-3 years to reach JLPT1 level.
By a “marketable skill” I mean the capability to do something that other people are willing to pay you money for. Not a virtue, not a degree, not even a qualification (what matters is not whether you are qualified to do it, but whether you can do it).
In crude terms, if you want other people to pay you money, what would they pay money for?
I don’t think I currently have any skills I could be paid money to do? I expect in most entry-level positions or graduate programs I could apply, I would be doing things that I don’t yet know how to do that I would either be given on-the-job training for or just have to figure out as I go along. What sort of marketable skills might one have, as an undergraduate student without previous work experience, that I should be trying to think of?
That seems to be a problem. I think you should fix it.
If you can’t come up with a convincing answer as to why an employer should hire you, chances are the employer won’t bother to think one up for you.
That’s basically the question of which job should you get post-college :-) There is a large variety of possible skills—from accounting to website creation.
This graduate scheme at Aldi (which I would be way out of my depth with, and which I mostly remember because it’s absurdly well-paid for an entry-level graduate position: $62,000 in ’murican-money) doesn’t ask for anything that I would actually think of as a skill that you could be paid money to do. You need a 2:1 degree, a driver’s license, and a certain package of personal virtues and personality traits. There are a lot of things like that for graduates, and it’s mostly those things that I’m looking at, with the issue being a lot of choice and difficulty identifying which ones are better than others.
Managing people and logistics is a very desirable and highly-paid skill.
That’s a skill you learn while you’re on the scheme, the applicants don’t need to have the skill already, they need to have the personality traits and qualities that would enable them to quickly learn how to be managers. A qualified, experienced manager, someone who could list “managing people and logistics” among the things they can do that people might pay them to do, would not be an appropriate applicant for the scheme and could probably find better management positions that weren’t entry-level.
I’m saving to take the national examination to become a certified translator.
If you’re looking for a useful major, computer science is the obvious choice. I also think statistics majors are undersupplied, though I only have anecdotal data there. I know a few stats majors (none overly clever) that have done far more with the degree than I would have guessed as an undergraduate. But this could have changed since, markets being anti-inductive. If your goal is effective egoism, you’re probably not in the best major. Probably the best way to go about your goal is to follow the advice of effective altruists and then donate all the money to your future self, via a Vanguard fund. If this sounds too evil, paying a small tithe, 1%, would more than make up for this at a manageable cost.
I’m not really considering a change in major as on the table, for various reasons, mostly personal. I’m more thinking of what career to try for given the degree I’m on track for and that I’ve rejected the obvious choices for that degree.
The difference with the “effective egoist” approach is the diminishing marginal value of money—altruists want to earn as much as they can over the course of their lives, while I want to earn a set amount in as little time as possible, and might want to earn more if I’m making lots of money quickly or without stress. That’s the main reason the “get PhD, become quant” track is ruled out—the “teaching sounds horrible” aside was referring to actually becoming a teacher, which is a common suggestion for what to do with a physics degree when ruling out science; I wasn’t actually considering how bad teaching undergrads would be.
And there’s not really a “too evil” for me; my response to the ethical obligation to donate to efficient charity is to notice that I don’t feel guilty even though the logic seems perfectly sound, say “well I guess I’m already an unrepentant murderer, and therefore evil”, and then functionally be an egoist while still using utilitarianism for actual moral questions.
If they want to live forever, the effective egoist still has linear utility WRT money until radical life extension and friendly AI research run out of room for more funding.
If radical life extension eliminates biological ageing and thereby increases life expectancies by 1,000 years, scrounging together enough money to increase the chance it’s accomplished in my lifetime by 0.1% is worth 1 year of life to me. That would take a phenomenal amount of money, and if I have to spend even two years working to get that money when I could otherwise support myself on passive income, I’ve taken a loss.
The point is to live until the functional immortality date.
Well, yes, that’s why I didn’t compare it to other interventions I could make and say they’re much better investments, because the obvious response would be to do both, and why I described the amount of life extension funding in terms that still make sense with reaching the immortality deadline in mind. Increasing the chance you live forever with personal donations to the relevant research groups has a very low expected value per amount of money spent.
Hey, Math PhD candidate here (graduating this May).
These are my goals, as well.
It is pretty horrible. My university has a relatively teaching-heavy TA assignment, and it was kind of soul-crushing.
Graduate school could serve the role of a holding pattern for you to figure out what it is you actually want to do. I think it’s possible to become a quant or an actuary with a MSF or other master’s degree. However, I don’t recommend going into debt for graduate school, and as far as I know most graduate schools don’t fund master’s students.
There’s a somewhat sneaky trick: One can apply for a PhD program, obtain a TA or RA, and then after the requirements of the Master’s program you actually want are done, transfer to the Master’s program and graduate out.
Of course that all requires some degree of teaching, probably, and afterwards you need to find a job making enough to balance out the opportunity cost of earning a TA’s salary for ~2 years.
The people I know who retired or are scheduled to retire the quickest do white-collar jobs in manufacturing or energy at very large corporations.
The people I know who do the least stressful jobs work either part-time in retail or have tenure of one kind or another. One is an epic-level computer programmer who gets so many job offers that he’s able to choose the least restrictive.
Me, personally? After I defend I’m going to work for a small research lab.
Cops.
So, this looks to be a common aspiration, but it strikes me as woefully underspecified :-) A lot of retired people spend their day extending minor tasks to take a lot of time and spend the rest of it staring into the idiot box.
Are all y’all quite sure you have enough internal motivation to do interesting, challenging things without any external stimuli? What will prevent you from vegging out and being utterly bored for the rest of your life?
Oh, and a practical question (for the US people) -- once you retire at, say, 40, what are you going to use for health insurance and does your retirement planning cover the medical costs?
A life of just everyday minor tasks plus internet/videogames seems perfectly adequate, and I don’t understand why the emotional response would be “boredom” rather than “contentment”, except for the fact that television is vastly inferior to an internet-connected gaming PC.
I’d probably prefer to do “interesting, challenging things” than just veg out the time (which surely should be enough motivation in itself, unless you’re specifically talking about work-like projects and assuming those are necessary to happiness), but if I have a motivation failure and spend all my time doing inconsequential things at home, that’s hardly going to be such a bad outcome that it would be preferable to have to go to work.
Ah. OK, then.
Also military. Defined pension benefits and health care (such as it is) for the rest of your life. Of course, you must be in the military for 20+ years, which I’m guessing is not what the OP is looking for based on his/her other comments. :-)
I experienced this to some extent (a long story I won’t go into here). For a while, we paid for a high-deductible plan on the state exchange since we were both relatively healthy and mainly looking to not be bankrupted should we experience a medical emergency or suddenly fall ill. Unfortunately (or fortunately, depending on how you look at it), our other income was just high enough that we didn’t qualify for federal subsidies, so we were paying over $400 per month for a bare-bones plan for my husband and me. Doable, but not ideal... definitely something people need to plan and budget for when considering early retirement.
Will they for a Masters in mathematics? Nearly everyone knows that a Masters in math means “I quit or failed out of my PhD program”. This generally doesn’t reflect well on you.
Short answer: business
Long answer: The high-paying in-demand jobs mostly fall into four categories right now: business, technology, engineering, and health care. Health care would be the toughest switch for you from where you are right now as you’d nearly have to get a 2nd major to get into a grad program there. Engineering would probably require graduate school since your degree isn’t in engineering, and I’m not sure how easy it is for a non-engineering major to go that route. That leaves business and technology, and just a rough guess from your description is that you would prefer business to technology. You would most likely either be working in finance, accounting, or data analysis. A lot of this is just doing basic work with excel spreadsheets all day long. Those are the types of jobs I would recommend looking into.
Hey Rowan,
The way to figure this out is to work backwards. Find people who have the ideal day you want, with your strengths and skills, then work backwards to deconstruct their careers.
Use that to come up with a list of potential careers, then talk to people in that career (find them using LinkedIn) to answer a few questions:
Is the demand for this career going up or down?
What are the biggest surprises I should watch out for?
What does a typical day look like? Would I enjoy it?
What would my biggest wins in college be in terms of skills, network, credibility, and projects that would allow me to quickly land a job when I get out?
I put up a video on this process here: https://www.youtube.com/watch?v=u6sXNR7kL-c&list=UUCi-drAVuy8g4N8TfODHgUQ
I’d also be happy to chat with you about any further questions you may have: http://selfmaderenegade.net/lets-chat/
Reposting this because I posted it at the very end of the last open thread and hence, I think, missed the window for it to get much attention:
I’m vegetarian and currently ordering some dietary supplements to help, erm, supplement any possible deficits in my diet. For now, I’m getting B12, iron, and creatine. Two questions:
Are there any important ones that I’ve missed? (Other things I’ve heard mentioned but of whose importance and effectiveness I’m not sure: zinc, taurine, carnitine, carnosine. Convince me!)
Of the ones I’ve mentioned, how much should I be taking? In particular, all the information I could find on creatine was for bodybuilders trying to develop muscle mass. I did manage to find that the average daily turnover/usage of creatine for an adult male (which I happen to be) is ~2 grams/day—is this how much I should be taking?
I’m a vegetarian and I looked into this stuff a while back. The Examine.com page What beneficial compounds are primarily found in animal products? is a useful reference with sources and includes the ones you wrote above. An older page with some references is this one.
I currently supplement with a multivitamin (this one—Hair, Skin and Nails), creatine, and occasionally Coenzyme Q-10 and choline. You didn’t mention the last two, but I have subjectively felt they increase alertness. I (hopefully) get my Omega-3/6 fatty acids from cooking oil. I had a basic panel done and found I was deficient in Calcium (probably due to my specific diet, but it is worth mentioning) and B12. So, I supplement for Calcium too.
I do regular exercise (usually bodyweight and dumbbells) and I had disappointing results without whey protein and creatine supplementation. Excessive amounts of creatine (look up “loading”) are recommended for bodybuilders, but 5g/day is recommended for vegetarians. See gwern’s review and the examine.com review. The examine.com review mentions that the fear of this compound is irrational and recommends 5g a day for everyone, pointing out that creatine would have been labeled a vitamin if it weren’t produced in the body. (Excessive creatine causes stomach upsets, but I wasn’t able to find a value at which this happens, and I’ve never experienced this myself.)
I also take a fiber supplement, Metamucil. This one isn’t vegetarian-specific, but I highly recommend it.
From cooking oil you get too much Omega-6 and not enough Omega-3.
I haven’t put sufficient effort into identifying healthy cooking oils. I currently use Crisco’s Blended Oil supplemented with Omega-3. The question is if it is supplemented in the right amount, and that information is not provided.
Animal fats are low in Omega-6 but I think the Omega-3:6 ratio is a problem for meat-eaters too.
Thanks, this looks good. The sort of thing I was after.
I’ve never heard this expression! I wonder whether that’s just transatlantic terminology variation. Will look into whether I can get this on the NHS.
Perfect; thanks.
I have been vegetarian for three years, and haven’t taken any supplements consistently throughout that period of time. The last time I had a blood panel done, I didn’t have any mineral deficiencies, at least. I am by no means against taking supplements, but my impression is that they aren’t fully necessary for vegetarians who have a well-balanced diet.
I did take B12 for a few months when I was experimenting with reducing my intake of eggs and milk, though I eventually decided that I really liked eggs and milk, and consequently stopped taking B12. I’ve recently started taking CoQ10 because RomeoStevens advocated doing so here.
In the past couple of years, I have considered becoming flexitarian (i.e. 98% vegetarian) or pescatarian, mostly for convenience and health reasons, respectively, though I’ve elected to stay vegetarian for now. This is partly because I’m used to being vegetarian, partly because I’ve accidentally built vegetarianism into my self-identity, and partly because of the normal reasons people give for being vegetarian (health, environmental, and compassion-towards-animals type reasons).
Added 6/29/2015: Apparently, I haven’t been getting enough fiber for at least the last couple of months, but that is due to me being lazy about my diet, rather than any shortcoming of vegetarianism.
You might consider the vegetarian case for eating bivalves. It’s a way of getting the benefits of pescetarianism with fewer moral-uncertainty issues.
Yes, as of a few months ago when I researched the issue, I am OK with eating bivalves. I just haven’t gotten around to doing so yet.
Vitamin K2. Vitamin K1 is produced by plants, and K2 is produced by animals and bacteria. They have very different functions in the human body, and you need them both. Supplements and fortified food are almost always K1, unless you look for K2 specifically.
Vitamin K2 is necessary for some proteins which modulate calcium in your body. Supplementing it has been found to protect both against osteoporosis and heart/artery calcification.
You should ask a dietician, not us.
There are many other vegetarians; this seems like it should be a solved problem.
I know plenty of LW people are interested in nutrition; it’s within the realms of possibility that one of them might know enough about what I’m asking to be able to give me a quick summary of what I’m after. As for asking a dietician, I’ve never met one and wouldn’t know how to go about getting hold of one to ask. (I’m also not totally sure I’d trust J. Random Dietician to have a good understanding of things like what counts as good evidence for or against a proposition. Nutrition is a field in which it’s notoriously difficult to prove anything.)
Well, erm, yes, that’s why I’m asking about it. (I don’t go around making posts asking for proofs that P=NP, for example.)
I disagree about asking a dietician and not LW.
Can you expand on your reasoning?
FrameBenignly’s comment reflects my opinion well
A dietician can get licensed with just a bachelor’s degree in nutrition. A well-informed layman will often have more informed views on the issue. Also, communities like this will select against bad information. However, a fitness forum that also has a commitment to rejecting errors will have even better answers as they will specialize in this area.
Might be useful to enter your typical intake on cron-o-meter and check for deficiencies. If I had to guess, you might be low on choline, but you shouldn’t supplement based on my wild guess. :)
To piggyback on this:
I’m currently a vegetarian and have been for the past three years, before which the only meat I consumed was poultry and fish. I’ve been reading a lot about the cognitive benefits of consuming fish (in particular, the EPA/DHA fatty acids); unless I’m mistaken (please tell me if I am), EPA and DHA cannot be obtained from vegetables alone. ALA can be obtained from seaweed, and while our bodies convert ALA into EPA, we do it very slowly and inefficiently, and ALA wouldn’t give us any DHA.
I looked into fish oil pills. Apparently pills contain much less EPA/DHA than fish meat does, and it’s more cost-effective to eat fish (depending on which species, of course)… and based on other research, I’d expect that our body would extract more fatty acids from a fillet than from a pill with the same quantity of acids.
I still have a visceral (moral?) opposition to eating fish and supporting horrendous fishing practices, and I worry about where fish I might be eating would come from. If it’s coming from the equivalent of a factory farm, then I don’t want to eat it. On that point, I’ve read many articles suggesting that extracting fish oil harms certain species of fish.
Ideally there would be a vegetarian, eco-friendly, and health-friendly source of EPA/DHA. Is there?
In the meantime, I will try fish again and see if it has any noticeable effect on me. I’ll continue to investigate whether vegetarian or eco-friendly sources of EPA/DHA exist, especially if I notice any positive effects from eating fish.
And, the undermining question: does not having any EPA/DHA really matter? (I think it does, since it apparently boosts cognitive function, and I want my brain to operate at its maximum potential; but maybe I’m wrong.)
I’m in the same boat as you with regards to whether EPA/DHA has a bigger effect than ALA, but I was convinced enough to try to find some when I became vegetarian last year.
If you google “algal dha together” you’ll find what I’m taking—meeting your criteria of vegetarian (vegan), eco-friendly and health-friendly (with the aforementioned uncertainty).
ALA can also be found in flaxseed, soy/tofu, walnut and pumpkin, so you needn’t stick to seaweed if you only want ALA.
I once did a 3-day analysis of all foods consumed, and found I was within optimal limits on just about everything. I was high on salt and low on manganese. It’s quite possible to get everything you need using a vegetarian diet, and your particular needs will be unique to you.
Sebastian Seung’s Quest to Map the Human Brain, by Gareth Cook, Jan. 8, 2015
http://www.nytimes.com/2015/01/11/magazine/sebastian-seungs-quest-to-map-the-human-brain.html?ref=magazine&_r=1
Q&A with Zoltan Istvan, Transhumanist Party candidate for the US President
http://youtu.be/Xk4olY4qIjg
I recently attended a biology conference where, among many other things, I got to see a talk by Dr. Jeff Lichtman of Harvard University on brain connectomics research.
It’s very interesting stuff. He has produced a set of custom equipment that can scan brain tissue (well, any tissue, but he’s interested in brain tissue) at 5x5x30 nm resolution. His super-duper, one-of-a-kind electron microscope can at this point scan about 0.3 cubic millimeters in 5 weeks, if I’m not mistaken, and it spits out a dataset in the fractional-petabytes range. He’s had one such dataset for a full 3-4 years but is encountering major problems with analysis—tracing cells and fibers over their full paths is a very difficult problem. Automatic cell-tracer programs are good enough over the number of slices that makes up a cell body, but they fail to identify things like synaptic vesicles reliably and cannot trace fibers over their full lengths. As a result, most of the good data he showed us has been manually annotated by graduate students and undergrads working in his laboratory. Hence the above link’s mention of gamifying the task to try to crowdsource it.
Interestingly, he described his equipment as a ‘tissue observatory’. He thinks that neuroscience should take a page from astronomy and just see what the heck is out there: researchers are trying to make detailed hypotheses about function and structure and everything else on far, far too little data right now, and we need a lot more data on the actual structures before we can be confident about much beyond the mere fact that they correspond to function. During his talk, when showing the 30-micron-wide, completely annotated chunk of his first dataset (something like 0.01% of the raw dataset he has had for years), on whose thousands of synapses he has published much analysis, he put up the exploded figure of hundreds of cellular fibers and the table of dozens of parameters for thousands of synapses and said, “And here it is… incredibly beautiful and so far totally useless.” His point being that he wants to annotate more of it and use it as a base for actual informed inferences and hypotheses about connection formation and network structure. He also notes that his datasets cannot distinguish different neurotransmitter-producing cells, gene expression, or the presence of different kinds of proteins pre- or post-synaptically.
He (knowingly, for humor and exasperation’s sake) overstates the case about ‘uselessness’ even at current levels of annotation—you can focus in on pieces of the dataset and annotate them for your own purposes. Someone else at the conference presented work in which they took his dataset (which is, after all, revolutionary in terms of the sheer amount of fine 3-dimensional data it has on the structure of so many different cell types in their normal living context) and settled longstanding questions about the topology of certain intracellular structures. He already has interesting statistical information about the connections between the cells in his perfectly-annotated segment of data, showing that if two cells are connected they tend to be connected in multiple places. He also appears to have found cell types in his data he had no idea existed and still doesn’t know what they are, and he noted that the big spine-based synapses that have been well studied so far represented less than a third of the synapses in the perfectly-annotated chunk. There are apparently other people lining up to use his equipment too, and if I recall correctly he said someone is hoping to do an entire fruit-fly brain, much like was mentioned in the above link.
Upvoted especially for this.
I checked out the Transhumanist Party site, and they didn’t have a list of stuff Zoltan Istvan would do if elected, not even many applause lights. They also clearly haven’t hired a web designer. They don’t have a voting guide for different ballot measures. Finally, I’m tempted to vote for a 3rd-party candidate as my congressional representative, and they seem to only have a presidential candidate listed. I don’t think Istvan has any plans for what he would do as President, and he doesn’t seem to want to be elected.
The Transhumanist Party website is clearly the same template as Istvan’s personal website. He is trying to sell a book, so my guess is the Transhumanist Party is a publicity stunt to sell his book.
Thanks! It did smell like a publicity stunt, but I wasn’t sure what it was trying to promote, since it wasn’t promoting policy changes or some other political goal very well. I’m not sure having a presidential campaign that obviously isn’t trying to get anyone elected is the best way to sell books, though.
I have a gut feeling that a lot of long-shot campaigns are more about publicity/book sales/speaking fees than a genuine desire to be elected.
To be fair to Istvan, I don’t think his motive is primarily financial, since he is giving away a free Kindle version of his book.
In that case, maybe the goal is not to sell books, but rather to publicize them and the ideologies they contain.
Ron Paul and Ralph Nader (many-time presidential candidates with no chance of winning) are concrete examples of this in the US political system.
Both have done decently with respect to speaking fees and personal fame, but of course so do “genuine” candidates!
Istvan doesn’t seem to be hurting for money if he can afford to live in the Bay Area, and he has a vineyard in Argentina.
Why would he bother unless he’s a Republican or Democrat?
I just downloaded the free Kindle version of Istvan’s book, and it seems he’s advocating a fusion of Objectivism/egoism and Transhumanism. Transhumanism and objectivism would seem to go together very naturally from a philosophical perspective, yet it seems to me that the great majority of transhumanists are left-liberals.
I watched the Joe Rogan interview with him where he disavowed his book’s political leanings. I’m a left-liberal who used to hate him because of his book, but after watching that interview I like him.
https://www.youtube.com/watch?v=9grWo5ZofmA
Edge.org 2015 question: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?
There are answers by lots of famous or interesting scientists and philosophers, including Max Tegmark, Nick Bostrom, and Eliezer.
What I find most interesting about the responses is how many of them state an opinion on the Superintelligence danger issue either without responding at all to Bostrom’s arguments, or based on counter-arguments that completely miss Bostrom’s points. And this after the question explicitly cited Bostrom’s work.
I’m a programmer with a fair amount of reasonably diverse experience, e.g. C, C#, F#, Python, Racket, Clojure and I’m just now trying to learn how to write good Java. I think I understand most of the language, but I don’t understand how to like it yet. Most Java programmers seem to basically not believe in many of the ways I have learned to write good software (e.g. be precise and concise, carefully encapsulate state, make small reusable modular parts which are usually pure functions, REPL-driven development, etc. etc.) or they apply them in ways that seem unfortunate to me. However, I feel foolish jumping to the popular conclusion that they are bad and wrong.
I would really like a book—or heck, just a blog post—which is like “Java for Functional Programmers” that bridges the gap for me and talks about how idiomatic Java differs from the style I normally consider good and readable and credibly steelmans the Java way. Most of my coworkers either don’t like the Java style, only know the Java style, or just don’t care very much about this kind of aesthetic stuff, so none of them have been very good at explaining to me how to think about it.
Does this book exist?
I don’t know Java books, but I would like to react to this part anyway:
There are many more bad programmers than good programmers, so any language that is sufficiently widely used is necessarily a language mostly used by bad programmers. (Also, if the programming language is Turing-complete, you can reinvent any historical bad programming practice in it.) On the other hand, there are often genuine mistakes in the language design, or in the standard libraries. So here is my opinion on which is which in Java:
precise and concise—sorry, no can do. Using proper formatting, merely declaring a read-only integer property of a class will cost you five lines, not including whitespace (1 line declaration, 1 line assignment in constructor, 3 lines read accessor); see the sketch after this list. (EDIT: Removed some obsolete info.)
carefully encapsulate state—that’s what the “private” and “public” keywords are for. I don’t quite understand what could be the problem here (other than bad programmers not using these keywords; or the verbosity).
make small reusable modular parts which are usually pure functions—this is not how Java is typically used, but it can be done. It has the garbage collector. It has immutable types; and for the mutable ones, you could create an immutable wrapper class (yes, a lot of writing again). So you can write a module that gets immutable values as inputs, returns them as outputs, which is more or less what you want. The only problem is that “immutability” is not recognized by the language; you only know that a class is immutable by reading the documentation or looking at the source code; you cannot have the compiler check it for you.
REPL-driven development—it could be technically possible to make an interactive functional shell, and maybe someone already did it. But that’s definitely not how Java is typically used. A slightly more traditional solution, although not exactly what you want, would be to use the Groovy language for the interactive shell. (Groovy is more or less a “scripting Java”. Very similar to Java, with minor differences; can directly call functions from the Java program it is included in.) The traditional solution is to do unit testing with JUnit.
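To make the verbosity point concrete, here is roughly what that five-line read-only property looks like; the class and field names are invented:

    // One read-only integer property: declaration, constructor assignment, accessor.
    public class Account {
        private final int balance;                // 1: declaration

        public Account(int balance) {
            this.balance = balance;               // 2: assignment in constructor
        }

        public int getBalance() {                 // 3-5: read accessor
            return balance;
        }
    }

In a more concise language the same property is typically a single line.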
As a beginner, avoid Java EE like hell. That is the really ugly part. Stay with Java SE until the Stockholm syndrome kicks in and you develop feelings for Java, or until you decide you do not want to go this way.
Feel free to give me a short example in other programming language or pseudocode, and I will try to write it in Java in a functional-ish style.
Lambda syntax is definitely present in the currently available version of Java. I use it on a daily basis.
Oops. The version has been out for almost a year. I missed it, because we do not use it at work.
Embarrassing to find this out after pretending to be a Java expert. Does not add much credibility. :D
No worries, you obviously know what you’re talking about in general. I just wanted to make sure false impressions don’t spread.
I might try Groovy for the REPL stuff—I was trying Clojure before, but I ran into problems getting the dependencies and everything loaded into the REPL (I work on a big project that uses Gradle as a build system, and Clojure doesn’t usually use Gradle.)
One pattern I have in mind here: if I have some algorithm I have to perform that has some intermediate state, I will break it down into some functions, pass the state around from function to function as necessary, and wind up composing five or six functions to get from the start to the end. Java programmers seem to often instead do something like make a class with the five or six functions as methods, and then set all the state as fields on the class, which the methods then initialize, mutate, and consume, and call the methods in the right order to get from the start to the end. That seems a lot harder for me to read because unless there is also great documentation I have to read very closely to understand the contract of each individual method.
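To make the contrast concrete, here is a rough sketch of the two styles; the pipeline, its steps, and the trivial string operations are all invented for illustration:

    // FunctionalishStyle.java
    public class FunctionalishStyle {
        // Style A: each step is a small function; intermediate state is passed
        // explicitly, and the algorithm is a straight composition of the steps.
        static String parse(String raw)      { return raw.trim(); }
        static String clean(String parsed)   { return parsed.toLowerCase(); }
        static String report(String cleaned) { return "report: " + cleaned; }

        static String buildReport(String raw) {
            return report(clean(parse(raw)));
        }

        // Style B, as described above: intermediate state lives in fields, and
        // the methods must be called in the right order to initialize, mutate,
        // and consume it; the contract of each method is implicit.
        static class ReportBuilder {
            private String parsed;
            private String cleaned;
            void parse(String raw) { parsed = raw.trim(); }
            void clean()           { cleaned = parsed.toLowerCase(); }
            String report()        { return "report: " + cleaned; }
        }

        public static void main(String[] args) {
            System.out.println(buildReport("  Some Raw Input  "));

            ReportBuilder b = new ReportBuilder();
            b.parse("  Some Raw Input  ");
            b.clean();
            System.out.println(b.report());
        }
    }

Both print the same thing; the difference is only in how much a reader has to hold in their head to follow the data flow.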
(I’m also incidentally confused about when in Java people like to use a dependency injection tool like Guice and when people like to pass dependencies explicitly. I don’t think I understand the motivation for the tool yet.)
Java programmers are usually familiar with procedural programming, not functional. The older ones are probably former C/C++ programmers, so they mostly write C/C++ code using Java syntax. That probably includes most textbook authors.
Nothing in Java prevents you from having intermediate states, and composing the functions. You just have to specify the data type for each intermediate state, which may require creating a new class (which is a lot of typing), or typing something like Pair<A, List<B>>, so yeah, there are inconveniences.
As a crazy creative solution, I could imagine wrapping the “class with the five or six functions” into multiple interfaces. Something like this:
Old version, new version, and the usage this would force users to write:
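A minimal sketch of the idea, assuming a three-step computation; all the class, interface, and method names are invented:

    // Old version: one class, three methods, shared mutable state; nothing
    // stops a caller from invoking the steps out of order.
    class Pipeline {
        private String data;
        void stepOne(String input) { data = input.trim(); }
        void stepTwo()             { data = data.toLowerCase(); }
        String stepThree()         { return "result: " + data; }
    }

    // New version: each step is exposed through its own interface, and each
    // method returns only the interface for the next step.
    interface StepOne   { StepTwo stepOne(String input); }
    interface StepTwo   { StepThree stepTwo(); }
    interface StepThree { String stepThree(); }

    class ChainedPipeline implements StepOne, StepTwo, StepThree {
        private String data;
        private ChainedPipeline() {}
        static StepOne begin() { return new ChainedPipeline(); }
        @Override public StepTwo stepOne(String input) { data = input.trim(); return this; }
        @Override public StepThree stepTwo()           { data = data.toLowerCase(); return this; }
        @Override public String stepThree()            { return "result: " + data; }
    }

    // The forced call order:
    class Demo {
        public static void main(String[] args) {
            String result = ChainedPipeline.begin().stepOne("  Hello  ").stepTwo().stepThree();
            System.out.println(result);   // prints: result: hello
        }
    }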
I also admit my colleagues would kill me after doing this, and the jury would probably free them.
I am a Java programmer, and I believe in those principles, with some caveats:
Java is verbose. But within the constraints of the language, you should still be as concise as possible.
Encapsulation and reusable modular design is a central goal of the language and OO design in general. I think Java achieves the goal to a significant degree.
Instead of using a REPL, you do edit/compile/run loops. So you get two layers of feedback, one from the compiler and the other from the program itself.
Even though Java doesn’t emphasize functional concepts, you can still use those concepts in Java. For example, you can easily make objects immutable just by supplying only a constructor and no mutator methods (I use this trick regularly; see the sketch after this list).
Java 8 is really a big step forward: we can now use default interface methods (i.e. mixins) and lambda syntax with collection operations.
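A small, made-up illustration of both points (constructor-only immutability and Java 8 method references over collections):

    import java.util.Arrays;
    import java.util.List;
    import java.util.stream.Collectors;

    // Immutable just by construction: final fields, a constructor, no setters.
    public final class Point {
        private final double x;
        private final double y;

        public Point(double x, double y) { this.x = x; this.y = y; }

        public double getX() { return x; }
        public double getY() { return y; }

        public static void main(String[] args) {
            List<Point> points = Arrays.asList(new Point(1, 2), new Point(3, 4));

            // Java 8: lambda/method-reference syntax with collection operations.
            List<Double> xs = points.stream()
                                    .map(Point::getX)
                                    .collect(Collectors.toList());
            System.out.println(xs); // [1.0, 3.0]
        }
    }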
My feeling towards Java is just that it’s a very reliable old workhorse. It does what I want it to do, consistently, without many major screwups. In this sense it compares very strongly to other technology tools like MySQL (what, an ALTER TABLE is a full table copy? What if the table is very large?) and even Unix (why can’t I do some variant of ls piped through cut to get just the file sizes of all the files in a directory?)
Nerd sniped. After some fiddling, the problem with
ls | cut
is that cut in delimiter mode treats multiple spaces in a row as multiple delimiters. You could put cut in bytes or character mode instead, but then you have the problem that ls uses “as much as necessary” spacing, which means that if the largest file in your directory needs one more digit to represent, ls will push everything one more column to the right. If you want to handle ls output, awk is easier, because it collapses successive delimiters [1]; but normally I’d just use du [2]. Though I have a vague memory that du and ls -l define file size differently.
(This doesn’t counter your point at all—unix tools are kind of a mess—but I was curious.)
[1] ls -l | awk '{print $5}'
[2] du -hs *
Your vague memory is probably that ls -l gives file size, while du gives “disk usage”—the number of blocks used. On my computer the block size is 4k, so du only reports multiples of this size. (In particular, the default behavior is to report in units of the historical 512-byte block size, so it only reports multiples of 8.)
A huge difference that I doubt you forgot is how they define the size of directories—just the metadata vs. recursively. But that means that du is expensive. I use it all the time, but not everywhere.
I’m going to CFAR this week. I’ll have pretty much a full day before the workshop, where I have nothing planned. Are there any cool rationalist things I should see or do in SF? Or even non-rationalist, but worthwhile things?
Are there any “landmarks” where I can just drop by (or maybe call ahead first)? I don’t suppose MIRI welcomes merely-curious tourists/pilgrims.
I’d suggest the Exploratorium science museum, but this may be a case of Typical Mind Fallacy. There’s lots of museums in SF, so you should be able to find one targeting your interests.
If you’re physically able to, you should walk across the Golden Gate Bridge and visit the visitor center.
If you know any employees, you may be able to get tours of different organizations. I was last in SF with a college professor who was able to get tours from his former students. There’s not really much cost to just asking MIRI for permission to visit, even if you don’t know anyone there.
Random unsolicited advice:
Here’s a self-improvement tip that I’ve come up with and found helpful. It works particularly well with bad habits, which are hard to fix using other self-improvement techniques as they’re often unconscious. To take one example, it’s helped improve my posture significantly.
1) List your bad habits. This is a valuable exercise in its own right! Examples might include bad posture (or, more concretely, crossing your legs), mumbling, vehicular manslaughter, or something you often forget to do.
2) Get in the habit of noticing when they occur, even if it’s after the fact. You can regularly try to remember whether they have occurred, at a time convenient for you, such as lunch or in the evening. Ideally, though, you should try to notice them soon after they occur, for reasons that will become clear.
3) Come up with a punishment. The point of this is not to create an incentive not to lapse (you could experiment with that, but I’m not sure whether it will work, as bad habits are rarely consciously chosen). Instead, it’s to train yourself by Pavlovian conditioning—training “system one”, in Daniel Kahneman’s terms. Examples of punishments would be literally slapping yourself on the wrist, pinching yourself, or costing your HabitRPG character health points (see https://habitrpg.com/ ).
Positive reinforcement works better than negative. If noticing is followed by a punishment, you are disincentivizing yourself from noticing. This is bad, because noticing is its own superpower. Instead, maybe try congratulating yourself for noticing, and then replacing the negative habit with some other reward. Eating too many gummy bears in the short term is probably worth it to repair bad habits in the long term, for instance.
How long have you been using this?
Just under a year, and I’ve been using it for posture (a really tough habit to break, at least for me), so I have a good bit of data.
If the problem is bad posture while sitting at the computer, you could try removing your chair’s back and armrests. Once I learned how to sit right (with Alexander technique), I discovered that back and armrests are like magnets for my body, and they also make it quite easy to sit in a bad posture for a long time before noticing. Without that support though, a bad posture becomes uncomfortable much faster, and I soon notice and straighten up.
On the subject of self-improvement and self-control: My big tip for achieving goals is to set goals you actually want to achieve, not goals that you want people to think you want to achieve. For example, if you want to sit with your legs crossed, not only in the immediate term but also upon weighing the advantages and disadvantages of doing so, you’re not likely to succeed in trying to make yourself sit differently.
For example, my goal in anger management is not “always stay calm, even when I stand to personally gain by being angry.” My goal is “avoid being more angry than I want to be.” Thinking things like “it is okay to be angry, but I don’t want to experience desires to do things that I think are immoral, so I should calm down to the point of not wanting to punch anyone” has been far more effective than thinking things like “you’re not allowed to be angry right now, calm down!”.
Amusing product you could use with this—the Pavlok, which gives you electric shocks ( http://pavlok.com/ )
There was also a Kickstarter device that drew blood from you as a penalty, but they banned it.
Death by Robot by Robin Marantz Henig
Part of the hidden ethicist agenda to reveal everyone’s systems of morality via discussion of self-driving cars.
Is there any way to block distracting software on my computer? There are a blue million apps that will block websites, but I can’t find any that will stop me from playing games I’ve installed. Ideally, I’d like some software that lets me play my games, but only after a 10-minute wait. But for now, I’d settle for anything that can restrict my access to games without uninstalling them entirely.
Dual boot to Linux for working and to Windows for the games?
Cold Turkey
If you have somebody who can hold on to your admin password for you, Windows’ parental controls could help.
Password Door, or Microsoft Family Safety (for Windows, available through the parental controls; I think you need to have someone else administer it, though).
I’m looking for an old post of Eliezer’s. If I remember the post correctly, he was commenting that a lot of the negative reaction to evopsych might come from having first encountered it in the hands of dumb internet commentators, instead of the book he referenced.
I don’t remember the title he referenced, and the search function is failing me. Can anyone point me in the right direction?
I can’t find the exact post you’re talking about, but the book involved was probably The Adapted Mind, since Eliezer often praises it in terms like those.
Thanks. Having the title was enough to find the post. I turned out to be looking in the wrong place. The comment was on SSC, not LW—which I find amusing given that you were the one to respond.
The book is irritatingly expensive. I might read The Moral Animal instead. Both seem to be widely recommended around here. Searching on the former consistently turns up the latter as well, often in the same breath.
Motivation: I noticed that I can’t distinguish between just-so stories and genuine evopsych insights. I think seeing a known example of it being done right might help fix that.
The post was Polyamory is Boring btw, in case anyone else is curious.
He actually said it earlier on LW as well. Link.
Next Big Future: Quantifying and defining hard versus soft takeoff of AGI (also references LW)
I have heard that in economics and possibly other social sciences Ph.D. students can staple together three journal articles, call it a dissertation and get awarded their doctorate. But I’ve recently read “Publication, Publication” by Gary King, which I interpret as saying a very bright and hardworking undergraduate can write a quantitative political science article in the space of a semester, while carrying a normal class load.
This is confusing. Now, Dr. King teaches at Harvard, so all his students are smart, and it’s two students writing one paper, but this still seems insane. I’m guessing a full course load is around 6 classes a term, and people are expected to write a journal article, or a close approximation thereof, in a semester; yet three such articles will suffice to get a Ph.D., and many very, very smart people fail out of that degree.
Where am I confused? Is research not that hard, is the stapler thesis a myth, or are these class projects not strictly comparable to real papers?
http://gking.harvard.edu/classes/advanced-quantitative-political-methodology-government-2001-government-1002-and-e-2001
I don’t know about social sciences, but the situation in math isn’t that far off. The short answer is that the papers done by undergraduates are real papers, but the papers of the type and quality that would be stapleable into a thesis are different (higher quality, more important results) from the sort done in undergraduate research.
There is usually more to a “PhD by publication” than just publishing any 3 articles and then submitting them for the degree.
A nice 2011 article in Times Higher Education describes what the process actually requires, at least in the UK. Most importantly, coherence: the articles must be on related themes, and additional supporting documentation on the order of 10k words is usually required to convert the independent publications into some kind of coherent package that very often resembles a conventional thesis.
It’s also informative to look over the recent discussion on the Thesis Whisperer blog—lots of comments from people in various disciplines about the realities of publication-based theses… and usually they describe them as more work than a conventional thesis.
For published papers like the one described by Gary King, it may be hard to write a combination of them that meets an institution’s criteria for a PhD by publication. Not just the coherence part, but usually there is a requirement that a PhD makes a novel contribution to the field—and it is hard to justify this with strictly replication-based approaches.
However, if the work follows King’s suggestion to replicate and then make minimal changes (“make one improvement, or the smallest number of improvements possible to produce new results, and show the results so that we can attribute specific changes in substantive conclusions to particular methodological changes”—King, p. 120), a series of such publications on closely related themes starts to look a lot like a conventional PhD… although getting a paper through peer review is still quite a challenge. King’s paper (and supplemental comments) can also be a useful guide for researchers outside academia who want to get published.
From an economics perspective, the stapler dissertation is real. The majority of the time, the three papers haven’t been published.
It’s also possible to publish empirical work produced in a few months. The issue is where that article is likely to be published. There’s a clear hierarchy of journals, and a low ranked publication could hurt more than it helps. Dissertation committees have very different standards depending on the student’s ambition to go into academia. If the committee has to write letters of rec to other professors, it takes a lot more work to be sufficiently novel and interesting. If someone goes into industry, almost any three papers will suffice.
I’ve seen people leave because they couldn’t pass coursework or because they felt burnt out, but the degree almost always comes conditional on writing something and having well-calibrated ambitions.
I’ve never been a grad student, so this is pure supposition, but...
I suspect that if you went into a PhD program and tried to hand in a thesis six months later, the response that you’d get from on high is “Ha ha, very funny. Come back in three years”, and that this response would happen whether or not you produced something that’s actually good enough to be a proper thesis. Profs know how long a doctorate is “supposed to” take, and doing it in a tenth of that time will set off alarm bells for them.
Is there a better search term than “self-modification,” or a better place to look other than LW, for self-modification ideas/experiments, of the “when system 1 and system 2 are in conflict, listen to system 2” type? Any comments like “This particular thing worked for me and here’s a link to it” are welcome.
Filing with the minimum of trivial impediments
This is a system designed especially for people who suffer from depression—one of the symptoms is difficulty with making decisions, so the idea is to require as few decisions as possible—for example, just file the envelope full of stuff from your bank instead of sorting out the advertising.
There’s also minimization of the demands on memory—for example, writing the payment details on the bills themselves.
The piece that really struck me was the recommendation of having a place for the stuff you’re going to file, instead of letting it get scattered and lost.
Forget Skynet: How Person Of Interest Depicts A Realistic A.I. Uprising
I’m in the middle of the third season. Are there spoilers in the link?
Many.
I’ve never studied any branch of ethics, maybe stumbling across something on Wikipedia now and then. Would I be out of my depth reading a metaethics textbook without having read books about the other branches of ethics? It also looks like logic must play a significant role in metaethics given its purpose, so in that regard I should say that I’m going through Lepore’s Meaning and Argument right now.