You want to do user-facing stuff? Then don’t bother with desktop programming, write webapps. HTML and JavaScript are much easier than C++. You don’t even have to learn the server side at first, a lot of useful stuff can be written as a standalone html file with no server support. For example you could make your own draggable interface to the free map tiles from http://openstreetmap.org—basically it’s all just cleverly positioned image elements named like this, within a rectangular element that responds to mouse events. Or, if a little server-side coding doesn’t scare you, you could make a realtime chat webpage. Stuff like that.
If you need any help at all, my email is vladimir.slepnev at gmail and I’m often online in gtalk.
“Easy” is one goal you can have when learning to program. “Soundly written and maintainable” is another. Unfortunately these two goals are sometimes at odds.
Language and platform don’t really matter a whole lot, in the grand scheme of things; learning how to write maintainable programs does matter. Having had lots of experience extending or modifying source code written by others, I wish more novice programmers would make that their number one goal.
I disagree!
A (real) novice programmer’s number one worry should be getting paid. Why should they divert their attention and spend extra effort on writing maintainable code, just so you have an easier time afterward? That’s awfully selfish advice.
You might claim writing maintainable code will pay off for them, but to properly evaluate that we need to weigh the marginal utilities. What’s better, an extra hour improving the maintainability of your code, or an extra hour spent empathizing with the client? Ummm… And you can’t say both of those things are first priority, that’s not how it works. I’ve been coding for money for half of my life so listen to my words, ye lemmings: ship the thing, make the client happy, get paid. That’s number one. Maintainability ain’t number one, it ain’t even in the top ten.
What are your other nine?
For one thing, that doesn’t sound like something that’s actionable for Silas in the context of his request for advice, compared to advising him to learn some specific techniques, such as MVC, which make for more maintainable code.
For another, your worry should be “getting paid” after you have reached a reasonable level of proficiency. A medical student’s first concern isn’t getting paid, it’s learning how not to harm patients. Similarly, if you’re learning programming, as opposed to being confident enough in your chops to go on the market, you have a responsibility to learn how not to harm future owners of your code through negligent design practices. That a majority of programmers today fail to fulfill that basic responsibility doesn’t absolve you of it.
Programming is different from medicine. All the good programmers I know have learned their craft on the job. Silas doesn’t have to wait and learn without getting paid, his current skill level is already in demand.
But that’s tangential. More importantly, whenever I hear the word “maintainability” I feel like “uh oh, they wanna sell me some doctrinaire bullshit”. Maintainability is one of those things everyone has a different idea of. In my opinion you should just try to solve each problem in the most natural manner, and maintainability will happen automatically.
Allow me to illustrate with an example. One of my recent projects was a user interface for IPTV set-top boxes. Lots and lots of stuff like “my account”, “channels I’ve subscribed to”, et cetera. Now, the natural way to solve this problem is to have a separate file (a “page”) for each screen that the user sees, and ignore small amounts of code duplication between pages. If you get this right, it’s pretty much irrelevant how crappily each individual page is coded, because it’s only five friggin’ kilobytes and a maintenance programmer will easily find and change any functionality they want. On the other hand, if you get this wrong… and it’s really fucking distressing how many experienced programmers manage to get this wrong… making a Framework with a big Architecture that separates each page into small reusable chunks, perfectly Factored, with shiny and impeccable code… maintenance tasks become hell. And so it is with other kinds of projects too, in fact with most projects I’ve faced in my life. Focus on finding the stupid, straightforward, natural solution, and it will be maintainable with no effort.
I wasn’t with you on the importance of maintainability until you said this. Yes, programming well and naturally is automatically maintainable.
Right on. Another way to put it: if you have to spend extra effort on maintainability, you’ve probably screwed up somewhere.
My name for this kind of behavior is “fetish”. For example, some people have a Law of Demeter fetish. Some people have a short function fetish. And so on, all kinds of little cargo cults around.
Allow me to illustrate with another example. One of my recent projects is mostly composed of small functions, but there’s this one function that is three screens long. What does it do? It draws a pie chart with legend. The only pie chart in the whole application. There’s absolutely no use refactoring it because it’s all unique code that doesn’t repeat and isn’t used anywhere else in the app. Pick the colors, draw the slices, draw the legend, stop. All very clear and straightforward, very easy to read and modify. A fetishist would probably throw a fit and start factoring it into small chunks, giving them descriptive names, maybe making it a “class” with some bullshit “parameters” that actually only ever take one value, etc, etc.
Well, that’s merely labeling, not actually advancing an argument. What kind of predictions are we talking about here? Where is our substantial disagreement, if any?
When I talk about maintainability I’m referring to specific sequences of events. In one of the most common negative scenarios, I’m asked to make one change to the functionality of a program, and I find that it requires me to make many coordinated edits in distinct source chunks (files, classes, functions, whatever). This is called “coupling” and is a quantifiable property of a program relative to some functional change specification.
“Maintainable” relative to that change means (among other things) low coupling. You want to change the pie chart to use dotted lines instead of solid inside the pie, and you find that this requires a change in only one code location—that’s low coupling.
Now what often happens is that someone needs a program that’s able to do both dotted-line pies and solid-line pies. And many times the “most natural” thing (by which I only mean, “what I see many programmers do”) is then to copy the pie-chart function, paste it elsewhere with a different name, and change the line style from solid to dotted.
That copy-paste programming “move” has introduced coupling, in the sense that if you want to make a change that affects all pie charts (dotted and solid alike) you’ll have to make the corresponding source change twice.
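To make that “move” concrete, here is a hypothetical sketch (Python, with invented names): the copy-paste version duplicates the whole routine in order to change one drawing attribute, while passing that attribute as a parameter keeps every shared change in a single place.

```python
# Hypothetical sketch of parametrizing instead of copy-pasting.
# The copy-paste version duplicates everything to vary one attribute:
#
#   def draw_solid_pie(data):  ... fifty lines ...
#   def draw_dotted_pie(data): ... the same fifty lines, one word changed ...
#
# A change affecting "all pie charts" then has to be made twice.

def draw_pie(data, line_style="solid"):
    """Compute pie slices; line_style is the only thing that varied
    between the two copy-pasted versions."""
    total = sum(data.values())
    slices = []
    for label, value in data.items():
        fraction = value / total
        # a shared change (say, a new rounding rule) happens here, once
        slices.append((label, round(fraction * 360, 1), line_style))
    return slices

solid = draw_pie({"a": 1, "b": 3})
dotted = draw_pie({"a": 1, "b": 3}, line_style="dotted")
```

The parameter is the cheapest possible factoring: it removes the duplicated copy without imposing any framework or class hierarchy.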
Someone who programs that way is eventually going to drive coupling through the roof (by repeated applications of this maneuver). At this point the program has become so difficult to change that it has to be rewritten from scratch. Plus, high coupling is also correlated with higher incidence of defects.
Now you may call someone whose coding style discourages copy-paste programming a “fetishist”, but that doesn’t change the fact that it is a style which results in lower overall costs for the same total quantity of “delta functionality” integrated over the life of the program.
My contention is that functions which are three screens long are, other things equal, more likely to result in copy-paste parametrizations than smaller functions. (More generally, code that exhibits a higher degree of composability is less susceptible to design mistakes of this kind, at the cost of being slightly harder to understand for a novice programmer.)
I’d probably look hard at this pie chart thingy and consider chopping it up, if I felt the risk mitigation was worth the effort. Or I might agree with you and decide to leave it alone. I would consider it stupid to have a “corporate policy” or a “project rule” or even a “personal preference” of keeping all functions under a screenful. That wouldn’t work, because more forces are in play than just function length.
Rather, I assess all the code I write against the criterion of “a small functional change is going to result in a small code change”, and improve the structure as needed. I have a largish bag of tricks for doing that, in several languages and programming paradigms, and I’m always on the lookout for more tricks to pick up.
What, specifically, do you disagree with in the above?
I agree with most of your comment, except the idea that you can anticipate in what directions your software is going to grow. That’s never actually worked for me. Whenever I tried designing for future requirements instead of current simplicity, clients found a way to throw me a curveball that made me go “oops, this new request screws up my whole design!”
If my program ever needs a second pie chart, it’s better to factor the functionality out then instead of now. Less guesswork, plus a three-screen-long function is way easier to factor than a set of small chunks is to refactor.
except the idea that you can anticipate in what directions your software is going to grow
It’s ironic that I should be suspected of claiming that. Let me reassure you that on this point we agree as well. (It’s looking more and more as if we have no substantial disagreement.)
My point is that the risk is perhaps lowest if you are going to add the second pie chart, but if someone else is, the three-screens-long function could be riskier than a slightly more factored version. Or not: there is no general rule involving only length.
If you want to make a pastie with that function I could give you an actual opinion. ;)
Object-oriented design is overrated. ;)
Programming is different from medicine. All the good programmers I know have learned their craft on the job. Silas doesn’t have to wait and learn without getting paid, his current skill level is already in demand.
I wouldn’t say “on the job”, necessarily. But it is only learned by programming, not by thinking about programming, attending lectures on programming, etc. Programming for class assignments can count for this.
Well, there is some benefit to reading good code, but you have to already have a reasonable idea what good code is for that to help.
try to solve each problem in the most natural manner
That happens to take a significant amount of skill and learning. Read a site like the Daily WTF and you see what too often comes out of letting untrained, untaught programmers do what they’re naturally inclined to do. One could learn a lot about programming simply by thinking about why the examples on that site are bad, and what principles would avoid them.
In practice you’re right: people have different ideas of maintainability. That is precisely the problem.
try to solve each problem in the most natural manner
That happens to take a significant amount of skill and learning.
But I don’t know of any way to acquire this “programming common sense” except on the job. Do you?
One could learn a lot about programming simply by thinking about why the examples on that site are bad, and what principles would avoid them.
Oh, no. What a terrible idea. If you do this without actually pushing through real-world projects of your own, you’ll come up with a lot of bullshit “principles” that will take forever to dislodge. In general, the ratio of actual work to abstract thinking about “principles” should be quite high.
Another case of “let them eat cake”. The very gap in my understanding is the jump from writing input-once/output-once algorithms to multi-resource, complex-UI programs, when existing open source applications have source files that don’t make sense to me and no one on the project finds it worth their time to bring me up to speed.
Between one-input, one-output programs and complex UIs are simple UIs, such as a program that loops, reading input and writing output, and maintains state while doing so.
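A minimal sketch of such a loop (Python, with invented command names): it reads commands one at a time, keeps state between them, and produces output, which is the essential step up from a run-once program.

```python
# Minimal sketch of a stateful input/output loop: the step between
# one-shot programs and full UIs. Commands come in, state persists,
# output goes out.

def run(commands):
    """Process a sequence of commands while maintaining a running total.
    (Replacing the list with input() would make this an interactive loop.)"""
    state = 0
    output = []
    for cmd in commands:
        if cmd == "quit":
            break
        elif cmd.startswith("add "):
            state += int(cmd[4:])
            output.append(f"total = {state}")
        else:
            output.append(f"unknown command: {cmd}")
    return output

print(run(["add 2", "add 3", "quit", "add 99"]))
```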
The complex UIs are mostly a matter of wrapping this sort of “event loop” around a given framework or UI library. Some frameworks have their own event loop that does this, and you instead write callbacks and other code that the event loop calls at the appropriate times.
Thanks, that helps. Now I just need to learn the nuts-and-bolts of particular libraries.
Sorry for not reading the follow-up discussion earlier.
Silas doesn’t have to wait and learn without getting paid, his current skill level is already in demand.
What do you mean by this? How can I be hired for programming based on just what I have now? Who hires people at my level, and how would they know whether I’m lying about my abilities? (Yes, I know, interviews, but they have to thin the field first.) Is there some major job finding trick I’m missing?
My degree isn’t in comp sci (it’s in mech. engineering and I work in structural), and my education in C++ is just high school AP courses and occasional times when I need automation.
Also, I’ve looked at the requests on e.g. rent-a-coder and they’re universally things I can’t get to a working .exe (though I could, of course, write the underlying algorithms).
Is there some major job finding trick I’m missing?
The best ‘trick’ for job-finding is to get one from someone you know. I’m not sure what you can do with that.
Generally speaking, there are a lot of people who aren’t good at thinking but have training in programming, and comparatively not a lot of people who are good at thinking but not good at programming, and the latter are more valuable than the former. If I were looking for someone entry-level for webdev (and I’m not), I’d be likely to hire you over a random person with a master’s degree in computer science and some experience with webdev.
The best ‘trick’ for job-finding is to get one from someone you know. I’m not sure what you can do with that.
Heh, that’s what I figured, and that’s my weak point. (At least you didn’t say, “Pff, just find one on the internet!” as some have been known to do.)
If I were looking for someone entry-level for webdev (and I’m not), I’d be likely to hire you over a random person with a master’s degree in computer science and some experience with webdev.
Thanks. I don’t doubt people would hire me if they knew me, but there is a barrier to overcome.
For instance, you may be able to get a programming job without at any point being asked to produce a code portfolio or to program in front of an interviewer.
I’d still be keen, by the way, to help you through a specific example that’s giving you trouble compiling. I believe that when smart people get confused by things which their designers ought to have made simple, it’s an opportunity to learn about improving similar designs.
HAI
CAN HAS STDIO?
I HAS A VAR
IM IN YR LOOP
UP VAR!!1
IZ VAR LEFTOVER 15 LIEK 0?
YARLY VISIBLE "FizzBuzz"
NOWAI IZ VAR LEFTOVER 3 LIEK 0?
YARLY VISIBLE "Fizz"
NOWAI IZ VAR LEFTOVER 5 LIEK 0?
YARLY VISIBLE "Buzz"
NOWAI VISIBLE VAR
KTHX
IZ VAR NOT SMALR THAN 100? KTHXBYE
IM OUTTA YR LOOP
KTHXBYE
It might help to note that dialects—and I don’t see any reason not to consider both the various kinds of ’netspeak and the various programming languages as such, in most cases of human-to-human interaction—are almost exclusively used as methods of signaling cultural affiliation. In this case, I parsed bogus’ use of ’netspeak as primarily an avoidance of affiliation with formal programming culture (which tends to linger even when programs are set out in standard English, in my experience), and secondarily a way of bringing in the emotional affect of the highly-social ’netspeak culture.
It is ‘mammal stuff’, but it seems to be appropriate in this instance, to me.
Thanks. I was mostly kidding, but I appreciate the extra perspective.
(Signalling my own affiliation as a true geek, I actually attempted to download a LOLCODE interpreter and run it on the above, but the ones I could get my hands on seem to be broken. I would upvote it if I could run it, and it gave the right answer.)
Looks right to me, though I wound up reformatting the loop a little. That’s most likely a result of me being in the habit of using for loops for everything, and forgetting the proper formatting for other kinds, rather than being an actual flaw in the code—I’m willing to give bogus the benefit of the doubt about it, in any case.
Pretty much. Both you and bogus apparently forget to put an initial value into var (unless your language of choice automatically initializes them as 0).
Using while(1) with a conditional return is a little bizarre, when you can just go while(var<100).
Of course, my own draft used
if(var % 3 == 0 && var % 5 == 0)
instead of the more reasonable var % 15.
Pretty much. Both you and bogus apparently forget to put an initial value into var (unless your language of choice automatically initializes them as 0).
Mine does, but I’m aware that it’s good coding practice to specify anyway. I was maintaining his choice.
Using while(1) with a conditional return is a little bizarre, when you can just go while(var<100).
Yep, but I don’t remember how else to signify an intrinsically infinite loop, and bogus’ code seems to use an explicit return (which I wanted to keep for accuracy’s sake) rather than checking the variable as part of the loop.
My method of choice would be for(var = 1; var <= 100; ++var){} (using LSL format), which skips both explicitly returning and explicitly incrementing the variable inside the loop body.
Evidently writing about the FizzBuzz problem on a programming blog results in a nigh-irresistible urge to code up a solution. The comments here, on Digg, and on Reddit—nearly a thousand in total—are filled with hastily coded solutions to FizzBuzz. Developers are nothing if not compulsive problem solvers.
It certainly wasn’t my intention, but a large portion of the audience interpreted FizzBuzz as a challenge. I suppose it’s like walking into Guitar Center and yelling ‘most guitarists can’t play Stairway to Heaven!’ You might be shooting for a rational discussion of Stairway to Heaven as a way to measure minimum levels of guitar competence.
But what you’ll get, instead, is a blazing guitarpocalypse.
Somehow, the other responses to this comment reminded me of that.
main = putStr . unlines $ fizzbuzz 100
fizzbuzz m = map f [1..m] where
  f n | n `mod` 15 == 0 = "FizzBuzz"
  f n | n `mod` 3 == 0 = "Fizz"
  f n | n `mod` 5 == 0 = "Buzz"
  f n = show n
I tested myself with MATLAB (which makes it quite easy) out of some unnecessary curiosity—it took me about seven minutes, a fair part of which was debugging.
% FizzBuzz - print all numbers from 1 to 100, replacing multiples of 3 with
% "fizz", multiples of 5 with "buzz", and multiples of 3 and 5 with
% "fizzbuzz".
clear
clc
for i = 1:100
    fb = '';
    if length(find(factor(i)==3)) > 0
        fb = [fb 'fizz'];
    end
    if length(find(factor(i)==5)) > 0
        fb = [fb 'buzz'];
    end
    if length(fb) > 0
        fprintf([fb '\n'])
    else
        fprintf('%5.0f\n', i)
    end
end
A better program (by which I mean “faster”, not “clearer” or “easier to modify” or “easier to maintain”) would replace the tests with something less intensive—for example, incrementing two counters (one for 3 and one for 5) and zeroing them when they hit their respective desired factors.
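A sketch of that counter-based variant (in Python rather than MATLAB, for brevity; lowercase output matching the version above): each counter is incremented every iteration and reset when it reaches its factor, so no division or factorization is needed.

```python
# Counter-based FizzBuzz: the per-number divisibility tests are replaced
# by two counters that reset when they reach 3 and 5 respectively.

def fizzbuzz(n=100):
    lines = []
    c3 = c5 = 0
    for i in range(1, n + 1):
        c3 += 1
        c5 += 1
        word = ""
        if c3 == 3:           # i is a multiple of 3
            word += "fizz"
            c3 = 0
        if c5 == 5:           # i is a multiple of 5
            word += "buzz"
            c5 = 0
        lines.append(word or str(i))
    return lines
```

Whether this is actually faster than `mod` on modern hardware is an open question, but it illustrates the trade: a little extra state in exchange for cheaper per-iteration work.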
I wouldn’t be; I’d take it as (anecdotal) evidence that the craft of programming is systematically undertaught. By which I mean, the tiny, nano-level rules of how best to interact with this strange medium that is code.
(Recently added to my growing backlog of possibly-top-level-post-worthy topics is “how and why programming may be a useful skill for rationalists to pick up”...)
I have to admit, I was looking up functions in the docs, too—I would have been a bit faster working in pseudocode on paper.
Edit: Also, my training is in engineering, not comp. sci. - the programming curriculum at my school consists of one MATLAB course.
(Recently added to my growing backlog of possibly-top-level-post-worthy topics is “how and why programming may be a useful skill for rationalists to pick up”...)
Querying my brain for cached thoughts:
Programming encourages clear thinking—like evolution, it is immune to rationalization.
Thinking in terms of algorithms rather than problem-answer pairs; algorithms generalize.
That depends on your incentive structure. You may well be right if you work as a contract programmer. If you work as a salaried employee in a large company the calculation could look different.
Yes, absolutely. The former path (working or contracting for many small companies) is the one I’d heartily recommend to novices. The latter path… scares me.
I write maintainable code anyway, and I’m friends with several people who maintain my past code and don’t seem to complain. No, working at BigCo scares me because it tends to be a very one-sided activity. Employees at small companies and contractors face much more variety in what they have to do every day.
Bad design choices are much more expensive to fix down the road than when they were created. You seem to be saying that any time spent addressing this issue is worthless in comparison to spending more time empathizing with the customer.
You want to do user-facing stuff? Then don’t bother with desktop programming, write webapps. HTML and JavaScript are much easier than C++. You don’t even have to learn the server side at first, a lot of useful stuff can be written as a standalone html file with no server support. For example you could make your own draggable interface to the free map tiles from http://openstreetmap.org—basically it’s all just cleverly positioned image elements named like this, within a rectangular element that responds to mouse events. Or, if a little server-side coding doesn’t scare you, you could make a realtime chat webpage. Stuff like that.
If you need any help at all, my email is vladimir.slepnev at gmail and I’m often online in gtalk.
“Easy” is one goal you can have when learning to program. “Soundly written and maintainable” is another. Unfortunately these two goals are sometimes at odds.
Language and platform don’t really matter a whole lot, in the grand scheme of things; learning how to write maintainable programs does matter. Having had lots of experience extending or modifying source code written by others, I wish more novice programmers would make that their number one goal.
I disagree!
A (real) novice programmer’s number one worry should be getting paid. Why should they divert their attention and spend extra effort on writing maintainable code, just so you have an easier time afterward? That’s awfully selfish advice.
You might claim writing maintainable code will pay off for them, but to properly evaluate that we need to weigh the marginal utilities. What’s better, an extra hour improving the maintainability of your code, or an extra hour spent empathizing with the client? Ummm… And you can’t say both of those things are first priority, that’s not how it works. I’ve been coding for money for half of my life so listen to my words, ye lemmings: ship the thing, make the client happy, get paid. That’s number one. Maintainability ain’t number one, it ain’t even in the top ten.
What are your other nine?
(Edited)
For one thing, that doesn’t sound like something that’s actionable for Silas in the context of his request for advice, compared to advising him to learn some specific techniques, such as MVC, which make for more maintainable code.
For another, your worry should be “getting paid” after you have reached a reasonable level of proficiency. A medical student’s first concern isn’t getting paid, it’s learning how not to harm patients. Similarly if you’re learning programming, as opposed to confident enough of your chops to go on the market, you have a responsibility to learn how not to harm future owners of your code through negligent design practices. That a majority of programmers today fail to fulfill that basic responsibility doesn’t absolve you of it.
Programming is different from medicine. All the good programmers I know have learned their craft on the job. Silas doesn’t have to wait and learn without getting paid, his current skill level is already in demand.
But that’s tangential. More importantly, whenever I hear the word “maintainability” I feel like “uh oh, they wanna sell me some doctrinaire bullshit”. Maintainability is one of those things everyone has a different idea of. In my opinion you should just try to solve each problem in the most natural manner, and maintainability will happen automatically.
Allow me to illustrate with an example. One of my recent projects was a user interface for IPTV set-top boxes. Lots and lots of stuff like “my account”, “channels I’ve subscribed to”, et cetera. Now, the natural way to solve this problem is to have a separate file (a “page”) for each screen that the user sees, and ignore small amounts of code duplication between pages. If you get this right, it’s pretty much irrelevant how crappily each individual page is coded, because it’s only five friggin’ kilobytes and a maintenance programmer will easily find and change any functionality they want. On the other hand, if you get this wrong… and it’s really fucking distressing how many experienced programmers manage to get this wrong… making a Framework with a big Architecture that separates each page into small reusable chunks, perfectly Factored, with shiny and impeccable code… maintenance tasks become hell. And so it is with other kinds of projects too, in fact with most projects I’ve faced in my life. Focus on finding the stupid, straightforward, natural solution, and it will be maintainable with no effort.
I wasn’t with you on the importance of maintainability until you said this. Yes, programming well and naturally is automatically maintainable.
Right on. Another way to put it: if you have to spend extra effort on maintainability, you’ve probably screwed up somewhere.
My name for this kind of behavior is “fetish”. For example, some people have a Law of Demeter fetish. Some people have a short function fetish. And so on, all kinds of little cargo cults around.
Allow me to illustrate with another example. One of my recent projects is mostly composed of small functions, but there’s this one function that is three screens long. What does it do? It draws a pie chart with legend. The only pie chart in the whole application. There’s absolutely no use refactoring it because it’s all unique code that doesn’t repeat and isn’t used anywhere else in the app. Pick the colors, draw the slices, draw the legend, stop. All very clear and straightforward, very easy to read and modify. A fetishist would probably throw a fit and start factoring it into small chunks, giving them descriptive names, maybe making it a “class” with some bullshit “parameters” that actually only ever take one value, etc, etc.
Well, that’s merely labeling, not actually advancing an argument. What kind of predictions are we talking about here? Where is our substantial disagreement, if any?
When I talk about maintainability I’m referring to specific sequences of events. In one of the most common negative scenarios, I’m asked to make one change to the functionality of a program, and I find that it requires me to make many coordinated edits in distinct source chunks (files, classes, functions, whatever). This is called “coupling” and is a quantifiable property of a program relative to some functional change specification.
“Maintainable” relative to that change means (among other things) low coupling. You want to change the pie chart to use dotted lines instead of solid inside the pie, and you find that this requires a change in only one code location—that’s low coupling.
Now what often happens is that someone needs a program that’s able to do both dotted-line pies and solid-line pies. And many times the “most natural” thing (by which I only mean, “what I see many programmers do”) is then to copy the pie-chart function, paste it elsewhere with a different name, and change the line style from solid to dotted.
That copy-paste programming “move” has introduced coupling, in the sense that if you want to make a change that affects all pie charts (dotted and solid alike) you’ll have to make the corresponding source change twice.
Someone who programs that way is eventually going to drive coupling through the roof (by repeated applications of this maneuver). At this point the program has become so difficult to change that it has to be rewritten from scratch. Plus, high coupling is also correlated with higher incidence of defects.
Now you may call a “fetishist” someone whose coding style discourages copy-paste programming, that doesn’t change the fact that it is a style which results in lower overall costs for the same total quantity of “delta functionality” integrated over the life of the program.
My contention is that functions which are three screens long are, other things equal, more likely to result in copy-paste parametrizations than smaller functions. (More generally, code that exhibits a higher degree of composability is less susceptible to design mistakes of this kind, at the cost of being slightly harder to understand for a novice programmer.)
I’d probably look hard at this pie chart thingy and consider chopping it up, if I felt the risk mitigation was worth the effort. Or I might agree with you and decide to leave it alone. I would consider it stupid to have a “corporate policy” or a “project rule” or even a “personal preference” of keeping all functions under a screenful. That wouldn’t work, because more forces are in play than just function length.
Rather, I assess all the code I write against the criterion of “a small functional change is going to result in a small code change”, and improve the structure as needed. I have a largish bag of tricks for doing that, in several languages and programming paradigms, and I’m always on the lookout for more tricks to pick up.
What, specifically, do you disagree with in the above?
I agree with most of your comment, except the idea that you can anticipate in what directions your software is going to grow. That’s never actually worked for me. Whenever I tried designing for future requirements instead of current simplicity, clients found a way to throw me a curveball that made me go “oops, this new request screws up my whole design!”
If my program ever needs a second pie chart, it’s better to factor the functionality out then instead of now. Less guesswork, plus a three-screen-long function is way easier to factor than a set of small chunks is to refactor.
It’s ironic that I should be suspected of claiming that. Let me reassure that you on this point, we agree as well. (It’s looking more and more as if we have no substantial disagreement.)
My point is that the risk is perhaps lowest if you are going to add the second pie chart, but if someone else is, the three-screens-long function could be riskier than a slightly more factored version. Or not: there is no general rule involving only length.
If you want to make a pastie with that function I could give you an actual opinion. ;)
Object-oriented design is overrated. ;)
I wouldn’t say “on the job”, necessarily. But it is only learned by programming, not by thinking about programming, attending lectures on programming, etc. Programming for class assignments can count for this.
Well, there is some benefit to reading good code, but you have to already have a reasonable idea what good code is for that to help.
That happens to take a significant amount of skill and learning. Read a site like the Daily WTF and you see what too often comes out of letting untrained, untaught programmers do what they’re naturally inclined to do. One could learn a lot about programming simply by thinking about why the examples on that site are bad, and what principles would avoid them.
In practice you’re right: people have different ideas of maintainability. That is precisely the problem.
But I don’t know of any way to acquire this “programming common sense” except on the job. Do you?
Oh, no. What a terrible idea. If you do this without actually pushing through real-world projects of your own, you’ll come up with a lot of bullshit “principles” that will take forever to dislodge. In general, the ratio of actual work to abstract thinking about “principles” should be quite high.
Open source.
Another case of “let them eat cake”. The very gap in my understanding is the jump between writing input once/output once algorithms, to multi-resource complex-UI programs, when existing open source applications have source files that don’t make sense to me and no one on the project finds it worth their time to bring me up to speed.
Between one-input, one-output programs and complex UIs are simple UIs, such as a program that loops, reading input and writing output, and maintains state while doing so.
The complex UIs are mostly a matter of wrapping this sort of “event loop” around a given framework or UI library. Some frameworks instead have their own event loop that does this, and instead you write callbacks and other code that the event loop calls at the appropriate times.
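A minimal sketch of both stages, in Python; the command name `quit` and the event names are made up for illustration. The first function is the “simple UI”: a loop that reads input, updates state, and writes output. The second shows the framework-style inversion, where the loop owns control and calls your handlers at the appropriate times.

```python
def run_loop(lines):
    """Simple-UI stage: loop over input, maintain state (a running total)."""
    total = 0
    out = []
    for line in lines:          # stand-in for "read input until EOF"
        if line == "quit":      # hypothetical exit command
            break
        total += int(line)      # update state between inputs
        out.append(total)       # the "output" step of the loop
    return out

def run_with_callbacks(events, handlers):
    """Framework-style event loop: it calls your code, not vice versa."""
    for name, payload in events:
        if name in handlers:    # dispatch to the registered callback
            handlers[name](payload)
```

For example, `run_loop(["1", "2", "3", "quit"])` returns the running totals `[1, 3, 6]`, and a GUI toolkit’s main loop is essentially `run_with_callbacks` with mouse and keyboard events as the `events` stream.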
Thanks, that helps. Now I just need to learn the nuts-and-bolts of particular libraries.
Sorry for not reading the follow-up discussion earlier.
What do you mean by this? How can I be hired for programming based on just what I have now? Who hires people at my level, and how would they know whether I’m lying about my abilities? (Yes, I know, interviews, but they have to thin the field first.) Is there some major job-finding trick I’m missing?
My degree isn’t in comp sci (it’s in mech. engineering and I work in structural), and my education in C++ is just high school AP courses and occasional times when I need automation.
Also, I’ve looked at the requests on e.g. rent-a-coder, and they’re universally things I can’t get to a working .exe (though of course I could write the underlying algorithms).
The best ‘trick’ for job-finding is to get one from someone you know. I’m not sure what you can do with that.
Generally speaking, there are a lot of people who aren’t good at thinking but have training in programming, and comparatively not a lot of people who are good at thinking but not good at programming, and the latter are more valuable than the former. If I were looking for someone entry-level for webdev (and I’m not), I’d be likely to hire you over a random person with a master’s degree in computer science and some experience with webdev.
Heh, that’s what I figured, and that’s my weak point. (At least you didn’t say, “Pff, just find one on the internet!” as some have been known to do.)
Thanks. I don’t doubt people would hire me if they knew me, but there is a barrier to overcome.
I’m sorry to be the one to break the news to you, but the IT industry has appallingly low standards for hiring.
For instance, you may be able to get a programming job without at any point being asked to produce a code portfolio or to program in front of an interviewer.
I’d still be keen, by the way, to help you through a specific example that’s giving you trouble compiling. I believe that when smart people get confused by things which their designers ought to have made simple, it’s an opportunity to learn about improving similar designs.
A quick solution to the FizzBuzz quiz:
*in ur LessWrong, upvotin’ ur memez*
For the first time here I’m having a Buridan moment—I don’t know whether to upvote or downvote the above.
It might help to note that dialects—and I don’t see any reason not to consider both the various kinds of ‘netspeak and the various programming languages as such, in most cases of human-to-human interaction—are almost exclusively used as methods of signaling cultural affiliation. In this case, I parsed Bogus’ use of ‘netspeak as primarily an avoidance of affiliation with formal programming culture (which tends to linger even when programs are set out in standard English, in my experience), and secondarily a way of bringing in the emotional affect of the highly-social ’netspeak culture.
It is ‘mammal stuff’, but it seems to be appropriate in this instance, to me.
Thanks. I was mostly kidding, but I appreciate the extra perspective.
(Signalling my own affiliation as a true geek, I actually attempted to download a LOLCODE interpreter and run it on the above, but the ones I could get my hands on seem to be broken. I would upvote it if I could run it, and it gave the right answer.)
integer var
while(1)
{
    ++var
    if (var % 15 == 0)
        print "FizzBuzz"
    else if (var % 3 == 0)
        print "Fizz"
    else if (var % 5 == 0)
        print "Buzz"
    else
        print var
    if !(var < 100)
        return
}
Looks right to me, though I wound up reformatting the loop a little. That’s most likely a result of me being in the habit of using for loops for everything, and forgetting the proper formatting for other kinds, rather than being an actual flaw in the code—I’m willing to give bogus the benefit of the doubt about it, in any case.
Pretty much. Both you and bogus apparently forgot to put an initial value into var (unless your language of choice automatically initializes variables to 0).
Using while(1) with a conditional return is a little bizarre, when you can just go while(var<100).
Of course, my own draft used if(var % 3 == 0 && var % 5 == 0) instead of the more reasonable x%15.
Mine does, but I’m aware that it’s good coding practice to specify anyway. I was maintaining his choice.
Yep, but I don’t remember how else to signify an intrinsically infinite loop, and bogus’ code seems to use an explicit return (which I wanted to keep for accuracy’s sake) rather than checking the variable as part of the loop.
My method of choice would be for(var=0; var<100; ++var){} (using LSL format), which skips both explicitly returning and explicitly incrementing the variable.
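For comparison, here is a sketch of the three loop shapes discussed in this subthread, in Python (which has neither `++` nor a C-style `for`; `range` plays that role). The bounds are chosen so all three cover 1 through 100; the helper function just computes the FizzBuzz word for one number.

```python
def fizzbuzz_line(n):
    """The per-number FizzBuzz decision, shared by all three loops."""
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

def v1():
    """Infinite loop with an explicit exit test, as in bogus's code."""
    out, n = [], 0
    while True:
        n += 1
        out.append(fizzbuzz_line(n))
        if not n < 100:
            break
    return out

def v2():
    """The test moved into the loop condition instead."""
    out, n = [], 0
    while n < 100:
        n += 1
        out.append(fizzbuzz_line(n))
    return out

def v3():
    """Counting-loop style: no explicit exit or increment needed."""
    return [fizzbuzz_line(n) for n in range(1, 101)]
```

All three produce the same 100 lines; the differences are purely in where the loop’s exit condition lives.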
Jeff Atwood also makes this meta point about blogging about fizzbuzz:
Somehow, the other responses to this comment reminded me of that.
Somehow, I believe this is my fault for having mentioned trying it myself, and for that, I apologize.
If all you have is regex s/.+/nail/
Warning: Do not try this (or any other perl coding) at home!
I think anyone who applies to a programming job and can’t write this (in whatever language) deserves something worse than being politely turned down.
I tested myself with MATLAB (which makes it quite easy) out of some unnecessary curiosity—it took me about seven minutes, a fair part of which was debugging.
I feel rather ashamed of that, actually.
As everyone else seems to be posting their code:
A better program (by which I mean “faster”, not “clearer” or “easier to modify” or “easier to maintain”) would replace the tests with something less intensive—for example, incrementing two counters (one for 3 and one for 5) and zeroing them when they hit their respective desired factors.
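A minimal sketch of that counter idea in Python (whether it is actually faster than the modulo tests depends heavily on the language and compiler): keep two small counters, increment both each iteration, and reset each when it reaches its factor.

```python
def fizzbuzz_counters(limit=100):
    """FizzBuzz without modulo: two counters reset at 3 and 5."""
    out = []
    c3 = c5 = 0
    for n in range(1, limit + 1):
        c3 += 1
        c5 += 1
        word = ""
        if c3 == 3:          # n is a multiple of 3
            word += "Fizz"
            c3 = 0
        if c5 == 5:          # n is a multiple of 5
            word += "Buzz"
            c5 = 0
        out.append(word or str(n))
    return out
```

Note that the multiple-of-15 case falls out for free: both counters hit their limits on the same iteration, so the two appends combine into “FizzBuzz” without a separate test.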
I wouldn’t be; I’d take it as (anecdotal) evidence that the craft of programming is systematically undertaught. By which I mean, the tiny, nano-level rules of how best to interact with this strange medium that is code.
(Recently added to my growing backlog of possibly-top-level-post-worthy topics is “how and why programming may be a useful skill for rationalists to pick up”...)
I have to admit, I was looking up functions in the docs, too—I would have been a bit faster working in pseudocode on paper.
Edit: Also, my training is in engineering, not comp. sci. - the programming curriculum at my school consists of one MATLAB course.
Querying my brain for cached thoughts:
Programming encourages clear thinking—like evolution, it is immune to rationalization.
It encourages thinking in terms of algorithms rather than problem-answer pairs, and algorithms generalize.
That depends on your incentive structure. You may well be right if you work as a contract programmer. If you work as a salaried employee in a large company the calculation could look different.
Yes, absolutely. The former path (working or contracting for many small companies) is the one I’d heartily recommend to novices. The latter path… scares me.
Maybe you are scared because you are aware that writing maintainable code is harder than writing code without that constraint?
I write maintainable code anyway, and I’m friends with several people who maintain my past code and don’t seem to complain. No, working at BigCo scares me because it tends to be a very one-sided activity. Employees at small companies and contractors face much more variety in what they have to do every day.
Bad design choices are much more expensive to fix down the road than when they were created. You seem to be saying that any time spent addressing this issue is worthless in comparison to spending more time empathizing with the customer.
Thanks for the advice and generous offer of help!