Have lots of problems prepared over a wide range of difficulty. Start with problems you’re pretty sure the student can solve, and turn up the difficulty slowly.
Actually, I have the whole thing now, and seed it when I can. My, the internet’s a powerful thing when used properly. :)
Well put! Have some internet status points!
I know it’s old, now, but can you seed the latter again? The swarm’s missing about 9% right now.
VFT appears primarily targeted at facilitators and contains much focused material not in VFT
er?
._.
Um—why not get a control group? I’d happily volunteer.
I mean, it might not be perfectly randomized, but you can at least watch for confounders from just being in this community, or introspecting for data collection, or whatnot.
Oh, agreed! Still, journaling in the morning has been rather more useful than failing to journal in the evening.
Consider modifying the habit—maybe journaling at night is harder for you to maintain than in the morning, or around lunch, or something like that? (This was my experience—I tried journaling at night for years and repeatedly failed; now I journal in the morning, and it’s been easy and pleasant. I don’t know any special reason why this would work for you, but it’s cheap to share the idea.)
How is the distinction between functional and imperative programming languages “not a real one”?
“Not a real one” is sort of glib. Still, I think Jim’s point stands.
The two words “functional” and “imperative” do mean different things. The problem is that, if you want to give a clean definition of either, you wind up talking about the “cultures” and “mindsets” of the programmers that use and design them, rather than actual features of the language. Which starts making sense, really, when you note that “functional vs. imperative” is a perennial holy war, and that these terms have become the labels for sides in that war, rather than precise technical positions.
I mean, I am somewhat partisan in that war, and rather agree that, e.g., we should point new programmers to Scheme rather than Python or Java. But presenting “functional” vs. “imperative” as the major division in thinking about programming languages is epistemically dirty, when there are so many distinctions between languages that matter just as much, and describe things more precisely.
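To make that concrete, here’s a toy sketch of my own (not Jim’s example; the function names are just illustrative): the same computation written in an “imperative” style and a “functional” style, both in plain Python.

    # The same computation, two idioms, one language.

    def sum_of_squares_imperative(numbers):
        # Imperative style: step-by-step mutation of an accumulator.
        total = 0
        for n in numbers:
            total += n * n
        return total

    def sum_of_squares_functional(numbers):
        # Functional style: describe the result as a composition of
        # expressions, with no mutation.
        return sum(n * n for n in numbers)

    assert sum_of_squares_imperative([1, 2, 3]) == sum_of_squares_functional([1, 2, 3]) == 14

Nothing above hinges on a feature Python has or lacks; the split is in how the programmer thinks about state and composition, which is why the definitions keep sliding toward culture and mindset.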
(Jim: fair rephrasing?)
I maintain a Spotify playlist, here. If you have Spotify, this should be a direct link: spotify:user:fiddlemath:playlist:6Iv5fSaguXWHta0Iu80i2N
A few game and movie soundtracks. Instrumental or nearly-instrumental, some odd, kind-of-jangly loud stuff occasionally.
Probably not as good as musicForProgramming(); is, but you can pick among the tracks a lot more easily.
I try to write my journal for me, about ten years from now. So, I don’t spend much time explaining who people are that I know very well, or what my overall situation is—but I do spend quite some time trying to express mental states, because I know that how I think now differs vastly from my thinking ten years ago, and I expect similar changes into the future.
On the other hand, I’ve had lots of experience with trying and failing to understand what I’ve written in programming and mathematics, so I’ve internalized the fact that future-me might not even understand an explanation of things I think are obvious right now. ymmv.
“Influencing” is pretty neutral, if not very specific. “Exploiting the halo effect” is too long, but precise.
My reading of the given quote is the same as buybuy’s. Maybe you’re talking about a more general process? Your comment here is tantalizing, but I don’t have any particular reason to believe it; can you give examples, or explain it further, or something?
If they deserve any credibility, scientists must have some process by which they drop bad truth-finding methods instead of repeating them out of blind tradition.
Plenty of otherwise-good science is done based on poor statistics. Keep in mind, there are tons and tons of working scientists, and they’re already pretty busy just trying to understand the content of their fields. Many are likely to view improved statistical methods as an unneeded step in getting a paper published. Others are likely to view overthrowing NHST as a good idea, but not something that they themselves have the time or energy to do. Some might repeat it out of “blind tradition”—but keep in mind that the “blind tradition” is an expensive-to-move Schelling point in a very complex system.
I do expect that serious scientific fields will, eventually, throw out NHST in favor of more fundamentally-sound statistical analyses. But, like any social change, it’ll probably take decades at least.
Do you believe scientific results?
Unconditionally? No, and neither should you. Beliefs don’t work that way.
If a scientific paper gives a fundamentally-sound statistical analysis of the effect it purports to prove, I’ll give it more credence than a paper rejecting the null hypothesis at p < 0.05. On the other hand, a study rejecting the null hypothesis at p < 0.05 is going to provide far more useful information than a small collection of anecdotes, and both are probably better than my personal intuition in a field I have no experience with.
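To illustrate that contrast with made-up numbers (this sketch and its scipy calls are my own addition, not anything from a paper under discussion): the same hypothetical data, analyzed once NHST-style and once by reporting a posterior directly.

    # Hypothetical data: 36 successes in 50 trials, null success rate 0.5.
    from scipy import stats  # binomtest requires scipy >= 1.7

    successes, trials, null_rate = 36, 50, 0.5

    # NHST-style report: exact binomial test; "reject the null at p < 0.05".
    p_value = stats.binomtest(successes, trials, null_rate).pvalue
    print(f"two-sided p-value: {p_value:.4f}")

    # A more fundamentally-sound report: the posterior over the success rate.
    # With a uniform Beta(1, 1) prior, conjugacy gives a Beta posterior.
    posterior = stats.beta(successes + 1, trials - successes + 1)
    low, high = posterior.ppf([0.025, 0.975])
    print(f"posterior mean: {posterior.mean():.3f}, "
          f"95% credible interval: ({low:.3f}, {high:.3f})")

The p-value only says how surprising the data would be if the null were true; the posterior says where the underlying rate plausibly sits, which is the extra information that earns the second kind of report more credence.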
An important aspect of self-image is whether people consider themselves “successful” or “losers”, based on their previous successes and failures. But we have a bias here: the feeling from a successful or failed task is not proportionate to its difficulty. So people can manipulate their outcomes by only doing easy tasks, which have a high success ratio. When used strategically, this can be helpful; but doing it automatically all the time is harmful. Learning new things requires trying new things, but that carries a risk of failure, which can harm self-image, with possible bad consequences such as learned helplessness. On the other hand, protecting your self-image all the time means never learning anything. Updating means admitting you were (more) wrong. How do you deal with this?
When you practice or learn, ensure that each session ends on a high note. Either push yourself to accomplish something for the first time and then stop immediately, or end with an exercise that you used to find difficult but is now comfortably within your abilities. This is, apparently, commonly used in animal training—see the “laws of shaping”.
I suspect this works because of the peak-end rule—even if you’ve been working above your comfortable difficulty for most of the session, you’ll remember it as a session where you did difficult things and became more competent by the end. You won’t remember the session as frustrating or painful if the end is especially satisfying.
NHST has been taught as The Method Of Science to lots of students. I remember setting these up explicitly in science class. I expect it will remain in the fabric of any given quantitative field until removed with force.
Later. Keep the project requirements small until it’s working well. Get it to serve one desired purpose very well. Only then look at extending its use.
This is true for any coding project, but an order of magnitude more true for a volunteer project. If you want to get a programmer to actually volunteer for a project, convince them that the project will see great rewards while it’s still small. In fact, you basically want to maximize intuitive value while minimizing expected work. It feels much better to achieve your actual, original goal with a small amount of work than to realize that your tiny first step is only the start of achieving your goal.