norswap
I want to discuss the specific example you picked: Etsy & A/B (or generally, data-driven) testing.
I’ll start by agreeing to your premise in the abstract: Etsy could probably have done better by using a better A/B testing methodology and doing so sooner.
But: while superior tools used effectively are superior, they also tend to be harder to use and to cause more damage when misused. I’ve seen a bunch of math-based, data-driven analyses that weren’t worth a damn because they were misused. The more sophisticated these tools become, the easier they are to misuse.
A popular cautionary tale in that space is “Steve Jobs vs focus groups”. Focus groups are a primitive but, I’d argue, *scientific* method to determine what your customers want. Yet what they come up with is often lackluster to the point where a visionary with intuition and taste can run circles around them. Sure, in this primitive case we can easily point to some design flaws in the process. For instance, bringing various people together to discuss a design is likely to produce a compromise design that truly satisfies no one. But is A/B testing free of any such psychological flaws? I think not, and now you also risk screwing up the statistical analysis in one hundred different ways.
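To give a concrete (if toy) illustration of just one of those ways: the little Python simulation below, with numbers I made up, shows the classic “peeking” mistake, where you check an A/B test repeatedly and stop as soon as p < 0.05. Even when the two variants are identical, this declares a “winner” far more often than the nominal 5%.

```python
# Toy simulation (invented numbers) of "peeking" at an A/B test:
# both variants convert at the same 10% rate, yet stopping the test as soon
# as p < 0.05 inflates the false-positive rate well above the nominal 5%.
import random
from statistics import NormalDist

def peeking_test(n_looks=20, n_per_look=100, base_rate=0.10):
    a_succ = b_succ = n = 0
    for _ in range(n_looks):
        a_succ += sum(random.random() < base_rate for _ in range(n_per_look))
        b_succ += sum(random.random() < base_rate for _ in range(n_per_look))
        n += n_per_look
        pa, pb = a_succ / n, b_succ / n
        pooled = (a_succ + b_succ) / (2 * n)
        se = (2 * pooled * (1 - pooled) / n) ** 0.5  # pooled two-proportion z-test
        if se > 0 and 2 * (1 - NormalDist().cdf(abs(pa - pb) / se)) < 0.05:
            return True  # a "significant" difference declared, wrongly
    return False

random.seed(0)
trials = 1000
print(sum(peeking_test() for _ in range(trials)) / trials)  # typically well above 0.05
```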
Second, trade-offs. If Etsy chose to perfect its A/B testing methodology, it would have to forego doing other things, because it does not have unlimited resources (and even if it did, an entity that becomes too large to coordinate effectively comes with a slew of issues — this is well documented: The Mythical Man-Month, etc.). Could it be that their unsophisticated/late application of A/B testing was an effective trade-off in terms of resource use (à la 20/80 principle)?
I think a part of instrumental rationality is the ability to decide which tools to deploy.
Generally, more sophisticated tools can yield better results, but they come with an inherent cost. You seem to ignore this basic reality here. Cases where it’s as easy to do the smart thing as to do the dumb thing exist, but they’re not everywhere—especially in a business context (and even more so in startups), where there is strong evolutionary pressure to pick such low-hanging fruit.
It doesn’t have to be optimal; the question is whether it is better. Is it better to wash all the time (as described in the post); like most people do (let’s say before eating and after using the bathroom); once or twice a day; or not at all (hands only washed during showers)? I’m not quite sure that “all the time” is better (it could be, but I’m not sure).
There is clearly a phenomenon of adapting to pathogens. I’ve heard it firsthand from at least two people who worked in less sanitary areas (South American slums and the Central African countryside). There is no doubt in my mind that they were better off going through a bit of sickness in order to avoid the overhead of constant hand-washing.
I’ve also never heard about the gains in public hygiene due to hand washing in the general public (I mean that literally; I’m pointedly not saying they don’t exist!). In a medical context, sure, the last thing we want is medical personnel spreading pathogens to vulnerable people.
I came to the comment section expecting to see someone pointing out that not washing your hands so much could improve your immune system by exposing you to more germs, pathogens, etc.
Well, since nobody did, I’m pointing it out. The argument seems sound to me. Is there something to be said against this perspective? Or something more in favor of it?
Clearing away a fully general counter-argument first: everything is based on some amount of trust; radical doubt just doesn’t scale. You couldn’t trust most of what your science textbook tells you without running a lot of experiments yourself, which people don’t tend to do.
With that out of the way, you can decide who to trust based on other information. So in this case, you can look at the collection of people reporting sports-related improvement, and see how it overlaps with people saying that <dubious thing> made them feel better.
As far as I know, there’s a PLETHORA of studies linking exercise to health. You say you assume they are p-hacked, etc. Hmm, why? I know science can be unreliable, but when everything we have points in the same direction, and there’s a large volume of it, well, that’s certainly some strong evidence.
I also seem to recall that elite athletes do in fact live significantly longer than the general population on average.
People report they feel better after they take up exercise / get in shape, though. This is not strictly health, but I’d be very surprised if someone tried to argue it’s not correlated. I’d also be surprised if everyone were self-deceiving — especially since that would make me one of them ;)
As for cognitive benefits… I’m more skeptical of that. I haven’t experienced anything profound on that front. But you do better work when you feel better throughout the day. I think my ability to focus for longer periods of time improved slightly.
I don’t think we necessarily disagree. Photofeeler does not strike me as requiring a large effort. But taking new pictures did. (In my case the new pictures did work better, so that was a required step.)
I think what you’re saying here is that taking pictures wasn’t a big effort for you (since just a friend could do it?). But for me too it was just my brother, who lives with me, and my mobile phone.
And objectively, I expect for some people this is a cakewalk, but for me it felt very tedious (but at least I ended up doing it! though it required quite a bit of willpower, which explains why other people like me would end up never implementing this strategy).
Regarding pictures, I think you underestimate the effort required.
You need to get a phone or camera capable of taking good-looking pictures, you need someone who is semi-competent at shooting, and you need nice-looking clothes and a good-enough-looking background. These are all things that need to be planned/accounted for. It also takes time.
I don’t especially enjoy doing these things, and it took quite a bit of willpower to grab a few nice clothes (which I already owned!) and my brother (whom I trust), and to go shoot a few pictures (in my garden).
There is also diminishing marginal utility to better pictures. If your pictures are ugly, blurry messes of you in weird poses, then you stand to gain a lot. If they are already decent, the gain is smaller.
As others have pointed out, there is pressure to be “genuine”. I think this is not entirely stupid. If someone likes you for your good-looking pictures but you never wear those kinds of clothes / go to those kinds of places, you may be setting yourself up for failure.
On the other hand, in my own experience, getting matches on apps like Tinder has proved to be the bottleneck — people like me well enough when they meet me, but it’s hard to make whatever they like about me shine through in the pictures. So sweetening the honeypot might not be that bad of a strategy.
Nevertheless, the sentiment that matches obtained through more “genuine” pictures might be a better fit is not necessarily wrong. I guess you have to use feedback: are you happy with the matches you get? Why? If you deem them “low quality”, maybe you should sell yourself more. If you have too many shallow matches, maybe you should filter more (but consider that this filtering might also eliminate the matches you do find desirable). Said otherwise: (a) an increase in quantity is not necessarily an increase in quality, and (b) a decrease in quantity is not necessarily an increase in quality. But either might be.
Here is what confuses me: from before, I thought morphisms were “just” arrows between objects, with a specific identity.
But in the case of functions, we have to smuggle in the set of ordered pairs that define them. Do you simply equate the identity of a function with this set definition?
That might be fine, but it means there needs to be some kind of … semantics? that gives us the “meaning” (~ implementation) of composition based on the “meaning” (the set of ordered pairs) of the composed morphisms.
Am I right here?
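To make my own question concrete, here is a tiny Python sketch (my own example, not from the book): if a function’s identity is its set of ordered pairs, then composition does admit a “semantics” computable from those sets alone, namely relational composition restricted to functions.

```python
# Functions represented purely as sets of ordered pairs.
f = {(1, "a"), (2, "b")}            # f : {1, 2} -> {"a", "b"}
g = {("a", True), ("b", False)}     # g : {"a", "b"} -> {True, False}

def compose(g_pairs, f_pairs):
    """g after f, computed only from the ordered-pair representations."""
    return {(x, z) for (x, y1) in f_pairs for (y2, z) in g_pairs if y1 == y2}

print(compose(g, f))  # {(1, True), (2, False)}
```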
I’ll add the biggest minus in my book:
Potential alternatives:
https://inkdrop.app (not sure if it has hyperlinks, couldn’t find the info)
This was a really heartwarming story that brought a smile to my face!
I’d like to give a special shout-out to
As we go I’m going to continue to try very hard not to pressure or manipulate her, while still giving advice and helping her explore her motivations here.
That’s very important indeed.
I watched one or two videos from this channel a while back and was impressed by the seemingly solid—but non-conventional—argument (it was on salt intake). I subscribed and was *dismayed* by further videos. I wouldn’t put much stock in either the research being quoted (if you didn’t review it yourself) or this channel’s treatment of that research.
That being said, I haven’t watched this particular video. What it says might all be true.
Not a very pointed answer, but a collection of leads:
Most books I can find on compilers/PLs tend to spend most of their time on the text representation (and algorithms for translating programs out of text, i.e. parsing) and the machine-code representation (and algorithms for translating programs into machine code).
There are good reasons for the time spent on them — they are more difficult than the parts that go in the middle, which is “merely” software engineering, although of an unusual kind.
There is also a dearth of resources on the topic. And because of that, it is actually fairly hard.
One reason is that the basics of it are quite simple: generate a tree as the output of parsing, then transform this tree. Generate derivative trees and graphs from these trees to perform particular analyses.
Phrased like that, it seems that knowledge on how to work with trees and graphs is going to serve you well, and that is indeed correct.
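To make that concrete, here is a minimal sketch (an invented example, not taken from any particular compiler) of the “build a tree, then transform it” step: a constant-folding pass over a toy arithmetic AST.

```python
# Toy AST with two node kinds, and a transformation pass over it.
from dataclasses import dataclass

@dataclass
class Num:
    value: int

@dataclass
class Add:
    left: object
    right: object

def fold(node):
    """Recursively replace Add(Num, Num) subtrees with their computed value."""
    if isinstance(node, Add):
        left, right = fold(node.left), fold(node.right)
        if isinstance(left, Num) and isinstance(right, Num):
            return Num(left.value + right.value)
        return Add(left, right)
    return node

# (1 + 2) + x : the constant part folds away, the symbolic part stays.
print(fold(Add(Add(Num(1), Num(2)), "x")))  # Add(left=Num(value=3), right='x')
```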
A good read (though with a very narrow focus) is the discussion of syntax tree architecture in Roslyn. The Roslyn whitepaper is also quite interesting, though more oriented towards exposing compiler features to users.
Personally, I did some research on trying to implement name resolution (relating an identifier use to its declaration site) and typing as a reactive framework: you would define the typing rules for your language by defining inference rules, e.g. once you know the type of node A and the type of node B, you can derive the type of node C. The reactive part was then simply to find the applicable inference rules and run them.
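To give a rough idea of the flavor (this is illustrative Python I just made up, not the actual research code): rules fire once the types they depend on are known, and a driver keeps re-applying them until a fixed point is reached.

```python
# Illustrative sketch only: "typing as a reactive framework".
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    kind: str                          # e.g. "int_lit", "add"
    children: list = field(default_factory=list)
    type: Optional[str] = None         # filled in by the rules below

def rule_int_lit(node):
    if node.kind == "int_lit":
        return "int"                   # an integer literal is always an int

def rule_add(node):
    if node.kind == "add":
        left, right = node.children
        if left.type == "int" and right.type == "int":
            return "int"               # both operands known: derive the result type

RULES = [rule_int_lit, rule_add]

def infer(root):
    nodes, stack = [], [root]
    while stack:                       # collect every node in the tree
        n = stack.pop()
        nodes.append(n)
        stack.extend(n.children)
    changed = True
    while changed:                     # naive fixed-point loop over the rules
        changed = False
        for node in (n for n in nodes if n.type is None):
            for rule in RULES:
                t = rule(node)
                if t is not None:
                    node.type, changed = t, True
                    break

tree = Node("add", [Node("int_lit"), Node("int_lit")])
infer(tree)
print(tree.type)  # int
```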
The project didn’t really pan out. In reality, the logic ends up looking quite obfuscated, and it’s just easier to write some boring, non-modular, old-fashioned code where the logic is readily apparent.
(Incidentally, fighting against this “it’s easier to just code it manually” effect — but in parsing — is what my PhD thesis is about.)
I might advise you to look at research done on the Spoofax language workbench. Spoofax includes declarative languages to specify name binding, typing, semantics, and more. These languages do not offer enormous flexibility, but they cover the most common language idioms. Since those idioms were codified in a formal system (the declarative languages), it might tell you something about the structure of the underlying problem (… which is not really about something quite as simple as data structure selection, but there you have it).
For purposes of this question, I’m not particularly interested in either of these representations—they’re not very natural data structures for representing programs, and we mostly use them because we have to.
I’d like to point out that I have seen very convincing arguments to the contrary. One argument in particular was that while the data structures used to represent programs will tend to change (for engineering reasons, supporting new features, …), the text representation stays constant. This was made in the context of a macro system, I believe (defending the use of quasiquotes).
Regarding machine code, it would be immensely useful even if we didn’t need to run code on CPUs. Look at virtual machines: they work with bytecode. A list of sequential instructions is just the logical extreme of the idea of translating high-level constructs into a more limited set of lower-level primitives that are easier to deal with.
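As a tiny illustration of that point (a made-up stack machine, nothing standard): the few lines below lower an expression tree into a flat instruction list and then execute it.

```python
# Expressions are ("num", n) or ("add", left, right); the compiler emits
# instructions in post-order, which is the order a stack machine consumes them.
def compile_expr(node, code):
    if node[0] == "num":
        code.append(("PUSH", node[1]))
    else:  # ("add", left, right)
        compile_expr(node[1], code)
        compile_expr(node[2], code)
        code.append(("ADD",))
    return code

def run(code):
    stack = []
    for instr in code:
        if instr[0] == "PUSH":
            stack.append(instr[1])
        else:  # ADD
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack.pop()

program = compile_expr(("add", ("num", 1), ("add", ("num", 2), ("num", 3))), [])
print(program)       # [('PUSH', 1), ('PUSH', 2), ('PUSH', 3), ('ADD',), ('ADD',)]
print(run(program))  # 6
```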
Is there some other question I should be asking, e.g. a different term to search for?
On the meta-level, where else should I look/ask this question?
For academic literature on the topic, I would look at the proceedings of the GPCE (Generative Programming: Concepts & Experiences) and SLE (Software Language Engineering) conferences.
I think there exist some program transformation frameworks out there, and you might also learn something from them, though in my experience they’re quite byzantine. One such framework is Rascal MPL (meta-programming language). Another is Stratego (part of Spoofax); I read some papers on that one a while ago that were quite palatable.
So anyway, here goes. Hope it helps. You can contact me if you need more info!
I’d be more interested in the in-between: what about cases where we don’t have general AI, but we have automation that drastically cuts jobs in a field, without causing counter-balancing wage increases or job creation in another field?
For instance, imagine the new technology is something really simple to manufacture (or worse, a new purpose for something we already manufacture en masse) — it’s so easy to produce these things that we don’t really need to hire more workers; just push a couple of levers and all the demand is met, just like that.
Is there something interesting to be said about what happens then? Can this be modeled?
(In practice, even this is too extreme a scenario of course, everything sits on a continuum.)
Something more realistic, I think, is that even when a new useful machine is introduced and the productivity of the producers of that machine shoots up, the salaries of the machine-makers won’t shoot up proportionally (maybe it’s easy to train people to make these machines?). And maybe the ratio skews: automation will remove X people, and the increased demand for automation will get X/5 people hired. So on the one hand you get major job loss, and on the other a minor salary hike and minor job creation.
How do we model what is lost here? Isn’t there some kind of conservation law, such that the surplus has to go somewhere (presumably into the pockets of the shareholders of both the companies buying the machines and the companies producing them)?
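To make my own question a bit more concrete, here is a toy back-of-the-envelope version with numbers I invented (100 workers displaced, 20 machine-makers hired at a 20% wage premium, output and prices held fixed):

```latex
% Toy accounting, all numbers invented for illustration.
\[
  \underbrace{100\,w}_{\text{wage bill before}}
  \quad\longrightarrow\quad
  \underbrace{20 \times 1.2\,w = 24\,w}_{\text{wage bill after}},
  \qquad
  \Delta = 100\,w - 24\,w = 76\,w \ \text{per period}.
\]
```

Under those (strong) assumptions, the 76w per period doesn’t vanish: it is split somehow between the profits of the firms buying the machines, the profits of the firms producing them, and possibly lower prices, which is roughly the “conservation” intuition I’m gesturing at.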
I think rationality ought to encompass more than explicit decision making (and I think there is plenty of writing on this website showing that it does, even within the community).
If you think of instrumental rationality as the science of how to win, then it necessarily entails considering things like how to set up your environment, unthinking habits, and how to “hack” your psyche/emotions.
Put otherwise, it seems you share your definition of Rationality with David Chapman (of https://meaningness.com/ ) — and I’m thinking of that + what he calls “meta-rationality”.
So when is rationality relevant? Always! It’s literally the science of how to make your life better / achieve your values.
Of course I’m setting that up by definition… And if you look at what’s actually available community-wise, we still have a long way to go. But still, there is quite a bit of content about fundamental ways to improve, not all of which has to do with explicit decision making or an explicit step-by-step plan where each step is an action to carry out explicitly.
Seems to me you’re on about treating (or more to the point, dreaming about treating) the symptom rather than whatever makes people vulnerable to the social network sink in the first place. The same fundamental weakness probably has a lot of other failure modes.
Category theory, which I’m acquainted with at a basic level, seems to formalize a lot of regularities I already knew about as a programmer and a student of <those mathematics topics that were taught to me as part of my CS master’s degree>.
I found it mathematically neat, but I have never derived any useful insights from it. Said otherwise, nothing would have changed if I had never been introduced to it. This seems quite wrong to me, so I was quite interested in reading the answers here. Unfortunately, there is not much in the way of insight.
What is this? The links seem to require some login and registration is limited to students of some specific universities.
Is it even possible for a curated selection to avoid being deemed better? Maybe only if it fails horribly at what it set out to do, but otherwise?
I strongly second Michaël’s recommendation — of all places, the front page of Less Wrong is where things should be clear.
For me, what separates mindfulness from rumination is that in mindfulness you observe things and accept them, whereas in rumination you’re trying to fight or hold onto something.
Constantly reminiscing about a slight is a good way to make it loom large. It’s an unwillingness either to resolve the matter or to let it be.
Similarly, fighting some negative emotions (pain, loss, anger) makes them worse when they inevitably break through.
Totally. But it’s cool to want to teach things, and kids actually like to learn when it’s fun. So offer to teach, don’t impose your teaching. Be ready to jettison your plans and go with whatever your daughter finds interesting. This is what seems to work best in practice (from remembered anecdotal evidence).