Year 3 Computer Science student
find me anywhere in linktr.ee/papetoast
Thought about community summaries a very little bit too. With the current LW UI, I envision that the most likely way to achieve this is to:
1. Write a distillation comment instead of a post
2. Quote the first sentence of the sequence post so that it shows up on the side at the top
3. Wait for the LW team to make this setting persistent so people can choose Show All
There is also the issue of things only being partially orderable.
When I was recently celebrating something, I was asked to share my favorite memory. I realized I didn't have one. Then (since I have been studying Naive Set Theory a LOT), I got tetris-effected, and as soon as I heard the words "I don't have a favorite" come out of my mouth, I realized that favorite memories (and in fact favorites of lots of other things) form partially ordered sets. Some elements are strictly better than others, but not all elements are comparable (in other words, the set of all memories ordered by preference has maximal elements but no greatest element). This gives me a nice framing to think about favorites in the future, and it shows that I'm generalizing what I'm learning by studying math, which is also nice!
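To spell out the order-theoretic distinction (standard definitions; the two-memory example is my own toy illustration): in a poset $(S, \preceq)$, an element $m$ is maximal if there is no $x \in S$ with $m \prec x$, while $m$ is the greatest element if $x \preceq m$ for all $x \in S$. A poset can have many maximal elements and no greatest element: with two incomparable memories $a$ and $b$, both are maximal, but neither is "the favorite".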
It is hard to see, changed to n.
In my life I have never seen a good one-paragraph explanation of backpropagation so I wrote one.
The most natural algorithms for calculating derivatives work by going through the expression syntax tree[1]. The tree has two ends, and starting the algorithm from each end gives one of two good derivative algorithms: forward propagation (starting from the input variables) and backward propagation (starting from the output variable). In both algorithms, calculating the derivative of one output variable with respect to one input variable creates a lot of intermediate artifacts. In the case of forward propagation, these artifacts mean you get the derivatives of all the intermediate (and output) variables with respect to that one input variable for ~free, and in backward propagation you get the derivatives of that one output variable with respect to all the input (and intermediate) variables for ~free. Backpropagation is used in machine learning because there is usually only one output variable (the loss, a number representing the difference between the model's prediction and reality) but a lot of input variables (the parameters; on the scale of millions to billions). (A minimal code sketch is below, after the footnote.)
This blogpost has the clearest explanation. Credits for the image too.
or maybe a directed acyclic graph for multivariable vector-valued functions like f(x,y)=(2x+y, y-x)
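To make the one-paragraph explanation concrete, here is a minimal sketch of backward propagation over a tiny expression graph. Everything here (the `Node` class, the `add`/`mul` helpers, the toy expression `loss = x*y + x`) is made up for illustration; it is not the linked blogpost's code.

```python
# A minimal sketch of backward propagation over a tiny expression graph.

class Node:
    def __init__(self, value, parents=(), local_grads=()):
        self.value = value              # numeric value of this node
        self.parents = parents          # nodes this one was computed from
        self.local_grads = local_grads  # d(self)/d(parent) for each parent
        self.grad = 0.0                 # d(output)/d(self), filled by backward()

def add(a, b):
    return Node(a.value + b.value, (a, b), (1.0, 1.0))

def mul(a, b):
    return Node(a.value * b.value, (a, b), (b.value, a.value))

def backward(output):
    """One backward sweep gives d(output)/d(node) for every node in the graph."""
    # Topological order from inputs to output (DFS post-order), then reversed.
    order, seen = [], set()
    def visit(n):
        if id(n) not in seen:
            seen.add(id(n))
            for p in n.parents:
                visit(p)
            order.append(n)
    visit(output)
    output.grad = 1.0
    for node in reversed(order):
        for parent, local in zip(node.parents, node.local_grads):
            parent.grad += node.grad * local  # chain rule

# Toy "loss" with one output and two inputs: loss = x*y + x at x=2, y=3
x, y = Node(2.0), Node(3.0)
loss = add(mul(x, y), x)
backward(loss)
print(x.grad, y.grad)  # 4.0 2.0, i.e. d(loss)/dx = y + 1, d(loss)/dy = x
```

Note how one call to `backward()` fills in `grad` for both inputs at once; that is the "for ~free" part. A forward-mode version would instead fix one input and push d(node)/d(that input) forward through the same graph.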
Donated $25 for all the things I have learned here.
Strongly agreed. Content creators seem to get around this by creating multiple accounts for different purposes, but this is difficult to maintain for most people.
I rarely see them show awareness of the possibility that selection bias has created the effect they’re describing.
In my experience with people I encounter, this is not true ;)
Joe Rogero: Buying something more valuable with something less valuable should never feel like a terrible deal. If it does, something is wrong.
clone of saturn: It’s completely normal to feel terrible about being forced to choose only one of two things you value very highly.
Yes!
By default I expect the author to have a pretty strong stance on the main idea of a post, and the content is usually already refined and complete, so the barrier to entry for leaving a valuable comment is higher.
Bob can choose whether to hide this waste (at the cost of the utility loss from keeping the $300 and having a worse listening experience, but with the "benefit" of misleading Tim about his misplaced altruism)
True in my example. I acknowledge that my example is wrong and that I should have been more explicit about there being an alternative. Quoting myself from the comment to Vladimir_Nesov:
Anyways, the unwritten thing is that Bob cares about having a quality headphone and a good pair of shoes equally. So given that he already has an alright headphone, he would get more utility by buying a good pair of shoes instead. It is essentially a choice between (a) getting a $300 headphone and (b) getting a $100 headphone and a $300 pair of shoes.
If the bad translation is good enough that the incremental value of a good translation doesn’t justify doing it, then that is your answer.
I do accept this as the rational answer, but that doesn't mean it is not irritating. Suppose A (the skillful translator) cares about having a good translation of X slightly more than Y, while B (the poor translator) cares about Y much more than X. If B can act first, he can work on X and "force" A (via expected utility) to work on Y. It was a failure of mine not to talk about the differences in preference in my examples and to expect people to extrapolate and infer them.
Again, it seems like we are in agreement lol. I agree with what you said, and that is what I meant, but I tried to compress it into one sentence and failed to communicate it.
It sure can! I think we are in agreement on the sunk cost fallacy. I just don't think it applies to example 1, because there exist alternatives that keep the sunk resources from being wasted. Btw, this is why my example is on the order of $100; at this price point you probably have a couple of alternative things you could spend the money on.
(I need to defend the sad and the annoying in two separate parts)
Yes, but sometimes that is already annoying on its own (Bob is not perfectly rational and sometimes he just really wants the quality headphone, but now the math tells Bob that Tim gifting him that headphone means he would have to wait e.g. ~2 years before it is worth buying a new one). Of course Bob can improve his life in other ways with the saved money, but still, it would have been nice to just ask Tim to buy something else, had you known.
Sometimes increasing sum(projects) does not translate directly into increasing utility. This is more obvious in real-life scenarios where actors are less rational and time is a real constraint. The sad thing happens when someone with good intentions but poor skill (and you don't know they are that bad) signs up for a time-critical project and fails or does a sub-par job.
This is a tangent, but the sunk cost fallacy is not really a fallacy most of the time, because spending more resources beforehand really does increase the chance of "success" most of the time. For more: https://gwern.net/sunk-cost
I am trying to pinpoint the concept of "A doing a mediocre job of X will force B to rationally do Y instead of X, making the progress of X worse than if A had not done anything". The examples are just examples that hopefully help you locate the thing I am handwaving at. I did not try to make them logically perfect because that would take too much time.
Anyways, the unwritten thing is that Bob cares about having a quality headphone and a good pair of shoes equally. So given that he already has an alright headphone, he would get more utility by buying a good pair of shoes instead. It is essentially a choice between (a) getting a $300 headphone and (b) getting a $100 headphone and a $300 pair of shoes. Of course there are some arguments about preference, utility != dollar amount, or something along those lines. But (b) is the better option in the example I constructed to show the point.
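A toy calculation of this (a) vs (b) choice; the utility numbers below (100 and 40 "utils") are made up purely for illustration, not anything from the discussion:

```python
# Made-up utility numbers, purely for illustration.
quality_headphone = 100  # the $300 headphone Bob wanted
alright_headphone = 40   # the $100 headphone Tim gifted
good_shoes = 100         # Bob values these equally to the quality headphone

# (a) Bob spends his $300 on the quality headphone; the gift goes unused.
option_a = quality_headphone

# (b) Bob keeps the gifted headphone and spends the $300 on shoes.
option_b = alright_headphone + good_shoes

print(option_a, option_b)  # 100 vs 140 -> (b) wins, so Bob rationally sticks with the gift
```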
Let me know if I still need to explain example 2
It is sad and annoying that if you do a mediocre job (according to the receiver), doing things even for free (volunteer work/gifting) can sabotage the receiver along the dimension you’re supposedly helping.
This is super vague the way I wrote it, so here are some examples.
Example 1. Bob wants to upgrade and buy a new quality headphone. He has a $300 budget. His friend Tim, not knowing his budget, bought a $100 headphone for Bob. (Suppose second-hand headphones are worthless.) Now Bob cannot just spend $300 to get a quality headphone: he would also be wasting Tim's $100, which counterfactually could have been used to buy something else for Bob. So Bob is stuck with using the $100 headphone and spending the $300 somewhere else instead.
Example 2. Andy, Bob, and Chris are the only three people who translate Chinese books to English for free as a hobby. Because there are so many books out there, it is often not worth it to re-translate a book even if the previous translation is bad, because spending that time translating a different book is just more helpful to others. Andy and Bob are pretty good, but Chris absolutely sucks. His translations are not unreadable, but they are just barely better than machine translation. Now Chris has taken it upon himself to translate book X, which happens to be a pretty good book. The world is now stuck with Chris' poor translation of book X, with Andy and Bob never touching it again because they have other books to work on.
I want to use this chance to say that I really want to be able to bookmark a sequence
Agreed on the examples of natural abstractions. I held a couple abstraction examples in my mind (e.g. atom, food, agent) while reading the post and found that it never really managed to attack these truly very general (dare I say natural) abstractions.
I overlaid my phone's display (using scrcpy) on top of the website rendered on Windows (Firefox). Image 1 shows that the two are indeed scaled to align. Image 2 (Windows left, Android right) shows how the font is bolder on Windows and somewhat blurred.
The monitor is 2560x1440 (website at 140%) and the phone is 1440x3200 (100%) mapped onto 585x1300.
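A quick sanity check on those numbers (nothing new, just the arithmetic) that the scrcpy mapping scales the phone display uniformly:

```python
# Check that the phone-to-window mapping uses the same scale factor in both axes.
phone_w, phone_h = 1440, 3200    # phone's native resolution
window_w, window_h = 585, 1300   # scrcpy window size on the monitor

print(phone_w / window_w)   # ~2.46
print(phone_h / window_h)   # ~2.46 -> same factor, so no distortion in the overlay
```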
You can still nominate posts until Dec 14th?