I gave somebody I know (50yo libertarian-leaning conservative) Doing Good Better by William MacAskill. I told them I think they might like the book because it has “interesting economic arguments”, in order to not seem like a crazy EA-evangelist. I thought their response to it was interesting so I am sharing it here.
They received the book mostly positively. Their main takeaway was the idea that thinking twice about whether a particular action is really sensible can have highly positive impacts.
Here were their criticisms / misconceptions (which I am describing in my own words):
(1) Counterfactual concerns. Society benefits from lots of diverse institutions existing. It would be worse off if everybody jumped ship to contribute to effective causes. This is particularly the case with people who would have gone on to do really valuable things. Example: what if the people who founded Netflix, Amazon, etc. had instead gone down provably effective but, in hindsight, less valuable paths?
(2) When deciding which action to take, the error bars in expected value calculations are quite wide. So how can we possibly choose? In cases where the expected value of the “effective” option is higher but within error bars, to what extent am I a bad person for not choosing the effective option?
(1+2) An example combining both concerns: should a schoolteacher quit their job and go do work for charity?
My response to them was the following:
On (1): I think it’s OK for people to choose paths according to comparative advantage. For the Netflix example, early on it was high-risk-high-reward, but the risk was not ludicrously high because the founders had technical expertise, a novel idea, really liked movies, whatever. Basically this is the Indian TV show magician example that 80,000 Hours talks about.
On (2): If the error bars around expected value overlap significantly, then I think they cease to be useful enough to be the main decision factor. So, switch to deciding by some other criterion. Maybe one option is more high-risk-high-reward than the other, so a risk-averse and a risk-seeking person will have different preferences. Maybe one option increases personal quality of life (this is a valid decision factor!!). (A toy numerical sketch of the overlap point follows after these responses.)
On (1+2): This combines the previous two. If one option (teaching vs. charity) has a much larger expected value than the other, the teacher should probably pick the higher-impact option. This doesn’t have to be charity—maybe the teacher is a remarkable teacher and would be a terrible charity worker.
As described in my response to (1), the expected value calculation takes into account high-risk-high-reward cases as long as the calculations are done reasonably. (If the teacher thinks their chances of doing extreme amounts of good with the charity are 1%, when in reality they are 0.0001%, this is a problem.)
If the expected values are too close to compare, the teacher is “allowed” to use other decision criteria, as described in my response to (2).
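To make the overlapping-error-bars point concrete, here is a minimal sketch (my illustration, with made-up numbers that come from neither the book nor the conversation): treat each option's impact as a noisy estimate and ask how often the nominally “effective” option actually comes out ahead.

```python
import random

# Toy numbers, purely illustrative: option A has a higher point
# estimate than option B, but the error bars overlap heavily.
def sample_impact(mean, sd):
    return random.gauss(mean, sd)

wins = sum(
    sample_impact(100, 60) > sample_impact(80, 60)
    for _ in range(100_000)
)
print(f"A beats B in {wins / 1000:.1f}% of draws")  # roughly 59%
```

Even though A's point estimate is 25% higher, B comes out ahead in roughly four draws out of ten, which is weak grounds for judging someone harshly for picking B.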
In the end, what we were able to agree on was:
(*) There exist real-world decisions where one option is (with high probability) much more effective than the other. It makes sense to choose the effective option in these cases. Example: PlayPump versus an effective charity.
(*) High-risk-high-reward pursuits can be better choices than “provably effective” pursuits (e.g. research as opposed to earning to give).
What I think we still disagree on is the extent to which one is morally obligated to choose the effective option when the decision is in more of a gray area.
I take the “glass is half full” interpretation. In a world where most people do not consider the quantitative impact of their actions, choosing the effective option outside the gray areas is already a huge improvement.
“Don’t make us look bad” is a powerful coordination problem which can have negative effects on a movement. Examples:
Veganism has a reputation for being holier-than-thou. It’s hard to be a vegan without getting lumped in with “those vegans”. So it’s hard to be open about being a vegan, which makes it tricky to make veganism more socially acceptable.
Ideas perceived as crazy are connected to the EA movement. For example, EAs seriously discuss the possibility that we are living in a simulation. So do flat earthers. Similarly, outsiders could dismiss EA as too crazy for many other superficial reasons. The NYT’s article on Scott Alexander (https://www.nytimes.com/2021/02/13/technology/slate-star-codex-rationalists.html) sort of acts as an example—juxtaposing “MIRI” and “NRx” implicitly undermines the credibility of AI Safety research. EAs trying to work in public policy, for example, might not want to publicly identify as “EA” because “the other EAs are making them look bad”.
A person who is part of a movement does something controversial. It makes the movement look bad. For example, longevity has been getting negative press due to the Aubrey de Grey scandal.
The coordination problems the US Democratic Party faces, described by David Shor in this Rationally Speaking podcast episode (http://rationallyspeakingpodcast.org/wp-content/uploads/2020/11/rs248transcript.pdf). Some relevant excerpts:
>> And that’s—coordination’s a very hard thing to do. People have very strong incentives to defect. If you’re an activist going out and saying a very controversial thing, putting it out there in the most controversial, least favorable light so that you get a lot of negative attention. That’s mostly good for you. That’s how you get attention. It helps your career. It’s how you get foundation money. [...]
>> And we really noticed that all of these campaigns, other than, I guess, Joe Biden, were embracing these really unpopular things. Not just stuff around immigration, but something like half the candidates who ran for president endorsed reparations, which would have been unthinkable, it would have been like a subject of a joke four years ago. And so we were trying to figure out, why did that happen? [...]
>> But we went and we tested these things. It turns out these unpopular issues were also bad in the primary. The median primary voter is like 58 years old. Probably the modal primary voter is a 58-year-old black woman. And they’re not super interested in a lot of these radical sweeping policies that are out there.
>> And so the question was, “Why was this happening?” I think the answer was that there was this pipeline of pushing out something that was controversial and getting a ton of attention on Twitter. The people who work at news stations—because old people watch a lot of TV—read Twitter, because the people who run MSNBC are all 28-year-olds. And then that leads to bookings. And so that was the strategy that was going on. And it just shows that there are these incredible incentives to defect.
One takeaway: a moderate Democrat like Joe Biden suffers because crazier-looking Democrats like AOC are “making him look bad”, even if his and AOC’s goals are largely aligned. I can only assume that the Republican Party faces similar issues (not discussed in this podcast episode, though).
Are there more examples of “don’t make us look bad” coordination problems like these? Any examples of overcoming this pressure and succeeding as a movement?
How much do extreme people harm movements? What affects this?
For example, in politics there are a few high-stakes, all-or-nothing elections, where having extreme people quiet down could benefit a particular party. On the other hand, no extreme voices could mean no progress.
In veganism/EA, maybe extreme voices have less of a negative effect because there aren’t as many high-stakes, all-or-nothing opportunities. Instead, a bunch of decentralized actors do stuff. So far EAs seem to be doing fine interfacing with governments (e.g. CSET), so maybe the “don’t make us look bad” factor is weaker here.
This seems interesting and important.
(this is just a rant, not insightful) Everybody knows how important it is to choose the right time to write something. The optimal time is when you’re really invested in the topic and learning rapidly, but know enough to start the writing process. Then, ideally, everything will crystallize during the writing process. If you wait much longer than this, the topic will no longer be exciting and you will not want to write about it.
Everybody gives this advice, both within and outside of academia. I’ve heard it from professors, LW-y blog posts (maybe even on LW?), and everywhere in between.
SO WHY DO I CONSTANTLY IGNORE THIS ADVICE?? :(
This isn’t a direct answer to your question, but what I’ve personally found is that if I want to get re-excited about a topic that has already passed that critical period, the best thing to do is find people either asking questions about it or Being Wrong On The Internet about it, so that then I want to explain or rant about it again. ;-)
This might be helpful advice. Some of the more obligatory writing I’ve been putting off is probably too niche for the “Being Wrong On The Internet” approach, but I could probably more proactively find people willing to let me explain things to them. Come to think of it, this has often been a good way to motivate me to learn / write things...
Yeah, it seems that the desire to write is often tied to a desire to explain things; it’s just that our past self is usually the first person we want to explain things to. ;-) We could think of it as being like a pressure differential of knowledge, where you need a lower-pressure area for your knowledge to overflow into. Having a mental model of a person who needs to know, but doesn’t, then feels like an opportunity to relieve the pressure differential. ;-)
In principle, I suppose imagining that person might also work if you can model such a person well enough in your mind.
This is an interesting way to think about it. For me, I’m not sure whether it’s a pressure differential so much as a pressure threshold. By the latter I mean: if I exceed a certain level of excitement about a topic, then I feel a compulsion to communicate (and it feels effortless). By contrast, if I have not hit that level, it becomes much harder to write or think about that topic. I wonder whether developing more motivation based on the “sink” would in turn make me a more effective communicator...
Protagonist: “Everybody knows!”
Narrator: “Everybody didn’t know.”
(edit: I think this came out meaner than I meant it to; mostly I thought it was a fun in-joke about the “everybody knows” post)
I have never heard this advice and I write for a living ¯\_(ツ)_/¯
Uh oh...”everybody knows” was poor wording here then. I guess it would have been more precise to say “I’ve heard this from multiple different non-overlapping groups so it seems like widely applicable advice”.
Or maybe you write for a living because you are naturally good at selecting the right time to write / have a wider window for when you are capable of writing well?
My favourite version of this advice is Sarah Perry’s writing graph (from the Ribbonfarm longform course, I think) - maybe that’s one of the places you saw it?
I also ignore it a lot :(
Yes, that’s exactly what I was thinking of!
I used to struggle to pay attention to audiobooks and podcasts. No matter how fascinating I found the topic, whenever I tried to tune in I would quickly zone out and lose the thread. However I think I am figuring out how to get myself to focus on these audio-only information sources more consistently.
I’ve tried listening to these audio information sources in three different environments:
(1) Doing nothing else
(2) Going on a walk (route familiar or randomly chosen as I go)
(3) Doing menial tasks in minecraft (fishing, villager trading, farming, small amounts of inventory management)
My intuition would have been that my attention would be best with (1), then (2), then (3). In fact, the opposite seems to be true: I focus best while playing minecraft, then while walking, then while doing nothing else.
I think the explanation is fairly self-evident if you turn it around the other way. The reason I am not able to focus on podcasts while doing nothing else is usually that my mind goes off on tangents, tunes out the audio, and loses the thread. To a lesser extent, this happens on walks. Menial tasks in minecraft seem to take up just enough mental energy that I don’t additionally think up tangents, but not so much that I can no longer follow the discussion. In summary: “being focused” on a fast-paced, one-way stream of information requires not going off on tangents, and my brain only generates tangents when it is sufficiently idle.
Something I am aware of but haven’t tested: minecraft might be too taxing, so that I absorb less than I would on a walk. However, I would argue that it is better to consistently absorb 80% of a podcast than to absorb 100% of the content 80% of the time and be completely zoned out for the other 20% (as is perhaps the case when I am walking): both average out to 80% of the content, but the contiguous gaps break the thread of the discussion. Pausing and rewinding is inefficient and annoying. This is also an argument for listening to podcasts at a faster speed (perhaps at the cost of absorption rate).
Moreover, I am listening to podcasts with the goal of gaining a high-level understanding of the topics covered. So “everything, but slightly fuzzy on the details” is better than “the details of 80% of everything” for my purposes. Perhaps if I were listening with a different goal (for example, a podcast discussing a paper I wanted to understand deeply), more of my focus would be required and it would be better for me to walk (or even sit still) than play minecraft.
Initially, I thought I was bad at focusing on podcasts because I lacked the brainpower to follow fast-paced audio. Having experienced decreased distractibility while listening to a podcast and playing minecraft, I have now updated my model of how I focus. I think focus might follow a sort of Laffer curve (an upside-down U), where the x axis is the amount of external stimulus and the y axis is the amount of content absorbed.
More precisely (a picture really would do better here, but I don’t know how to put one in a shortform): call the maximum amount of content absorbed y0 and the corresponding amount of external stimulus x0. I used to think podcasts provided more than x0 stimulus for me, meaning that I could never absorb a near-optimal amount of content. However, the minecraft+podcast experiment showed me that podcasts provide less than x0 stimulus, and minecraft boosted the total stimulus just enough to get me to the optimal (x0, y0) focus point.
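Here is a tiny sketch of that model (the quadratic shape and every number below are my own illustrative assumptions, not measurements):

```python
# Toy inverted-U ("Laffer curve") model of focus: absorption peaks
# at some intermediate stimulus level x0 and falls off on either
# side. Shape and numbers are assumptions for illustration only.
def absorption(stimulus, x0=5.0, y0=1.0, width=4.0):
    return max(0.0, y0 * (1 - ((stimulus - x0) / width) ** 2))

for activity, stimulus in [("podcast only", 1.5),
                           ("podcast + walk", 3.0),
                           ("podcast + minecraft", 5.0)]:
    print(f"{activity}: absorption {absorption(stimulus):.2f}")
```

Under these made-up numbers, the podcast alone leaves me well below x0, and the menial task tops up the total stimulus to the peak.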
Going forward, I definitely want to experiment with different combinations of stimuli (media, physical activity, environment) and see how I can optimize my focus. Some thoughts (which other people have probably already had / acted on):
What can I focus on best while exercising? Previously I have been putting dumb tv shows on in the background—is this all I can focus on or can I use this time more productively? (If I can be more productive then I will probably exercise more—win win!)
Within the realm of podcasts, can I come up with different “categories” and associate optimal activities with each? Three categories I have experience with are “technical” (AXRP, more hardcore episodes of the 80,000 Hours podcast), “soft skills” (less hardcore episodes of the 80,000 Hours podcast), and “fun” (e.g. podcasts about a TV show). Then I could build habits based on this (e.g. pairing “soft skills” with minecraft or “technical” with sitting outside) without having to put as much effort into decision-making.
Podcasts + fixed stimuli make for good benchmarks for measuring whether my focus is higher or lower than usual. For example, if I am unable to focus on a combo I usually can focus on, that could be a sign that something is wrong with my physical health or that I am mentally exhausted.
Some people report being able to focus on difficult tasks (e.g. theoretical research) best when in a noisy place but otherwise undistracted (e.g. coffee shop). This seems like an instance of what I am talking about here.
Outcome: I will try to think about this more deliberately when planning which activities I do when, and in particular how I pair activities which can be done simultaneously. Who knows—maybe I will finally be able to get through some of those 3+ hour long episodes of the 80,000 Hours podcast! :)
Thing I would do if I had enough money for $200 to be inconsequential: buy two identical pairs of Bluetooth headphones—one permanently paired to my laptop and one permanently paired to my phone. This would save me lots of annoyance whenever I switch between the two. Bluetooth just seems to suck.
Airpods are amazing at switching between devices (in particular macs and iPhones). Only set of headphones that seems to have made this work reliably.
When at all possible, wired >> wireless. I use BT earphones for my phone, because listening while walking/riding is desirable and wires really do suck while moving. Even with a fairly portable laptop, I don’t move enough while using it to be willing to put up with wireless.
Fun brainteaser for students learning induction:
Here is a “proof” that π is rational. It uses the fact
$$\pi = 4 - \frac{4}{3} + \frac{4}{5} - \frac{4}{7} + \frac{4}{9} - \dotsb \qquad (*)$$
as well as induction. It suffices to show that the right-hand side of $(*)$ is rational. We do this by induction. For the base case, we have that $4$ is rational. For the inductive step, assume $4 - \frac{4}{3} + \dots \pm \frac{4}{2k+1}$ is rational. Then adding or subtracting the next term $\frac{4}{2k+3}$ (which is rational) will result in a rational number.
The flaw is of course that we’ve only shown that the partial sums are rational. Induction doesn’t say anything about their limit, which is not necessarily rational.
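One can watch this failure numerically (a quick sketch of my own, not part of the original puzzle): Python's fractions module keeps every partial sum exactly rational, while the values drift toward the irrational limit.

```python
from fractions import Fraction

# Partial sums of the Leibniz series 4 - 4/3 + 4/5 - 4/7 + ...
# Every partial sum is an exact rational number, yet the sequence
# converges to pi: induction covers each partial sum but says
# nothing about their limit.
s = Fraction(0)
for k in range(8):
    s += Fraction((-1) ** k * 4, 2 * k + 1)
    print(s, "≈", float(s))
```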
Here is a much shorter proof: 22/7 is rational :D
Problem: I compulsively pick at scabs. Often I do it even though I don’t want to, because I know I’ll be worse off. (The scab will bleed, it’ll just reform anyway, and I’ll have to deal with the unhealed skin for longer.) Telling myself “don’t pick” doesn’t work; I get very distracted by the presence of the scab and HAVE TO pick.
Solution: put a band-aid over the scab. Blocking the scab makes picking more difficult. More crucially, the adhesive of the band-aid gives me a mildly ticklish sensation which masks the sensation that a pickable scab is present.
Caveat: this has been most helpful for face scabs, but face band-aids are awkward. This has worked fine for me because I tend to pick when I’m alone, so I can just apply band-aids when alone and take them off when people will see me. But if you spend most of your time around people, this may not work for you.
Search “pimple patches” at your retailer of choice. They are skin-safe stickers, often clear or beige, sometimes with some generic “good for the skin” additives. They serve the bandage’s function of sticking over a small area of skin that you want to block your hands from, while using a milder adhesive and looking almost invisible on the face.
AI capabilities are advancing rapidly. It’s deeply concerning that individual actors can plan and execute experiments like “give an LLM access to a terminal and/or the internet”. However, I need to remember it’s not worth spending my time worrying about this stuff. When I worry about this stuff, I’m not doing anything useful for AI Safety; I am just worrying. It is more constructive to set these thoughts aside and focus on completing projects I believe are impactful.
A question about humans who practice calorie restriction (CR) with the goal of anti-aging: do they experience “brain fog”, i.e. decreased cognitive performance? Intuitively that seems like a major drawback of CR, but perhaps brain fog can be eliminated with a healthier diet or by getting used to consistent CR over time? Curious to hear evidence for/against this.
The case against an inbox with lots of items. This is certainly not a hot or unusual take, but I am writing it anyway. I describe how I transitioned my TODO-list setup from “big pile of email notifications” to something slightly more efficient (but still low in tech overhead).
I used to use my email inbox as a generic TODO list, with items ranging from “reply to this person” to “remember to go to this calendar event” but also “fill out the application for this program”, “read this blog post”, and so on.
I think the email inbox is still the best place to put reminders of the form “write an email to this person”. However, for all the other stuff I find using the inbox quite inefficient.
Every time I scan through an inbox with lots of items, I have to re-“compute” what each TODO item was supposed to be. For example: “hmm, this is just a link (I click the link)... oh, it’s that blog post Person X told me about that I’ve been meaning to read”. Doing this for many items is computationally expensive, no matter how clearly phrased the reminders are.
Instead, here is where I put reminders which used to be items in my inbox:
For reminders of the form “read this thing”, I have bookmark folders based on different topics. (I used to have a big bookmark folder called “To Read”, but it was too computationally expensive to figure out what was what.) The topics are relatively broad; things like “Career advice”, “Fun”, “Math”, “ML”. The idea is that usually I’m in the mood for a specific topic, so I can go to the appropriate folder and knock off some todo items.
For reminders of the form “go to this calendar event” I have been consistently using Google Calendar. The key is to really deliberately get in the habit of actually checking my calendar (I used to not do this so email was a failsafe way to remind myself to go to important things).
For reminders of the form “fill out this form / application” I make that a “this week” or “this month” TODO item in my longer term planning system (which is just a google doc since that’s been working out fine).
This system has been great: I spend way less time scanning through emails that don’t need my attention (it had really been adding up), and because it’s low-tech it barely requires additional effort.
Summary: this post is mostly for self-reflection. I describe my first impressions (likes/dislikes) after using Roam, a note-taking app with good “linking” features. I also use this post to think through some Roam-related decisions (should I look into competitor apps? should I upgrade to Roam’s lump-sum “Believer” plan?).
I’m trying out Roam and liking it so far. I started using it when I started working on a research project in order to reap the full benefits.
Before I list pros and cons, I should clarify that I am a novice at Roam. I know I am not using Roam close to optimally, and probably some of the disadvantages I’ve listed are due to my lack of knowledge rather than problems with Roam. (Feel free to tell me where I am Being Wrong on the Internet because then I will learn more about Roam!)
Advantages of Roam:
The organization system makes it easy to want to write. There are “daily” pages which I can tab to at any time by pressing Alt+D. Or, I can create “topic” pages for a book I’m working through, a concept, whatever. So when I open Roam to work, I know I’m either learning something and taking notes in Roam, or writing my miscellaneous ideas on my daily page.
It feels fast, better than Google Docs or Overleaf. The difference from an offline editor is barely noticeable (meaning I occasionally get a big lag spike because of an internet problem on my end, rather than a constant amount of noticeable latency).
Search results are fast and relevant. If you’ve written those words before, they’re easy to find. Admittedly my graph is small since I’m new so I’ll have to wait and see whether Roam scales.
“Page links” are useful. I thought it would be annoying to get in the habit of linking [[Some Concept]] every time I say it but it actually isn’t. Worst case I can write first and link later. I think it’s genuinely useful to be able to click [[Some Concept]] and see all places I’ve mentioned it—this is helpful if I want to refresh my understanding of a concept for example.
“Block links” are even more useful. Roam notes are a bunch of nested bullet points, and a block is a single bullet point. Each block has a reference code. If you paste that reference code elsewhere, the same block appears (but underlined and “linked” to its source). I find this super useful because often, while writing, my brain thinks “wait, this is just X thing I’ve seen / written before”. With block references I can just paste that block in (or a link to it). It’s so much faster than my previous workflow, which was scrolling up through my ideas document, or even browsing different files, looking for where I’d written that thing before. (A toy sketch of this transclusion idea follows after this list.)
TODO list. There is an easy shortcut (Ctrl+Enter) to add a block to my TODO list. So it’s easy to go and look at all my TODO items on different pages. This is great if I am writing something and want to make a note to come back to a point without breaking out of flow.
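As promised above, here is a toy sketch of the block-transclusion idea (my own illustration only, not Roam's actual data model): blocks live in one store keyed by reference code, and rendering resolves references in place.

```python
import re

# Toy illustration of block transclusion, NOT Roam's implementation:
# each block is stored once under its reference code, and pasting a
# ((reference)) renders the same block wherever it appears.
blocks = {"abc123": "comparative advantage beats raw expected value"}

def render(text):
    # Replace each ((block-id)) with the referenced block's text.
    return re.sub(r"\(\((\w+)\)\)", lambda m: blocks[m.group(1)], text)

print(render("As I noted earlier: ((abc123))"))
```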
Disadvantages of Roam:
LaTeX needs two dollar signs, and Roam’s LaTeX input is clunky. If you write two dollar signs, it autocompletes the closing two, but you can’t advance past those by pressing the dollar sign—you have to use the right arrow key instead. I wish there were a setting for Overleaf-style input.
Some features which seem like they should be intuitive are not intuitive. Most notably I still am not comfortable with moving blocks and the side panel.
Limited offline functionality (I think?). I thought Roam was only supposed to work while connected to the internet. However, it actually seems to work fine if I am temporarily disconnected, which is great! TODO (I’d be pressing Ctrl+Enter if I were in Roam☺): experiment with this and see how offline I can go.
Works fine but not great on mobile. It’s decent but just slightly clunky in browser. I think I would be more likely to reference my graph if there was a smooth mobile app than if I had to open my phone’s browser and get to the appropriate page every time.
Some shortcuts are counterintuitive. It’s hard for me to remember all the Shifts, Tabs, and Ctrls. And then Ctrl+Shift+o is how you open links, for some reason? This is kind of an unfair criticism, since every feature-heavy platform takes time to adjust to, but it’s a current bottleneck of mine for sure.
Some stuff seems to require using the mouse. For an app with so many shortcuts, you’d think there’d be shortcuts for things like “paste current block’s reference into clipboard”. It’s hard to reliably right-click the tiny bullet point (necessary for grabbing the block reference), so this is a bit of a time waster.
Something I am uncertain about is whether I should switch to a cheaper competitor app. I have money but not that much money. $15/month is worth it to me since Roam seems to be increasing my productivity, but also maybe a free competitor app with Roam’s linking capabilities would also increase my productivity by the same amount.
Something else I am uncertain about is how to decide whether to stay on the monthly plan ($15/month) or switch to the Believer plan ($500 for the next 5 years). To make this decision I feel like I need to quantify both how much Roam will actually improve my productivity and whether cheaper competitor apps would improve it by the same amount. It’s worth mentioning that the Believer plan comes with extra features (including “offline mode”)—but it’s hard to know how much I would like those without actually purchasing the plan.
I feel like in order to compare which of two apps is better I need to spend a long time (say 6 months) getting fairly good at each app. Otherwise, I will just be comparing how shallow the learning curve is for beginners.
So, my current plan is to go all in on Roam for the next 6 months and give it a fair shot. Then after that I will explore/exploit competitor apps, spending an amount of time on them proportional to how likely I think they are to beat Roam.
>>Each block has a reference code. If you paste that reference code elsewhere, the same block appears
>>It’s hard to reliably right-click the tiny bullet point (necessary for grabbing the block reference)
I never need to do this. If you type “(())” [no quotes] at the destination point and then start typing in text from the block you’re referencing, blocks containing that text will appear in a window. Keep typing until you can see the desired block and click on it to insert it.
If you type the trigger “/block”, the menu that appears contains four fun things you can do with blocks.
Ah, that is a big timesaver. Thanks!
I’ve thought through an explanation of why many people who have heard of effective altruism are not effective altruists. I think it’s important to understand these viewpoints if EAs want to convert more people to their side.
As an added bonus, I think this explanation generalizes to many cases where a person’s actions contradict their knowledge—thinking through this helped me better understand why I think I take actions which contradict my knowledge.
Summary: people’s gut feel (which actually governs most decision-making) takes time, thought and effort to catch up to their systematic reasoning (which is capable of absorbing new information much quicker). This explains phenomena such as “why not everyone who has heard of EA is an EA” and “why not everyone who has heard of factory farming is a vegan”.
Outcome / why this was useful for me to think about: This framework of “systematic reasoning” vs “gut feel” is useful for me when thinking about what I know, how well I know it, and whether I act on this knowledge. This helps distinguish between two possible types of “this person is acting contrary to knowledge they have”: 1) the person’s actions disagree with their gut feel and systematic reasoning (= lack of control) or 2) the person’s actions agree with their systematic reasoning but not gut feel (= still processing the knowledge).
Full explanation: People’s views on career choices, moral principles, and most generally the moral value of particular actions are quite rarely influenced by systematic reasoning. Instead, people automatically develop priors on these things by interacting with society and make most decisions according to gut feel.
Making gut-feel decisions instead of using systematic reasoning is generally a good move. At any moment, we are deciding not to take an insanely large number of technically feasible actions; evaluating all of them would be computationally intractable. (For arguments like these, see “Algorithms to Live By”.)
When people are introduced to EA, they will usually not object to premises such as “we should make choices to do more good at the margin” and “some charities are 10-100x more effective than others”. However just because they agree with this doesn’t mean they’re going to immediately become an EA. In other words, anybody can quickly understand EA concepts through their systematic reasoning, but that doesn’t mean it has also reached their gut feel reasoning (= becoming an EA).
A person’s gut feel on EA topics is all of their priors on charitable giving, global problems, career advice, and doing good in general. Even the most well-worded argument isn’t immediately going to sway a person’s priors so much that they immediately become an EA. But over time, a person’s priors can be updated via repeated exposure and internal reflection. So maybe you explain EA to someone and they’re initially skeptical, but they continue carefully considering EA ideas and become more and more of an EA.
This framework is actually quite general. Here’s another example: consider a person who is aware that factory farming is cruel but regularly eats meat. This is because their gut feel on whether meat is OK hasn’t caught up to systematic reasoning about factory farming being unethical.
Just like the EA example explained above, there is often no perfect explanation which can instantly turn somebody into a gut feel vegan. Rather, they have to put in the work to reflect on pro-vegan evidence presented to them.
(n.b: the terms “systematic reasoning” and “gut feel” are not as thoughtfully chosen as they could be—I’d appreciate references to better or more standard terms!)
The standard terms: gut feel = “System 1”, systematic reasoning = “System 2” :)
Ah, I googled those and the results mostly mentioned “Thinking Fast and Slow”. The book has been on my list for a while but it sounds like I should give it higher priority. Thanks for the pointer!