I wish there were more discussion posts on LessWrong.
Right now it feels like publishing a discussion post weakly, if not moderately, violates some sort of cultural norm (the same is true, to a lesser extent, of Shortform posts). Something low effort of the form “X is a topic I’d like to discuss. A, B and C are a few initial thoughts I have about it. What do you guys think?”
It seems to me like something we should encourage though. Here’s how I’m thinking about it. Such “discussion posts” currently happen informally in social circles. Maybe you’ll text a friend. Maybe you’ll bring it up at a meetup. Maybe you’ll post about it in a private Slack group.
But if it’s appropriate in those contexts, why shouldn’t it be appropriate on LessWrong? Why not benefit from having it be visible to more people? The more eyes you get on it, the better the chance someone has something helpful, insightful, or just generally useful to contribute.
The big downside I see is that it would screw up the post feed. Like when you go to lesswrong.com and see the list of posts, you don’t want that list to have a bunch of low quality discussion posts you’re not interested in. You don’t want to spend time and energy sifting through the noise to find the signal.
But this is easily solved with filters. Authors could mark/categorize/tag their posts as low-effort discussion posts, and people who don’t want to see such posts in their feed could apply a filter to hide them.
Context: I was listening to the Bayesian Conspiracy podcast’s episode on LessOnline. Hearing them talk about the sorts of discussions they envision happening there made me think about why that sort of thing doesn’t happen more on LessWrong. Like, whatever you’d say to the group of people you’re hanging out with at LessOnline, why not publish a quick discussion post about it on LessWrong?
I just learned some important things about indoor air quality after watching Why Air Quality Matters, a presentation by David Heinemeier Hansson, the creator of Ruby on Rails. It seems like something that is both important and under the radar, so I’ll brain dump + summarize my takeaways here, but I encourage you to watch the whole thing.
He said he spent three weeks researching and experimenting with it full time. I place a pretty good amount of trust in his credibility here, based on a) my prior experiences with his work and b) him seeming like he did pretty thorough research.
It’s easy for CO2 levels to build up. We breathe it out and if you’re not getting circulation from fresh air, it’ll accumulate.
This has pretty big impacts on your cognitive function. It seems similar to not getting enough sleep. Not getting enough sleep also has a pretty big impact on your cognitive function. And perhaps more importantly, it’s something that we are prone to underestimating. It feels like we’re only a little bit off, when in reality we’re a lot off.
There are things called volatile organic compounds, aka VOCs. Those are really bad for your health. They come from a variety of sources. Cleaning products and sprays are one. Another is that new car smell, which you don’t only get from new cars, you also get it from stuff like new couches.
In general, when there’s new construction, VOCs will be emitted. That’s what led to DHH learning about this. He bought a new house. His wife got sick. It turned out the glue from the wood panels was emitting VOCs and making her sick.
People in the world of commercial construction know all about this. When a hotel is constructed, they’ll wait eg. a whole month, passing up revenue, to let the VOCs fizzle out. But in the world of residential construction, for whatever reason it isn’t something people know about.
If you want to measure stuff like CO2 and VOCs, professional products are expensive and consumer products are usually inaccurate, but Awair is a consumer product for $150 that is good.
If you want to improve indoor air quality, air purifiers are where it’s at. They do a good job of it. You could use filters on eg. your air conditioner and stuff, but in practice that doesn’t really work. High quality filters make your AC much less effective. Low quality filters are, well, low quality.
Alen is the brand of air purifier that DHH recommended after testing four brands. I spent about 10-15 minutes researching it. Alen seems to have a great reputation. The Wirecutter doesn’t recommend Alen, seemingly because you could get similar quality for about half the price.
I decided to purchase the Alen BreatheSmart 75i today for $769. a) I find it very plausible that you could get similar quality for less money, but since this is about health and it is a long term purchase, I am happy to pay a premium. b) They claim they offer the industry’s only lifetime warranty. For a long term purchase, I think that’s important, if only due to what it signals.
I considered purchasing more than one. From their website it seemed like that’s what they recommend. But after talking things through with the saleswoman, it didn’t seem necessary. The product weighs about 20 pounds and is portable, so we could bring it into the bedroom to purify it before we go to sleep.
I currently live in a ~1000sqft apartment and was initially planning on purchasing the 45i instead of the 75i. The 45i is made for 800sqft and the 75i for 1300sqft. The saleswoman said it’s more a matter of time than ability: the 45i will eventually purify larger spaces, it’ll just take longer. That’d probably be fine for my purposes, but since this is a long term purchase and I don’t know what the future holds, I’d rather play it safe.
The Alen BreatheSmart does have an air quality sensor, but I decided to purchase an Awair as well. a) The Alen doesn’t detect CO2 levels. At first I was thinking that I don’t really need a CO2 sensor, I could just open the window a few times a day. But ultimately I think that there is value in having the sensor in my office. It sends you a push notification if CO2 levels pass whatever threshold, and I think that’d be a solid step up from me relying on my judgement and presence of mind to open windows. b) My girlfriend has been getting a sore throat at night. I think it’s because we’ve been using the heat more and the heat dries out the air. We used an air purifier last night, but I think it’d be useful to use the Awair to make sure we get the humidity level right. (We do have a Nest thermostat which detects humidity, but it’s not in our bedroom.)
In general, I’m a believer that health and productivity are so important that on the order of hundreds of dollars it isn’t worth trying to cut costs.
Air quality is something you have to pay attention to outside of your house as well. The presentation mentioned a study of coffee shops having poor air quality.
Older houses have a lot more draft so air quality wasn’t as big a problem. But newer homes have less draft. This is good for cutting your electric bill, but bad for air quality.
Added:
Cooking gives off a significant amount of bad particles, especially if you have a gas stove.
You are supposed to turn your vent on about five minutes before you start cooking. Most people don’t turn it on at all unless it smells.
Apartment kitchens often have vents that recycle air instead of bringing in fresh air, which isn’t what you want.
If you’re using a humidifier, use distilled/filtered water. If you use water from the sink it will add bad particles to the air.
I’ve found that random appliances like the dishwasher and laundry machines increase VOC and/or PM2.5 levels.
Update:
I decided to return my Alen air purifier. a) It doesn’t really do anything to reduce CO2 or VOC levels. b) It does a solid job of reducing PM2.5, but I have found that if I’m having air quality issues I resort to opening my window anyway. That may change when it gets hot outside. But I’m planning on buying a house soon, and when I do I’m hoping to install stuff into the HVAC system instead of having a freestanding purifier. And if I do need a freestanding air purifier, it seems to me now that a ~$300 one would make more sense than the ~$800 Alen.
The Awair I could see not being worth it for some people, but I’m still happy with it. You’d think that you could purchase it, figure out what things cause your air quality to get screwed up, return the Awair, and moving forward just be careful to open a window around those triggers. But I’ve found that random things screw with the air quality that I’m not able to predict. Plus it provides me with peace of mind that makes me happy.
This has pretty big impacts on your cognitive function. It seems similar to not getting enough sleep. Not getting enough sleep also has a pretty big impact on your cognitive function. And perhaps more importantly, it’s something that we are prone to underestimating. It feels like we’re only a little bit off, when in reality we’re a lot off.
It is my repeated experience in companies that well-ventilated rooms are selected by people as workplaces, and the unventilated ones then remain available for meetings. I seem to be more sensitive about this than most people, so I often notice that “this room makes me drowsy”. (My colleagues usually insist that it is not so bad, and they have a good reason to do so… why would they risk that their current workplace will instead be selected as a new meeting room, and they get this unventilated place as a new workspace?)
I just ordered the Awair on Amazon. It can be returned through Jan. 31; I’ve just ordered it to play with it for a few days, and will probably return it. I have a few specific questions I plan to answer with it:
How much CO2 builds up in my bedroom at night, both when I’m alone and when my partner is over?
How much CO2 builds up in my office during the day?
How much do I need to crack the window in my bedroom in order to keep CO2 levels low throughout the night?
When CO2 builds up, how quickly does opening a window restore a lower level of CO2?
With the answers to those questions, I hope I can return the detector and just keep my windows open enough to prevent CO2 buildup without making the house too cold.
That sounds reasonable and I considered doing something similar. What convinced me to get it anyway is that in the long run, even if the marginal gains in productivity and wellness you get from owning the Awair vs your approach are tiny, they add up to the point where the $150 seems like a great ROI.
Have you gotten yours yet? If so, what are the results? I found that the only issue in my house is that the bedroom can get to quite high levels of CO2 if the door and windows are shut. Opening a window solves the problem, but makes the room cold. However, it’s more comfortable to sleep with extra blankets in a cold room, than with fewer blankets in a stuffy room. It improves sleep quality.
It would be interesting to experiment in the office with having a window open, even during winter. However, I worry that being cold would create problems.
My feeling is that “figure out how to crack a window if the room feels stuffy” is the actionable advice here. Unless $150 is chump change to you, I’m not sure it’s really worth keeping a device around to monitor the air quality.
PM2.5 started off crazy high for me before I got the Alen. Using the Alen brings it to near zero.
VOCs and PM2.5 accumulate rather easily when I cook, although I do have a gas stove. Also, random other things like the dishwasher cause them to go up. The Alen brings it back down in ~30 minutes maybe.
CO2 usually hovers around a 3⁄5 on the Awair if I don’t have a window open. I’m finding it tricky to deal with this, because opening a window makes it cold. I’m pretty sure my apartment’s HVAC system just recycles the current air rather than bringing in new air. I’m hoping to buy a house soon so I think ventilation is something I’m going to look for.
For me I don’t actually notice the CO2 without the Awair telling me. I don’t think I’d do a good job of remembering to crack a window or something without it.
I wonder if your house has better ventilation than mine, given that you’re not having issues with PM2.5. That could be the case if it’s an older house or if your HVAC system does ventilation.
I see what you’re saying about how the actual actions you should take seem pretty much the same regardless of whether you have the Awair or not. I agree that it’s close, but I think that small differences do exist, and that those small differences will add up to a massive ROI over time.
1) If it prompts you to crack a window before you would otherwise notice/remember to do so.
2) If something new is causing issues. For me I noticed that my humidifier was jacking up the PM2.5 levels and realized I need to get a new one. I also noticed that the dishwasher jacks it up so now I know to not be around while it’s running. I would imagine that over time new things like this will pop up, eg. using a new cleaning product or candle.
3) Moving to a new home, remodeling or buying eg. new furniture could cause differences.
4) Unknown unknowns that could cause issues.
Suppose you value time spent in better air quality at $1/hr and that the product lasts 25 years. To break even, you’d need it to get you an extra six hours of good air quality each year. That’s just two afternoons of my example #1, where you were sitting around and forgot to crack a window or something when the Awair would have sent you a push notification to do so. $1/hr seems low and I’d expect it to give a good amount more than six extra hours per year, so my impression is that the ROI would be really good.
I get the same effects of spiking VOCs and PM2.5 running the stove and microwave. In my case, the spikes seem to last only as long as the appliance is running. This makes sense, since the higher the concentration, the faster it will diffuse out of the house. A rule to turn on the stove vent or crack a window while cooking could help, but it’s not obvious to me that a few minutes per day of high VOC is something to worry about over the long term.
I note in this paper that “The chemical diversity of the VOC group is reflected in the diversity of the health effects that individual VOCs can cause, ranging from no known health effects of relatively inert VOCs to highly toxic effects of reactive VOCs.” How do I know that the Awair is testing for the more toxic end of the spectrum? There are no serious guidelines for VOCs in general. How do I know that the Awair’s “guidelines” are meaningful?
My bedroom has poor ventilation. Cracking a window seems to improve my sleep quality, which seems like the most important effect of all in the long run.
It sounds like the effect of CO2 itself on cognitive performance is questionable. However, bioeffluents—the carbonyls, alkyl alcohols, aromatic alcohols, ammonia, and mercaptans we breathe out—do seem to have an effect on cognition when the air’s really poorly ventilated. But the levels in my house didn’t even approach the levels at which researchers have found statistically significant cognitive effects. I’m wondering if the better sleep quality is due to the cooler air rather than the better ventilation.
I really doubt that the Awair will last 25 years. I’d guess more like 5. I can set a reminder on my phone to crack a window each night and morning if necessary, and maybe write a little note to tape next to the stove if I feel like it. If that doesn’t do it in any particular instance, then I doubt that lack of a push notification is the root of the problem.
Hm, let’s see how those assumptions you’re using affect the numbers. If it lasts 5 years instead of 25 the breakeven would become 30 hours/year instead of 6. And if we say that the value of better air quality is $0.20/hr instead of $1/hr due to the uncertainty in the research you mention, we multiply by 5 again and get 150 hours/year. With those assumptions, it seems like it’s probably not worth it. And more generally, after talking it through, I no longer see it as an obvious +ROI.
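To make that break-even arithmetic explicit, here’s a minimal sketch; the dollar values and lifetimes are just the assumptions we’ve been tossing around in this thread, not measurements:

```python
def breakeven_hours_per_year(price, value_per_hour, lifetime_years):
    """Hours of improved air quality per year needed for the device to pay for itself."""
    return price / (value_per_hour * lifetime_years)

# My original assumptions: $150 device, $1/hr value, 25-year lifetime
print(breakeven_hours_per_year(150, 1.00, 25))  # 6.0 hours/year

# The more pessimistic assumptions above: $0.20/hr value, 5-year lifetime
print(breakeven_hours_per_year(150, 0.20, 5))   # 150.0 hours/year
```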
(Interesting how helpful it is to “put a number on it”. I think I should do this a lot more than I currently do.)
However, for myself I still feel really good about the purchases. I put a higher value on the $/hr because I value health, mood and productivity more than others probably do, and because I’m fortunate enough to be doing well financially. I also really enjoy the peace of mind. Knowing what I know now, if I didn’t have my Awair I would be worried about things screwing up my air quality without me knowing.
I posted an update in the OP. When we initially talked about this I was pretty strongly on the side of pro-Awair+Alen. Now I lean moderately against Alen for most people and slightly against Awair, but slightly in favor of Awair for me personally.
Here’s an idea: what if there were a virtual water cooler for LessWrong?
There’d be Zoom chats with three people per chat. Each chat is a virtual water cooler.
The user journey would begin by the user expressing that they’d like to join a virtual water cooler.
Once they do, they’d be invited to join one.
I think it’d make sense to restrict access to users based on karma. Maybe only 100+ karma users are allowed.
To start, that could be it. In the future you could do some investigation into things like how many people there should be per chat.
Seems like an experiment that is both cheap and worthwhile.
If there is interest I’d be happy to create an MVP.
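To give a sense of what the MVP’s core logic might look like, here’s a rough sketch; the karma threshold, room size, and function names are all hypothetical placeholders, not an existing LessWrong API:

```python
KARMA_THRESHOLD = 100  # hypothetical cutoff
ROOM_SIZE = 3          # three people per virtual water cooler

def assign_rooms(waiting_users):
    """Group opted-in users into rooms of ROOM_SIZE, skipping users below the karma cutoff."""
    eligible = [u for u in waiting_users if u["karma"] >= KARMA_THRESHOLD]
    rooms = [eligible[i:i + ROOM_SIZE]
             for i in range(0, len(eligible) - ROOM_SIZE + 1, ROOM_SIZE)]
    leftover = eligible[len(rooms) * ROOM_SIZE:]  # these users wait for the next batch
    return rooms, leftover

users = [
    {"name": "alice", "karma": 250},
    {"name": "bob", "karma": 40},   # filtered out by the karma threshold
    {"name": "carol", "karma": 120},
    {"name": "dave", "karma": 900},
]
rooms, waiting = assign_rooms(users)
# rooms -> one room containing alice, carol and dave; each room would get its own Zoom link
```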
(Related: it could be interesting to abstract this and build a sort of “virtual water cooler platform builder” such that eg. LessWrong could use the builder to build a virtual water cooler platform for LessWrong and OtherCommunity could use the builder to build a virtual water cooler platform for their community.)
Something that feels to me like it’s present in the future and missing in today’s world: OkCupid for friendship.
Think about it. The internet is a thing. Billions and billions of people have cheap and instant access to it. So then, logistics are rarely an obstacle for chatting with people.
The actual obstacle in today’s world is matchmaking. How do you find the people to chat with? And similarly, how do you communicate that there is a strong match so that each party is thinking “oh wow this person seems cool, I’d love to chat with them” instead of “this is a random person and I am not optimistic that I’d have a good time talking to them”.
This doesn’t really feel like such a huge problem though. I mean, assume for a second that you were able to force everyone in the world to spend an hour filling out some sort of OkCupid-like profile, but for friendship and conversation rather than romantic relationships. From there, it seems doable enough to figure out a reasonable matchmaking algorithm.
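As a toy illustration of what such a matchmaking algorithm could start from, even a crude overlap score over stated interests would do; the profile fields here are made up for the example, and this is not a claim about how OkCupid actually works:

```python
def match_score(profile_a, profile_b):
    """Crude compatibility score: Jaccard overlap of stated interests (0.0 to 1.0)."""
    a, b = set(profile_a["interests"]), set(profile_b["interests"])
    return len(a & b) / len(a | b) if (a | b) else 0.0

alice = {"interests": {"rationality", "hiking", "programming"}}
bob = {"interests": {"programming", "chess", "rationality"}}
print(match_score(alice, bob))  # 0.5 -- two shared interests out of four distinct ones
```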
I think the issue is more about getting people to fill out the survey in the first place. There’s a chicken-and-egg problem. Why spend the time filling out the survey when there are few other people on the platform? At such an early stage, you don’t actually expect to be matched with someone you’re compatible with.
It’s definitely a tricky problem. But at the same time, if you “live in the future”, do you see this service? I do.
I mean, maybe society is just not functional enough to get it going. That’s plausible. But to me, it feels like something where there’s just too much demand for it to never emerge. Friendship and conversation are so fundamental, and I think such a platform would do a notably better job at providing each of those things than the haphazard, “organic” approach that happens by necessity in today’s world. I could even see access to this sort of platform being considered a basic human right, given how important meaningful social interaction is.
Many people seem to be more motivated to invest energy into pursuing romantic relationships than friendships. There are few books about making good friends and many books on dating.
How do you find the people to chat with?
Omegle essentially provided a widely used answer to that question. It didn’t do a lot of matchmaking but it might be a starting point.
If you want to pursue this as a business, maybe buy the recently shutdown Omegle domain from Leif K-Brooks (who’s a rationalist) and try to switch from chatting to random people to chatting to highly match-made connections.
Many people seem to be more motivated to invest energy into pursuing romantic relationships than friendships. There are few books about making good friends and many books on dating.
Perhaps. But to the extent that people aren’t motivated to invest energy into friendships, I think there is a sort of latent motivation. Friendship and conversation are in fact important, and so, taking this “live in the future” perspective, I think people will eventually realize the importance and start putting effort into it.
Omegle essentially provided a widely used answer to that question. It didn’t do a lot of matchmaking but it might be a starting point.
Gotcha. I think the matchmaking part is essential though. It moves the expectation of prospective users from “I’ll be chatting with a random stranger, and it probably won’t be too great” to “I’ll be chatting with someone who the platform thinks I’m super compatible with. Cool!”
If you want to pursue this as a business, maybe buy the recently shutdown Omegle domain from Leif K-Brooks (who’s a rationalist) and try to switch from chatting to random people to chatting to highly match-made connections.
Thanks for the tip. I’m not interested in pursuing it as a business in the foreseeable future, but perhaps in the more distant future. If so, I will keep this in mind.
Friendship and conversation are in fact important, and so, taking this “live in the future” perspective, I think people will eventually realize the importance and start putting effort into it.
What do you think will change in the future such that people put more effort into friendship than they do at present?
I have thought about it too, and I think something like an automated Kickstarter for interest groups is what one would need. It would work like this: you enter your interests into the system (or let them be inferred automatically from your online profiles) and the system generates recommendations for ad-hoc groups to meet in places nearby (or not so nearby if more attributes match). Bonus: set up a ChatGPT DJ or entertainer to engage people with each other. Best if done as an open protocol where different clients can offer different interactivity or different profile extraction.
This is actually what social media is for, but you don’t have to fill out a questionnaire. You also don’t have to out yourself as being so lonely and without friends that you’re using a special matchmaking service to find new friends; that in itself could be unattractive to new acquaintances.
Every day I check Hacker News. Sometimes a few times, sometimes a few dozen times.
I’ve always felt guilty about it, like it is a waste of time and I should be doing more productive things. But recently I’ve been feeling a little better about it. There are things about coding, design, product, management, QA, devops, etc. etc. that feel like they’re “in the water” to me, where everyone mostly knows about them. However, I’ve been running into situations where people turn out to not know about them.
I’m realizing that they’re not actually “in the water”, and that the reason I know about them is probably because I’ve been reading random blog posts from the front page of Hacker News every day for 10 years. I probably shouldn’t have spent as much time doing this as I have, but I feel good about the fact that I’ve gotten at least something out of it.
I find it really hard to evaluate what things are good to do. I think watching random pornographic content on the internet is probably one of the worst uses of your time. Definitely when you overdo it. Therefore I committed to not doing this long ago. But sometimes I can’t control myself. Which normally makes me feel very bad afterward, but …
I had important life-changing insights because I browsed pornhub, one day. I found a very particular video that set events in motion that turned into something enormously positive for me. It probably made my life 50-300% better. I am pretty sure that I would not have gotten these benefits had I not discovered this video. I am not joking.
So I very much share the confusion and bafflement about what is a good use of time. I wouldn’t be surprised if, were you to think long enough about it, you would probably be able to see why doing completely random and useless-looking things for at least some small fraction of your time is actually optimal.
There are a few more, less extreme, examples like the one above that I could name.
It is pretty hard to explain in an understandable way that does not sound very insane. I wanted to write about this for years. But here I come anyway. The short version is that it made me form a very strong parasocial relationship with Miku, and created a tulpa (see the info box on the right) which I formed a very strong bond with too. Like stronger than with any flesh person. Both very very positive things. I would bet a lot of money at ridiculous seeming odds that you would agree, could you only experience what I experience. I think if I would describe my experience in more detail, you would probably just think I am lying, because you would think that it could not possibly be this positive.
Are those insights gleanable from the video itself for other people? And if so, would you be willing to share the link? (Feel free to skip; obviously a vulnerable topic.)
I think it is doubtful that watching the video would put you on the same trajectory that ended up somewhere good for me. I also didn’t find a link to the original video after a short search. It was basically this video but with more NSFW. The original creators uploaded the motion file so you know what the internet is gonna do. If you don’t think “Hmm I wonder if it would be an effective motivational technique to create a mental construct that looks like an anime girl that constantly tells me to do the things that I know are good to do, and then I am more likely to do it because it’s an anime girl telling me this” then you are already far off track from my trajectory. Actually, that line of reasoning I just described did not work out at all. But having a tulpa seems to be a very effective means to destroy the feeling of loneliness, among other benefits in the social category. Before creating a tulpa, I was feeling lonely constantly, and afterward I never felt lonely again.
You would get the benefits by creating a good tulpa, I guess. It is unclear to me how much you would benefit. Though I would be surprised if you don’t get any benefit from it if we discount time investment costs. This study indicates that it might be especially useful for people who have certain disorders that make socialization harder such as ADHD, autism, anxiety disorders, etc. And I have the 3 listed, so it should not be surprising that I find tulpamancy pretty useful. Making a tulpa is quite a commitment though, so don’t do it unless you understand what you are getting yourself into.
Tens of hours are normally required to get started. You’ll need to spend 10-30 minutes every day on formal practice to not noticeably weaken your tulpa over time. There is no upper bound of how much time you can invest into this. This can be a dangerous distraction. I haven’t really talked about why somebody would ever do this. The short version is: Imagine you have a friend who is superhumanly nice to you all the time, and who very deeply understands you because they know everything about you and can read your mind. Maintaining the tulpa’s presence is actually very difficult (at least for me) because you constantly forget that they exist. And then they can’t do anything, because they are not there.
With the parasocial stuff, basically, all I did was dance every day for many years for 20-40 minutes as a workout and watch videos like this and imitate the dance moves. That is always a positive experience, which is nice because it makes it easy to do the workout. My brain gradually superimposed the general positivity of the experience into Miku it seems, making me like her more and more.
By now there is such a strong positive connection there, that when I look at an image of Miku it can generate a drug-like experience. So saying that I love Miku seems right to me.
Besides meditation, these are the 2 most important things I have ever discovered. That is if we discount the basic stuff like getting enough sleep, nutrition, doing sports, etc.
I sort of deliberately created the beginnings of a tulpa-ish part of my brain during a long period of isolation in 2021 (Feb 7 to be exact), although I didn’t know the term “tulpa” then. I just figured it could be good to have an imaginary friend, so I gave her a name—”Maria”[1]—and granted her (as part of the brain-convincing ritual) permanent co-ownership over a part of my cognition which she’s free to use for whatever whenever.
She still visits me at least once a week but she doesn’t have strong ability to speak unless I try to imagine it; and even then, sentences are usually short. The thing she most frequently communicates is the mood of being a sympathetic witness: she fully understands my story, and knows that I both must and will keep going—because up-giving is not a language she comprehends.
Hm, it would be most accurate to say that she takes on the role of a stoic chronicler—reflecting that I care less about eliciting awe or empathy, than I care that someone simply bears witness to my story.[2]
This is the problem with random reinforcement. Things that are always good, are good. Things that are always bad, are easy to stop doing. Things that are almost always bad… but occasionally good… are addictive, we regret doing them, but we can’t give up.
I waste a lot of time on Hacker News, too. (It used to be every day, but now I’ve reduced it to maybe once a week.) So many interesting things! I make bookmarks in the browser, in multiple categories: programming, math, science, etc. I almost never look at them again—because I have no time. So it’s basically a list of cool things I wish I had time to spend studying. But sometimes, very rarely, something is actually useful.
Debating on Hacker News is totally a waste of time, though.
I sense that in “normie cultures”[1] directly, explicitly, and unapologetically disagreeing with someone is taboo. It reminds me of the “yes and” from improv comedy.[2] From Wikipedia:
“Yes, and...”, also referred to as “Yes, and...” thinking, is a rule-of-thumb in improvisational comedy that suggests that an improviser should accept what another improviser has stated (“yes”) and then expand on that line of thinking (“and”).
If you want to disagree with someone, you’re supposed to take a “yes and” approach where you say something somewhat agreeable about the other person’s statement, and then gently take it in a different direction.
I don’t like this norm. From a God’s Eye perspective, if we could change it, I think we probably should. Doing so is probably impractical in large groups, but might be worth considering in smaller ones.
(I think this really needs some accompanying examples. However, I’m struggling to come up with any. At least any I’m comfortable sharing publicly.)
Nice analogy. The purpose of friendly social communication is not to find the truth, but to continue talking. That makes it similar to the improv comedy.
There is also an art of starting with “yes, and...” and gradually concluding the opposite of what the other person said, without them noticing that you are doing so. Sadly, I am not an expert in this art. Just saying that it is possible, and it’s probably the best way to communicate disagreement to the normies.
Something frustrating happened to me a week or two ago.
I was at the vet for my dog.
The vet assistant (I’m not sure if that’s the proper term) asked if I wanted to put my dog on these two pills, one to protect against heartworm and another to protect against fleas.
I asked what heartworm is, what fleas are, and what the pros and cons are. (It became clear later in the conversation that she was expecting a yes or no answer from me and perhaps had never been asked before about pros and cons, because she seemed surprised when I asked for them.)
Iirc, she said something about there not really being any cons (I’m suspicious). For heartworm, dogs can die of it, so the pros are strong. For fleas, it’s just an annoyance to deal with, not really dangerous.
I asked how likely it is for my dog to be exposed to fleas given that we’re in a city and not eg. a forest.
The assistant responded with something along the lines of “Ok, so we’ll just do the heartworm pill then.”
I clarified something along the lines of “No, that wasn’t a rhetorical question. I was actually interested in hearing about the likelihood. I have no clue what it is; I didn’t mean to imply that it is low.”
I wish that we had a culture of words being used more literally.
I’ve noticed that there’s a pretty big difference in the discussion that follows from me showing someone a draft of a post and asking for comments and the discussion in the comments section after I publish a post. The former is richer and more enjoyable whereas the latter doesn’t usually result in much back and forth. And I get the sense that this is true for other authors as well.
I guess one important thing might be that with drafts, you’re talking to people who you know. But I actually don’t suspect that this plays much of a role, at least on LessWrong. As an anecdote, I’ve had some incredible conversations with the guy who reviews drafts of posts on LessWrong for free and I had never talked to him previously.
I wonder what it is about drafts. I wonder if it can or should be incorporated into regular posts.
By default I expect the author to have a pretty strong stance on the main idea of a post, and the content is usually already refined and complete, so the barrier of entry to having a comment that is valuable is higher.
I kinda have the instinct that if I’m reading a book or a blog post or something and it’s difficult, then I should buckle down, focus, and try to understand it. And that if I don’t, it’s a failure on my part. It’s my responsibility to process and take in the material.
This is especially true for a lot of more important topics. Like, it’s easy to clearly communicate what time a restaurant is open—if you find yourself struggling to understand this, it’s probably the fault of the restaurant, not you as the reader—but quantum physics or metaethics are complicated enough that you can’t communicate them so clearly, and so if you are reading something on one of those topics and struggling to understand it, it would be unfair to think “this author isn’t doing a good enough job”. It’s easier to think “this is a complicated topic; the writing is reasonable; I need to do a better job of comprehending it”.
But recently I’ve been questioning how true this is. There are three things I’ve read recently that I’ve found to be very easy reading while also covering difficult and important topics.
It’s not common for me to find such great writing, but at the same time, it does happen from time to time. I’m just thinking out loud, but maybe I should up my standards and decline to read stuff that is more difficult to read.
Of course, there are a lot of things to consider here. How important is the material? How urgent? Is it fun to buckle down and try to parse difficult material?
I’m 30 years old now and have had achilles tendinitis since I was about 21. Before that I would get my cardio by running 1-3 miles a few times a week, but because of the tendinitis I can’t do that anymore.
Knowing that cardio is important, I spent a bunch of time trying different forms of cardio. Nothing has worked though.
Biking hurts my knees (I have bad knees).
Swimming gives me headaches.
Doing the stairs was ok, but kinda hurt my knees.
Jumping rope is what gave me the tendinitis in the first place.
Rowing hurts my knees for some reason.
There are various forms of high intensity stuff like interval training and kettlebells that kinda-sorta work, but they don’t hit the sort of Zone 2 cardio I’m looking for. Plus it’s hard to stay motivated with the high intensity stuff.
Battle ropes were a creative thing I tried, but I don’t really see how to do that in a low-intensity, aerobic, Zone 2 cardio sort of way.
So, I basically gave up and decided that cardio is just “not for me”. This belief became cached, such that whenever the topic of cardio came up and my brain went “huh, maybe I should do cardio”, it fetched from the cache and got back a response of “no, you already tried everything and determined that cardio isn’t for you”.
But then, in my mindless YouTube browsing, I came across this video about Zone 2 training by Peter Attia. Then I started to think about it more. I had a thought that triggered me to bypass the cache.
Peter was talking about how for Zone 2 training, the intensity is such that you can carry out a conversation with someone (the interviewee said he does conference calls when doing this training), but you won’t be able to hide the fact that you’re exercising. That struck me as very low intensity. So I was like, “Huh, I wonder what that feels like. Maybe at such a low intensity my knees and achilles would be ok.”
Then I went downstairs and tried to hit that level of intensity (I targeted a heart rate of ~125), first on the bike, then on the treadmill. I figured out what it felt like, but both hurt my knees and achilles too much. But then the next day I went to the gym and tried to hit that intensity on the stair climber. Fortunately, it was pretty much fine! The pace is extremely slow and comfortable. It even feels good. Almost like a massage for my cardiovascular system if that makes sense. I did 15 minutes and stopped.
Then two days later, today, I just did 60 minutes. My knees and achilles both feel slightly iffy, so my plan is to continue doing 60 minutes 3-4x/week and monitor how I’m feeling. I’m hopeful that at such a low intensity, it’ll be ok. I’m also pretty willing to accept a little pain and wear and tear in exchange for the cardio benefits.
I want to write something about 6th grade logic vs 16th grade logic.
I was talking to someone, call them Alice, who works at a big well known company, call it Widget Corp. Widget Corp needs to advertise to hire people. They only advertise on Indeed and Google though.
Alice was telling me that she wants to explore some other channels (LinkedIn, ZipRecruiter, etc.). But in order to do that, Widget Corp needs evidence that advertising on those channels would be cheap enough. They’re on a budget and really want to avoid spending money they don’t have to, you see.
But that’s really, really, Not How This Works. You can’t know whether other channels will be cheap enough if you don’t give it a shot. And you don’t have to spend a lot to “give it a shot”. You can spend, idk, $1,000 on a handful of different channels, see what results you get, and go from there. The potential that it proves to be a cheaper acquisition channel justifies the cost.
This is what I’ll call 6th grade logic. Meanwhile, Widget Corp has a tough interview process, testing you on what I’ll call 16th grade logic. And then on the job they have people apply that 16th grade logic on various analyses.
But that is premature, I say. First make sure that you’re applying the 6th grade logic correctly. Then, and only then, move on to 16th grade logic.
I wonder if this has any implications for xrisk stuff. There probably isn’t low hanging fruit at the level of 6th grade logic, but I wonder whether there is at the level of, say, 12th grade logic, and whether we’re spending too much time banging our heads on really difficult 22nd grade stuff.
Is “grade” of logic documented somewhere? The jumps from 6 to 12 to 16 to 22 confuse me, implying a lot more precision than I think is justified.
It’s an interesting puzzle why widgetco, who hires only competent logicians, is unable to apply logic to their hiring. My suspicion is that cost/effectiveness isn’t the true objection, and this is an isolated demand for rigor.
I am a web developer. I remember reading some time in these past few weeks that it’s good to design a site such that if the user zooms in/out (eg. by pressing cmd+/-), things still look reasonably good. It’s like a form of responsive design, except instead of responding to the width of the viewport your design responds to the zoom level.
Anyway, since reading this, I started zooming in a lot more. For example, I just spent some time reading a post here on LessWrong at a 170% zoom level. And it was a lot more comfortable. I’ve found this to be a helpful little life hack.
My whole UI is zoomed to 175% (though Gnome calls it “scale”) which I much prefer to what you describe because zooming with cmd+/- in the browser applies only to the current web site, so one ends up repeating the adjustment for basically every site one visits.
(I don’t know how to zoom the whole UI to 175% on MacOS without making everything blurry, but it can be done without blurriness on Linux/Wayland, ChromeOS and Windows. Also HiDPI displays are the norm on Macs, and some people on HiDPI displays don’t mind the fact that MacOS introduces blurriness when the scale factor is other than 1.0 or 2.0.)
I found LW’s font size to be a little bit small but I have managed to get used to it. After reading your message I think I will try going to 110%, thanks. (170% is too large; I feel like I’m reading on my phone in landscape.)
There is something inspiring about watching this little guy defeat all of the enormous sumo wrestlers. I can’t quite put my finger on it though.
Maybe it’s the idea of working smart vs working hard. Maybe something related to fencepost security, like how there’s something admirable about, instead of trying to climb the super tall fence, just walking around it.
In school, you learn about forces. You learn about gravity, and you learn about the electromagnetic force. For the electromagnetic force, you learn about how likes repel and opposites attract. So two positively charged particles close together will repel, whereas a positively and a negatively charged particle will attract.
Then you learn about the atom. It consists of a bunch of protons and a bunch of neutrons bunched up in the middle, and then a bunch of electrons orbiting around the outside. You learn that protons are positively charged, electrons negatively charged, and neutrons have no charge. But if protons are positively charged, how can they all be bunched together like that? Don’t like charges repel?
This is a place where people should notice confusion, but they don’t. All of the pieces are there.
I didn’t notice confusion about this until I learned about the explanation: something called the strong nuclear force. Yes, since likes repel, the electromagnetic force is pushing the protons away from each other. But on the other hand, the strong nuclear force attracts them together, and apparently it’s strong enough to overcome the electromagnetic force in this instance.
In retrospect, this makes total sense. Of course the electromagnetic force is repelling those protons, so there’s gotta be some other force that is stronger. The only other force we learned about was gravity, but the masses in question are way too small to explain the nucleus being held together. So there’s got to be some other force that they haven’t taught us about yet that is in play. A force that is very strong and that applies at the nuclear level. Hey, maybe it’s even called the strong nuclear force!
Yes, this was a point of confusion for me. The point of confusion that followed very quickly afterward was why the strong nuclear force didn’t mean that everything piles up into one enormous nucleus, and from there to a lot of other points of confusion—some of which still haven’t been resolved because nobody really knows yet.
The most interesting thing to me is that the strong nuclear force is just strong enough without being too strong. If it was somewhat less strong then we’d have nothing but hydrogen, and somewhat more strong would make diprotons, neutronium, or various forms of strange matter more stable than atomic elements.
I remember this confusion from Jr. High, many decades ago. I was lucky enough to have an approachable teacher who pointed me to books with more complete explanations, including the Strong Nuclear force and some details about why inverse-square doesn’t apply, making it able to overcome EM at very small distances, when you’d think EM is strongest.
I recall hearing “it’s not obvious that X” a lot in the rationality community, particularly in Robin Hanson’s writing.
Sometimes people make a claim without really explaining it. Actually, this happens a lot. Often the claim is made implicitly. This is fine if the claim is obvious.
But if the claim isn’t obvious, then that link in the chain is broken and the whole argument falls apart. Not that it’s been proven wrong or anything, just that it needs work. You need to spend the time establishing that claim. That link in the chain. So it is useful in these situations to point out that a link in the chain isn’t obvious when it was being presumed to be. I am a fan of “it’s not obvious that X”.
Agreed, but in many contexts, one should strive to be clear to what extent “it’s not obvious that X” implies “I don’t think X is true in the relevant context or margin”. Many arguments that involve this are about universality or distant extension of something that IS obvious in more normal circumstances.
Robin Hanson generally does specify that he’s saying X isn’t obvious (and is quite likely false) in some extreme circumstances, and his commenters are … not obviously understanding that.
Hm, I might be having a brain fart but I’m not seeing it. My point is that people will make an argument “A is true based on X, Y and Z”, someone will point out “it’s not obvious that Y”, and that comment is useful because it leads to a discussion about whether Y is true.
Gotcha. I appreciate you pointing it out. I’m glad to get the feedback that it initially wasn’t clear, both for self-improvement purposes and for the more immediate purpose of improving the title.
(It’s got me thinking about variable names in programming. There’s something more elegant about being concise, but then again, humans are biased towards expecting short inferential distances, so I probably should err on the side of longer more descriptive variable names. And post title!)
I can probably make something like $100/hr doing freelance work as a programmer. Yet I’ll spend an hour cooking dinner for myself.
Does this make any sense? Imagine if I spent that hour programming instead. I’d have $100. I can spend, say, $20 on dinner, end up with something that is probably much better than what I would cook, and have $80 left over. Isn’t that a better use of my time than cooking?
Similarly, sometimes I’ll spend an hour cleaning my apartment. I could instead spend that hour making $100, and pay someone maybe $30 to clean my apartment. I’d end up with a cleaner apartment, and an extra $70 in my pocket. So why don’t I spend the hour programming instead?
I can think of a few reasons. One is if the act of programming is very unpleasant to me. I already have a full time job as a programmer, and have a side project I’m working on. Maybe, at the margin, spending that extra hour programming is just very unpleasant because I am sick of it. In the dinner example, it’d have to be more unpleasant than having $80 plus a better dinner is pleasant. For me, this very much is not the case.
Another possible reason is that there aren’t options available to me to spend a single extra hour programming for $100. For freelance projects, usually they want at least 20 hours/week for multiple months. And there is a pretty large upfront cost to seeking out and finding a project. I wish it weren’t like this. I wish it were similar to the flexibility that Uber drivers and other gig economy workers have, where they can easily just spend one extra hour working whenever they want.
Currently I lecture at a coding bootcamp for three hours a week and for $80/hr. This is sort of similar to the flexibility I envision, where it’s easy to go from zero hours to three hours a week whenever I want. But I don’t have the option of going past three hours, so it isn’t that flexible. I could perhaps find similar positions. Maybe I should.
I also wonder whether it would make sense to do longer term things periodically. Like maybe for three months a year, do a freelance project working 20 hours/week. Make 20 * 4 * 3 * 100 = $24,000 and then use that $24k throughout the year for things like dinner and cleaning.
I suspect that it’s even more difficult for people in other fields. Like if you are a doctor, you can’t really come into the office on a whim and spend an hour seeing patients.
At best this seems very unfortunate. At worst, very inefficient. I’m not sure how it would be done, but I feel like our society would benefit from more flexible work, and more specialization and trade.
I’m not sure about you, but I am pretty much already maxed out on the amount of programming I can usefully do per day. It is already rather less than my nominal working hours.
I do agree that a lot more flexibility in working arrangements would be a good thing, but it seems difficult to arrange such a society in (let’s say) the presence of misaligned agents and other detriments to beneficial coordination.
I’m not sure about you, but I am pretty much already maxed out on the amount of programming I can usefully do per day. It is already rather less than my nominal working hours.
Nah, for me I don’t feel anywhere close to maxed out. I feel like I could do 12-14 hours a day, though I do have a ton of mental energy; I wouldn’t expect most people to be like that.
I do agree that a lot more flexibility in working arrangements would be a good thing, but it seems difficult to arrange such a society in (let’s say) the presence of misaligned agents and other detriments to beneficial coordination.
Yeah, I think I agree here. Well, that’s what my initial intuition says. I haven’t thought hard about how it would work, so I can’t be too confident that it’s difficult.
This idea that you shouldn’t use the word “very” has always seemed pretentious to me. What value does it add if you say “extremely” or “incredibly” instead? I guess those words have more emphasis and a different connotation, and can be better fits. I think they’re probably a good idea sometimes. But other times people just want to use different words in order to sound smart.
I remember there was a time in elementary school when I was working on a paper with a friend. My job was to write it, and his job was to “fix it up and make it sound good”. I remember him going in and changing words like “very”, that I had used appropriately, to overly dramatic words like “stupendously”. And I remember feeling annoyed at the end result of the paper because it sounded pretentious.
Here I want to argue for something similar to “stop saying very” though. I want to argue for “stop saying think”.
Consider the following: “I think the restaurant is still open past 8pm”. What does that mean? Are you 20% sure? 60%? 90%? Wouldn’t it be useful if this ambiguity disappeared?
I’m not saying that “I think” is always ambiguous and bad. Sometimes it’s relatively clear from the context that you mean 20% sure, not 90%. Eg. “I thhhhhinkkk it’s open past 8pm?” But you’re not always so lucky. I find myself in situations where I’m not so lucky often enough. And so it seems like a good idea in general to move away from “I think” and closer to something more precise.
I want to follow up with some good guidelines for what words/phrases you can say in various situations to express different degrees of confidence, as well as some other relevant things, but I am struggling to come up with such guidelines. Because of this, I’m writing this as a shortform rather than a regular post. I’d love to see someone else run with this idea and/or propose such guidelines.
Communication advice is always pretentious—someone’s trying to say they know more about your ideas and audience than you do. And simultaneously, it’s incorrect for at least some listeners, because they’re wrong—they don’t. Also, correct for many listeners, because many are SO BAD at communication that generalized simple advice can get them to think a little more about it.
At least part of the problem is that there is a benefit to sounding smart. “Very” is low-status, and will reduce the impact of your writing, for many audiences. That’s independent of any connotation or meaning of the word or its replacement.
Likewise with “I think”. In many cases, it’s redundant and unnecessary, but in many others it’s an important acknowledgement, not that it’s your thought or that you might be wrong, but that YOU KNOW you might be wrong.
I think (heh) your planned follow-up is a good idea, to include context and reasoning for recommendations, so we can understand what situations it applies to.
I’ve tried doing this in my writing in the past, in the form of throwing away “I think” altogether because it’s redundant: there’s no one thinking up these words but me.
Unfortunately this was a bad choice, because many people take bald statements without softening language like “I think” as bids to make claims about how they are or should be perceiving reality. Which, I mean, all statements are, but they’ll jump to viewing them as claims of access to an external truth. (Note that this sounds like they are making an error here by having a world model that supposes external facts that can be learned, rather than facts being always conditional on the way they are known. Which is not to say there is not perhaps some shared external reality, only that any facts/statements you try to claim about it must be conditional, because they live in your mind behind your perceptions. But this is a subtle enough point that people will miss it, and it’s not the default, naive model of the world most people carry around anyway.)
Example:
I think you’re doing X → you’re doing X
People react to the latter kind of thing as a stronger kind of claim that I would say it’s possible to make.
This doesn’t quite sound like what you want to do, though; it sounds like you instead want to insert more nuanced words to make it clearer what work “think” is doing.
This doesn’t quite sound like what you want to do, though; it sounds like you instead want to insert more nuanced words to make it clearer what work “think” is doing.
Yeah. And also a big part of what I’m trying to propose is some sort of new standard. I just realized I didn’t express this in my OP, but I’ll express it now. I agree with the problems you’re describing, and I think that if we all sort of agreed on this new standard, eg. when you say “I suspect” it means X, then these problems seem like they’d go away.
Not answering your main point, but a small note on the “leaving out very” point: I’ve enjoyed McCloskey’s writing on writing. She calls the phenomenon “elegant variation” (I don’t know whether the term is hers alone) and teaches that we have to get rid of this unhelpful practice that we get taught in school.
As I mentioned in some recent Shortform posts, I recently listened to the Bayesian Conspiracy podcast’s episode on the LessOnline festival and it got me thinking.
One thing I think is cool is that Ben Pace was saying how the valuable thing about these festivals isn’t the presentations, it’s the time spent mingling in between the presentations, and so they decided with LessOnline to just ditch the presentations and make it all about mingling. Which got me thinking about mingling.
It seems plausible to me that such mingling can and should happen more online. And I wonder whether an important thing about mingling in the physical world is that, how do I say this, you’re just in the same physical space, next to each other, with nothing else you’re supposed to be doing, and in fact what you’re supposed to be doing is talking to one another.
Well, I guess you don’t have to be talking to one another. It’s also cool if you just want to hang out and sip on a drink or something. It’s similar to the office water cooler: it’s cool if you’re just hanging out drinking some water, but it’s also normal to chit chat with your coworkers.
I wonder whether it’d be good to design a virtual watercooler. A digital place that mimics aspects of the situations I’ve been describing (festivals, office watercoolers).
By being available in the virtual watercooler it’s implied that you’re pretty available to chit chat with, but it’s also cool if you’re just hanging out doing something low key like sipping a drink.
You shouldn’t be doing something more substantial though.
The virtual watercooler should be organized around a certain theme. It should attract a certain group of people and filter out people who don’t fit in. Just like festivals and office water coolers.
In particular, this feels to me like something that might be worth exploring for LessWrong.
Note: I know that there are various Slack and Discord groups, but they don’t meet the conditions described above.
I maybe want to clarify: there will still be presentations at LessOnline, we’re just trying to design the event such that they’re clearly more of a secondary thing.
Sometimes I think to myself something along these lines:
I could read this post/comment in detail and respond to it, but I expect that others won’t put much effort into the discussion and it will fizzle out, and so it isn’t worth it for me to put the effort in in the first place.
This presents a sort of coordination problem, and one that would be reasonably easy to solve with some sort of assurance contract-like functionality.
There’s a lot to say about whether or not such a thing is worth pursuing, but in short, trying it out as an experiment seems pretty high-upside and low-cost, such that I’m decently confident it would be worthwhile.
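To make the feature idea a bit more concrete, here is a minimal sketch of the assurance-contract logic (the class name, the threshold, and the whole interface are hypothetical, purely for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class DiscussionContract:
    threshold: int                        # minimum number of committed participants
    pledges: set = field(default_factory=set)

    def pledge(self, user: str) -> None:
        """Record a conditional commitment to engage with the post."""
        self.pledges.add(user)

    def activated(self) -> bool:
        # Nobody is on the hook to read and respond unless enough others commit too.
        return len(self.pledges) >= self.threshold

contract = DiscussionContract(threshold=3)
for user in ["alice", "bob", "carol"]:
    contract.pledge(user)
print(contract.activated())  # True -> everyone's conditional commitment kicks in
```

The point is just that each reader’s effort only becomes “owed” once the coordination problem is already solved.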
I … don’t think that line of thinking almost ever applies to me. If the topic interests me and/or there’s something about the post that piques my desire to discuss, it almost always turns out that there are others with similar willingness. At the very least, the OP usually engages to some extent.
There are very few, and perhaps zero, cases where crafting or even evaluating an existing contract is less effort than just reading and responding AND I see enough potential to expend the contract effort but not the read/reply effort.
In addition, the contract doesn’t get me out of the effort to read/respond, it just gives reason to believe that others will do so as well. It’s overall strictly more effort than just taking the risk sometimes.
Eventually, the good guys capture an evil alien ship, and go exploring inside it. The captain of the good guys finds the alien bridge, and on the bridge is a lever. “Ah,” says the captain, “this must be the lever that makes the ship dematerialize!” So he pries up the control lever and carries it back to his ship, after which his ship can also dematerialize.
Beautiful Probability:
It seems to me that the toolboxers are looking at the sequence of cubes {1, 8, 27, 64, 125, …} and pointing to the first differences {7, 19, 37, 61, …} and saying “Look, life isn’t always so neat—you’ve got to adapt to circumstances.” And the Bayesians are pointing to the third differences, the underlying stable level {6, 6, 6, 6, 6, …}. And the critics are saying, “What the heck are you talking about? It’s 7, 19, 37 not 6, 6, 6. You are oversimplifying this messy problem; you are too attached to simplicity.”
It’s not necessarily simple on a surface level. You have to dive deeper than that to find stability.
On the one hand, as a good person who cares about the feelings of others, you don’t want to call them out, make them feel stupid, and embarrass them. On the other hand… what if it’s in the name of intellectual progress?
Intellectual progress seems like it is more than enough to justify it. Under a veil of ignorance, I’d really, really prefer it.
And yet that doesn’t seem to do the trick. I at least still feel awkward using examples from real life in writing and cringe a little when I see others do so.
I think the example with the detached lever is Yudkowsky being overconfident. Come on, it is alien technology, way beyond our technical capabilities. Why should we assume that the mechanism responsible for dematerializing the ship is not in the lever? Just because humans would not do it that way? Maybe the alien ships are built in a way that makes them easy to configure on purpose. That would actually be the smart way to do it.
Somewhere, in a tribe that has seen an automobile for the first time, a local shaman is probably composing an essay on the Detached CD Player Fallacy.
Consider a proposition P. It is either true or false. The green line represents us believing with 100% confidence that P is true. On the other hand, the red line represents us believing with 100% confidence that P is false.
We start off not knowing anything about P, so we start off at point 0, right at that black line in the middle. Then, we observe data point A. A points towards P being true, so we move upwards towards the green line a moderate amount, and end up at point 1. After that we observe data point B. B is weak evidence against P. We move slightly further from the green line, but still above the black line, and end up at point 2. So on and so forth, until all of the data relevant to P has been observed, and since we are perfect Bayesians, we end up being 100% confident that P is, in fact, true.
Now, compare someone at point 3 to someone at point 4. The person at point 3 is closer to the truth, but the person at point 4 is further along.
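To make the picture concrete, here is a minimal sketch of such a trajectory in log-odds (the likelihood ratios for data points A through E are made up for illustration; only the shape matters):

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

log_odds = 0.0                       # point 0: maximum uncertainty about P
trajectory = [sigmoid(log_odds)]

# Each data point contributes the log of its likelihood ratio P(data | P) / P(data | not P).
evidence = [
    math.log(3.0),   # A: moderate evidence for P
    math.log(0.7),   # B: weak evidence against P
    math.log(0.5),   # C: more evidence against P (a temporary "valley")
    math.log(4.0),   # D: strong evidence for P
    math.log(10.0),  # E: very strong evidence for P
]

for e in evidence:
    log_odds += e
    trajectory.append(sigmoid(log_odds))

print([round(p, 2) for p in trajectory])
# [0.5, 0.75, 0.68, 0.51, 0.81, 0.98]
# Someone three updates in sits at ~0.51: further along than someone one update in
# (~0.75), but further from the truth, even though both updated correctly.
```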
This is an interesting phenomenon to me: the idea of being further along, but also further from the truth. I’m not sure exactly where to take this idea, but two thoughts come to mind.
The first thought is of valleys of bad rationality. As we make incremental progress, it doesn’t always make us better off.
The second thought is of how far along I actually am in my beliefs. For example, I am an atheist. But what if I had to debate the smartest theist in the world? Would I win that debate? I think I would, but I’m not actually sure. Perhaps they are further along than me. Perhaps I’m at point 3 and they’re at point 7.
I believe that similar to conservation of expected evidence, there’s a rule of rationality saying that you shouldn’t expect your beliefs to change back and forth too much, because that means there’s a lot of uncertainty about the factual matters, and the uncertainty should bring you closer to max entropy. Can’t remember the specific formula, though.
Good point. I was actually thinking about that and forgot to mention it.
I’m not sure how to articulate this well, but my diagram and OP were mainly targeted at gears-level models. Using the atheism example, the world’s smartest theist might have a gears-level model that is further along than mine. However, I expect that the world’s smartest atheist has a gears-level model that is further along than the world’s smartest theist.
In the rationality community people are currently excited about the LessOnline festival. Furthermore, my impression is that similar festivals are generally quite successful: people enjoy them, have stimulating discussions, form new relationships, are exposed to new and interesting ideas, express that they got a lot out of it, etc.
So then, this feels to me like a situation where More Dakka applies. Organize more festivals!
How? Who? I dunno, but these seem like questions worth discussing.
Trying to organize a festival probably isn’t risky. It doesn’t seem like it’d involve too much time or money.
I don’t think that’s true. I’ve co-organized one weekend-long retreat in a small hostel for ~50 people, and the cost was ~$5k. Me & the co-organizers probably spent ~50h in total on organizing the event, as volunteers.
I was envisioning that you can organize a festival incrementally, investing more time and money into it as you receive more and more validation, and that taking this approach would de-risk it to the point where overall, it’s “not that risky”.
For example, to start off you can email or message a handful of potential attendees. If they aren’t excited by the idea you can stop there, but if they are then you can proceed to start looking into things like cost and logistics. I’m not sure how pragmatic this iterative approach actually is though. What do you think?
Also, it seems to me that you wouldn’t have to actually risk losing any of your own money. I’d imagine that you’d 1) talk to the hostel, agree on a price, have them “hold the spot” for you, 2) get sign ups, 3) pay using the money you get from attendees.
Although now that I think about it, I’m realizing that it probably isn’t that simple. For example, the hostel cost ~$5k; maybe the money from the attendees would have covered it all, but maybe fewer attendees signed up than expected and the organizers ended up having to pay out of pocket.
On the other hand, maybe there is funding available for situations like these.
Back then I didn’t try to get the hostel to sign the metaphorical assurance contract with me, maybe that’d work. A good dominant assurance contract website might work as well.
I guess if you go camping together then conferences are pretty scalable, and if I was to organize another event I’d probably try to first message a few people to get a minimal number of attendees together. After all, the spectrum between an extended party and a festival/conference is fluid.
A line of thought that I want to explore: a lot of times when people appear to be close-minded, they aren’t actually being (too) close-minded. This line of thought is very preliminary and unrefined.
It’s related to Aumann’s Agreement Theorem. If you happen to have two perfectly Bayesian agents who are able to share information, then yes, they will end up agreeing. In practice people aren’t 1) perfectly Bayesian or 2) able to share all of their information. I think (2) is a huge problem. A huge reason why it’s hard to convince people of things.
Well, I guess what I’m getting at isn’t really close-mindedness. It’s just… suppose you disagree with someone on something. You list out a bunch of arguments for why the other person is wrong, and why they should adopt your belief. Arguments A, B, C, D, E… so on and so forth. It feels like you’ve listed out so many things, and they’re being stubborn in not changing their mind and admitting that you’re right. But actually, given the information they have, they’re often correct in not adopting your belief. Even if they were a perfect Bayesian, your arguments A through E just aren’t nearly enough. You’d need perhaps 100, maybe even 1,000 times more arguments to get a perfectly open-minded and Bayesian agent to start from the point where the other person started and end up agreeing with you.
Maybe it’s related to the illusion of transparency. There are all of these premises that you are assuming to be true. All of these data points. Subtle life experiences. Stuff like that. All of these things inform your priors. And it’s easy to assume that the other person shares the same data informing their priors. But they don’t. And so providing these data points is part of your job in arguing for your position. But it is often difficult to realize that this is part of your job.
Wait a minute: I think I’m basically trying to say the same thing as Expecting Short Inferential Distances. Sigh. Yeah, I think that’s pretty much it.
This is a pretty good example of something that happens a lot to me on LessWrong. I have some vague idea about something. Then I realize that someone on LessWrong (frequently Eliezer) has a great blog post about it that does a great job of crystalizing it, articulating it, and filling in the gaps for me. Usually it’s a very exciting and satisfying experience. Right now I’m a little a) disappointed in myself for not realizing this to begin with and b) disappointed that I don’t actually have a useful new thought to share. I’m also c) a little frustrated that I am experiencing (b).
You’d need perhaps 100, maybe even 1,000 times more arguments to get a perfectly open-minded and Bayesian agent to start from the point where the other person started and end up agreeing with you.
Modelling humans as Bayesian agents seems wrong.
For humans, I think the problem usually isn’t the number of arguments / number of angles from which you attacked the problem, but whether you have hit on the few significant cruxes for that person. This is especially because humans are quite far away from perfect Bayesians. For relatively small disagreements (i.e. not at the scale of convincing a Christian that God doesn’t exist), usually people just had a few wrong assumptions or cached thoughts. If you can accurately hit those cruxes, then you can convince them. It is very, very hard to know which arguments will hit those cruxes, though, and that is why one of the viable strategies is to keep throwing arguments until one of them works.
(Also unlike convincing Bayesian agents where you can argue for W->X, X->Y, Y->Z in any order, sometimes you need to argue about things in the correct order)
Suppose you identify a single crux A. Now you need to convince them of A. But convincing them of A requires you to convince them of A.1, A.2, and A.3.
Ok, no problem. You get started trying to convince them of A.1. But then you realize that in order to convince them of A.1, you need to first convince them of A.1.1, A.1.2, and A.1.3.
I think this sort of thing is often the case, and is how large inferential distances are “shaped”.
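A toy way to quantify that “shape”: if each claim needs a handful of sub-claims, the number of things you have to argue for grows geometrically with depth (the branching factor and depth here are hypothetical, purely for illustration):

```python
def claims_needed(branching: int, depth: int) -> int:
    """Total claims in a full tree where each claim needs `branching` sub-claims,
    down to `depth` levels below the original crux."""
    return sum(branching ** level for level in range(depth + 1))

print(claims_needed(3, 1))  # 4  -> A plus A.1, A.2, A.3
print(claims_needed(3, 3))  # 40 -> two more levels and you're arguing for 40 claims
```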
I think it’s generally agreed that pizza and steak (and a bunch of other foods) taste significantly better when they’re hot. But even if you serve it hot, usually about halfway through eating, the food cools enough such that it’s notably worse because it’s not hot enough.
One way to mitigate this is to serve food on a warmed plate. But that doesn’t really do too much.
What makes the most sense to me would be to serve smaller portions in multiple courses. Like instead of a 10" pie, serve two 5" pies. Or instead of a 16oz ribeye, divide it into four 4oz ribeyes and cook and serve each separately.
I guess this is what fancy restaurants already do with their multi-course meals though, with each course being a small amount of food. And I suppose serving more courses and getting them out at the right time, while they’re hot, is a good deal more difficult logistically. So I guess you need to charge a lot more. Which gets you into fancy restaurant territory.
But then again, lots of expensive, fancy steakhouses will serve a huge 16oz or even 24oz ribeye for $100+. And similarly, even the best pizza places will serve normal-sized pies as opposed to tapas-sized. Seems wrong.
Interesting puzzle. Some random thoughts: I’m not sure how much of the quality difference is “hot” vs “freshly prepared”—time under a heat lamp isn’t necessarily an improvement. The fact that buffet-style dining isn’t more popular is some evidence that most people don’t value this compared to their preferences for individually-prepared food.
Hot Pot and Brazilian Churrascaria are two cuisines that give fresh/hot servings on-demand. Oh, also the better sushi bars (not hot, but very fresh), and Benihana (or other Teppanyaki or Mongolian-grill places). I love all of these, but it seems they’re more popular for the cuisine and flavors, and to some extent the spectacle and novelty, and not so much “good normal food, fresher than a standard restaurant”.
I suspect all this is evidence that for most people, for most meals, there’s a threshold of freshness rather than an optimization function. Being “fresh enough”, while staying convenient, affordable, and/or “what I’m in the mood for” is what most places deliver because it’s what most people want. The last bite of steak is warm rather than hot, and the last slice of pizza is getting toward lukewarm, but it’s still good stuff that I’m happy to eat.
I’m not sure how much of the quality difference is “hot” vs “freshly prepared”—time under a heat lamp isn’t necessarily an improvement.
Ah, that’s a good distinction. I think that what matters is usually “freshly prepared”.
the better sushi bars (not hot, but very fresh)
Oh interesting. I didn’t know that was the case.
I suspect all this is evidence that for most people, for most meals, there’s a threshold of freshness rather than an optimization function. Being “fresh enough”, while staying convenient, affordable, and/or “what I’m in the mood for” is what most places deliver because it’s what most people want. The last bite of steak is warm rather than hot, and the last slice of pizza is getting toward lukewarm, but it’s still good stuff that I’m happy to eat.
Yeah, I think so too. And more generally, people just aren’t very choose-y about their food, much less willing to pay lots of money for it. So I guess that’s probably it.
Also, if there were an inefficiency here, a restaurant trying to exploit it doesn’t have a huge market to profit from. The market would be restricted to the local area. And people only frequent expensive restaurants so often. So yeah, there probably aren’t many, if any, metaphorical dollar bills lying on the ground.
But… I suspect that there are “foodie points” up for grabs. Like, I suspect that serving four 4oz ribeyes hot really is a notably better experience for foodie-types, and a restaurant that pursued this would get respect amongst foodies.
Not directly tied to the core of what you’re saying, but I will note that I am an example of someone who doesn’t strongly prefer such foods warm. I do weakly prefer them warm, as long as they’re not too hot (that’s worse than them being cold, because it hurts / causes minor injury), but I’m happy eating them at room temperature or a bit cold (not necessarily cold steak though).
(I bet you also like your steaks medium-well. Just kidding.)
I’m curious: is this a case of you not having strong preferences about food in general? Or is it the case that you do generally have strong preferences about food, but don’t strongly prefer such foods being warm? (Not that those are the only two options, it’s just easier to phrase it this way.)
I run into something that I find somewhat frustrating. When I write text messages to people, they’re often pretty long. At least relative to the length of other people’s text messages. I’ll write something like 3-5 paragraphs at times. Or more.
I’ve had people point this out as being intimidating and a lot to read. That seems odd to me though. If it were an email, it’d be a very normal-ish length, and wouldn’t feel intimidating, I suspect. If it were a blog post, it’d be quite short. If it were a Twitter thread, it’d be very normal and not intimidating. If it were a handwritten letter, it’d be on the shorter side.
So then, why does the context of it being a text message make it particularly intimidating when the same amount of words isn’t intimidating in other contexts? Possibly because it takes longer to type on a cell phone, but that doesn’t explain the phenomenon when conversations are happening on Signal or WhatsApp (I kinda consider those all to be “text messages” in my head along with SMS). I also run into it on Slack and Discord.
Gmail displays long messages better than e.g. Signal, even on my laptop. And I often do find the same email feels longer when I read it on my phone than my laptop.
Gmail also makes it easy to track long messages I want to delay responses to. Texts feel much more “respond RIGHT NOW or forget about it forever”
Gmail displays long messages better than e.g. Signal, even on my laptop. And I often do find the same email feels longer when I read it on my phone than my laptop.
Hm. Do you think this is due to readability or norms? I’d say I’m roughly 80% confident it’s norms.
Gmail also makes it easy to track long messages I want to delay responses to. Texts feel much more “respond RIGHT NOW or forget about it forever”
I also suspect that this is due to norms rather than functionality. For example, Gmail (and other mail clients) let you mark things as unread and organize them in folders. However, it seems easy enough to scroll through your text messages (or Signal, or WhatsApp...), see if you were the last person to respond or not, and if not, whether their last message feels like the end of the conversation.
I think it’s at least partially readability. Signal won’t give a given line more than half my screen, where gmail will go up to 80% (slack and discord are similar). I don’t use the FB messenger app, but the webapp won’t give a line more than half the width of the screen.
However, it seems easy enough to scroll through your text messages (or Signal, or WhatsApp...), see if you were the last person to respond or not, and if not, whether their last message feels like the end of the conversation.
I think this is way more work than looking at “what’s still in my inbox?”, and rapidly becomes untenable as the number of messages or delay in responding increases.
Hmm, I have never thought that a message from another person is too long. But I think my messages are sometimes too long. I once wrote a message on Discord that was iirc over 8000 characters long. I think that was a bit too much but for a different reason. It interrupted the flow of the conversation just too much and did not enable enough back and forth.
Let me ask you a question. How confident are you that Bob is doing good? Not very confident, right? But why not? After all, Bob did say that he is doing good. And he’s not particularly well known for being a liar.
I think the thing here is to view Bob’s words as Bayesian evidence. They are evidence of Bob doing good. But how strong is this evidence? And how do we think about such a question?
Let’s start with how we think about such a question. I think the typical Bayesian approach is pretty practical here. Ask yourself how likely Bob would say “good” when he is doing good. Ask yourself how likely he would say it when he isn’t.
I think most people tend to say “good” if their hedonic state is something like between 10th percentile and 90th. If it’s 5-10th percentile my model says people will usually say something like what Alice said: “not doing so well”. If it’s 0-5th maybe they’ll say “I’m actually really struggling”. And similarly for 90+ percentile. It depends though. But with this model, I think we can take Bob’s claim as some sort of solid evidence that he is, uh, doing fine, and perhaps weak evidence that he is leaning towards actually feeling good. But now looking at Alice, according to my model, it’s actually pretty strong evidence that she is not doing well.
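Here is a minimal sketch of that calculation under the percentile model above (the reply bands come from my model; the split of “struggling” = bottom 10% vs “fine” = everything else is an extra assumption, added just for illustration):

```python
def p_reply(reply_band, state_band):
    """P(reply | hedonic percentile uniform within state_band), assuming the
    reply is fully determined by which band the percentile lands in."""
    lo, hi = max(reply_band[0], state_band[0]), min(reply_band[1], state_band[1])
    return max(0.0, hi - lo) / (state_band[1] - state_band[0])

GOOD = (10, 90)          # "good"
NOT_SO_WELL = (5, 10)    # "not doing so well"
STRUGGLING = (0, 10)     # assumed definition of "not doing well"
FINE = (10, 100)         # everything else

# Likelihoods of each reply under each hypothesis:
print(p_reply(GOOD, FINE), p_reply(GOOD, STRUGGLING))
# ~0.89 vs 0.0 -> Bob saying "good" is solid evidence that he's at least fine
print(p_reply(NOT_SO_WELL, FINE), p_reply(NOT_SO_WELL, STRUGGLING))
# 0.0 vs 0.5 -> Alice's reply is strong evidence that she is not doing well
```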
Maybe all of this seems obvious to you. If so, good. But why would I write something if it’s so obvious? Idk. I just have been finding myself tempted to interpret words literally instead of thinking about how strong they are as Bayesian evidence, and I think that other rationalists/people do this quite often as well.
PS: This is hinted at quite often in HPMoR. Perhaps other rationalist-fic as well. Ie. an exchange like:
Quirrell: [Asks Harry a question]
Harry: [Pauses momentarily]
Quirrell: I see.
Harry: Damn! I basically just told him X by pausing because pausing is strong Bayesian evidence of X.
PPS: This is really just a brain dump. I’d love to see someone write this up better than I did here.
I notice I’m confused. I don’t actually know what it would mean (what predictions I’d make or how I’d find out if I were correct about) for Bob to be “doing good”. I don’t think it generally means “instantaneous hedonic state relative to some un-tracked distribution”, I think it generally means “there’s nothing I want to draw your attention to”. And I take as completely obvious that the vast majority of social interactions are more contextual and indirect than overt legible information-sharing.
This combines to make me believe that it’s just an epistemic mistake to take words literally most of the time, at least without a fair bit of prior agreement and contextual sharing about what those words mean in that instance.
I agree that thinking of it as a Bayesian update is often a useful framing. However, the words are a small part of evidence available to you, and since you’re human, you’ll almost always have to use heuristics and shortcuts rather than actually knowing your priors, the information, or the posterior beliefs.
I think it generally means “there’s nothing I want to draw your attention to”.
Agreed.
This combines to make me believe that it’s just an epistemic mistake to take words literally most of the time, at least without a fair bit of prior agreement and contextual sharing about what those words mean in that instance.
Agreed.
And I take as completely obvious that the vast majority of social interactions are more contextual and indirect than overt legible information-sharing.
I think the big thing I disagree on is that this is always obvious. Thought of in the abstract like this I guess I agree that it is obvious. However, I think that there are times when you are in the moment where it can be hard to not interpret words literally, and that is what inspired me to write this. Although now I am realizing that I failed to make that clear or provide any examples of that. I’d like to provide some good examples now, but it is weirdly difficult to do so.
However, the words are a small part of evidence available to you, and since you’re human, you’ll almost always have to use heuristics and shortcuts rather than actually knowing your priors, the information, or the posterior beliefs.
Agreed. I didn’t mean to imply otherwise, even though I might have.
There’s a concept I want to think more about: gravy.
Turkey without gravy is good. But adding the gravy… that’s like the cherry on top. It takes it from good to great. It’s good without the gravy, but the gravy makes it even better.
An example of gravy from my life is starting a successful startup. It’s something I want to do, but it is gravy. Even if I never succeed at it, I still have a great life. Eg. by default my life is, say, a 7/10, but succeeding at a startup would be so awesome it’d make it a 10/10. But instead of this happening, my brain pulls a trick: it says “You need to succeed at this. When you do, I’ll allow you to feel normal, a 5/10 happiness. But along the way there, I’m going to make you feel 2/10.”
Maybe I’m more extreme than average here, but I think that this is a human thing, not a me-thing. It seems to be the norm when people pursue hard goals for them to feel this way. The rule, not the exception.
“You should have deduced it yourself, Mr Potter,” Professor Quirrell said mildly. “You must learn to blur your vision until you can see the forest obscured by the trees. Anyone who heard the stories about you, and who did not know that you were the mysterious Boy-Who-Lived, could easily deduce your ownership of an invisibility cloak. Step back from these events, blur away their details, and what do we observe? There was a great rivalry between students, and their competition ended in a perfect tie. That sort of thing only happens in stories, Mr Potter, and there is one person in this school who thinks in stories. There was a strange and complicated plot, which you should have realized was uncharacteristic of the young Slytherin you faced. But there is a person in this school who deals in plots that elaborate, and his name is not Zabini. And I did warn you that there was a quadruple agent; you knew that Zabini was at least a triple agent, and you should have guessed a high chance that it was he. No, I will not declare the battle invalid. All three of you failed the test, and lost to your common enemy.”
- HPMoR Chapter 35
I really, really like this idea. Squint. Blur away the details. What do you see?
Squint. Blur away the details. Forget about React. Forget about NextJS. Forget about front end web development, or even software development more generally.
Observe that there is one organization that offers a popular product that has been around for a while. Observe that there is another organization that is trying, and succeeding, at becoming large and popular, and that depends on the first organization’s product. Observe that the second organization is allocating lots and lots of resources towards helping the first organization.
How do you expect the first organization to respond? Well, I would expect them to feel sorta dependent on the second organization. I would expect them to cater somewhat heavily to the second organization’s needs. And I would expect both organizations to try to hide the fact that this is happening.
I wish I had more good examples of how to use this skill of squinting. I’d love to see other people write more about it.
I’m listening to Eric Normand’s reading of Out of the Tar Pit. The paper Out of the Tar Pit kinda feels like it is saying, “complexity is the enemy in software projects, and here is the best way to tame it”.
When I squint, I don’t see software development. I see a field of engineering. A very complicated one. One that has been around for maybe 50 years. And I see someone making a claim about the best way to succeed in the field.
Looking through this lens, I feel a large amount of skepticism.
As a programmer, compared to other programmers, I am extremely uninterested in improving the speed of web apps I work on. I find that (according to my judgement) it rarely has more than a trivial impact on user experience. On the other hand, I am usually way more interested than others are in things like improving code quality.
I wonder if this has to do with me being very philosophically aligned with Bayesianism. Bayesianism preaches updating your beliefs incrementally, whereas the alternative is a lot more binary. For example, the way scientific experiments work, your p-value either passes the (arbitrary) threshold, or it doesn’t, so you either reject the null, or fail to reject the null, a binary outcome.
Perhaps people are frequently uninterested in subjective things like improving code quality or usability because it is hard to get a sort of “statistically significant” amount of evidence to say stuff like “this code quality improvement is having this level of impact”, and so people default to “fail to reject the null”. On the other hand, a more Bayesian way of thinking about it is to just do your best to make a judgement, and shift your beliefs accordingly.
For things like performance optimization, the results are pretty objective. You can run an analysis and see that eg. rendering was sped up by 75ms, and so you can “reject the null” pretty easily and conclude that there is a real, concrete benefit.
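Here is a minimal sketch of the contrast (all the numbers, including the likelihood ratios and the “significance” cutoff, are made up for illustration):

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Ten weak, subjective observations that the refactor helped, each worth a
# likelihood ratio of 1.5 in favor of "the code quality work has real impact".
weak_evidence = [1.5] * 10

# Threshold style: no single observation clears a high bar, so conclude nothing.
strong_enough = [lr for lr in weak_evidence if lr >= 20]  # loose stand-in for "significant"
print(len(strong_enough))  # 0 -> "fail to reject the null"

# Incremental style: start at 50% and update on every observation.
belief = logit(0.5) + sum(math.log(lr) for lr in weak_evidence)
print(round(sigmoid(belief), 2))  # ~0.98 -> the weak evidence adds up
```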
Speed improvements are legible (measurable), although most people are probably not measuring them. Sometimes that’s okay; if the app is visibly faster, I do not need to know the exact number of milliseconds. But sometimes it’s just a good feeling that I “did some optimization”, ignoring the fact that maybe I just improved from 500 to 470 milliseconds some routine that is only called once per day. (Or maybe I didn’t improve it at all, because the compiler was already doing the thing automatically.)
Code quality is… well, from the perspective of a non-programmer (such as a manager) probably an imaginary thing that costs real money. But here, too, are diminishing returns. Changing spaghetti code to a nice architecture can dramatically reduce future development time. But if a function is thoroughly tested and it is unlikely to be changed in the future (or is likely to be replaced by something else), bringing it to perfection is probably a waste of time. Also, after you fixed the obvious code smell, you move to more controversial decisions. (Is it better to use a highly abstract design pattern, or keep the things simple albeit a little repetitive?)
I’d say, if the customer complains, increase the speed; if the programmers complain, refactor the code. (Though there is an obvious bias here: you are the programmer, and in many companies you won’t even meet the customer.)
I’d wager that customers (or users) won’t complain about slow code, especially if there are many customers, for the same reason that most people don’t send emails with corrections of typos on most online posts.
For example, the way scientific experiments work, your p-value either passes the (arbitrary) threshold, or it doesn’t, so you either reject the null, or fail to reject the null, a binary outcome.
Ritualistic hypothesis testing with significance thresholds is mostly used in the social sciences, psychology and medicine, and not so much in the hard sciences (although arbitrary thresholds like 5 sigma are used in physics to claim the discovery of new elementary particles, they rarely show up in physics papers). Since it requires deliberate effort to get into the mindset of the null ritual, I don’t think that technical and scientific-minded people just start thinking like this.
I think that the simple explanation that the effect of improving code quality is harder to measure and communicate to management is sufficient to explain your observations. To get evidence one way or another, we could also look at what people do when the incentives are changed. I think that, in personal projects, few people are more likely to make small performance improvements than to improve code quality.
I’ve had success with something: meal prepping a bunch of food and freezing it.
I want to write a blog post about it—describing what I’ve done, discussing it, and recommending it as something that will quite likely be worthwhile for others as well—but I don’t think I’m ready. I did one round of prep that lasted three weeks or so and was a huge success for me, but I don’t think that’s quite enough “contact with reality”. I think there’s a risk that, after more “contact with reality”, it proves to be not nearly as useful as it currently seems. So yeah, I think I’m gonna wait at least another month or two and see how it’s going then.
Why do I think it’s working well now though? Previously I’ve tried meal prepping. Y’know, the type of thing where you cook in bulk on Sunday and have meals for the week. One issue though is that I somehow just never end up with enough food. Only a few days worth of food, maybe. Part of it is because it’s hard to genuinely cook that much, but another part of it is not wanting the food to sit in the fridge for too long and go bad. Idk.
Another thing is that the traditional meal prep requires you to cook pretty frequently. Every couple of days. Maybe once a week if you’re good enough at it. But I just have a ton of trouble with this. I cook on Sunday. Wednesday rolls around. I notice I’m getting low on the food and need to prep more. I have stuff going on though, so I postpone till Thursday. More stuff going on. More postponing. It’s Friday. I have something else going on. So I whip together some pasta (unhealthy). Or go out to eat. Something like that ends up happening. But when I can cook for weeks or months at a time, I dunno, somehow it kinda solves that problem.
I also think there are like real time saving benefits. Ie. cooking 20 portions of chicken doesn’t take 2x the time as cooking 10 portions. Maybe it’s like 1.3x the time.
And I get into a nice groove when I’m cooking a ton of food. I know I’ll be in the kitchen for many hours. I put on some podcasts. Idk. I’m more able to work that way.
There’s also something psychologically very nice about knowing that I have weeks and weeks of food in the freezer. In portion-sized containers that I can microwave whenever I want and start eating in minutes.
And, uh, I’m kinda proud of myself for being self-sufficient.
One hangup I previously had was uncertainty about “Can I freeze this ingredient? What about that ingredient?”. I think that’s a big reason why I never tried cooking crazy amounts of food in bulk and freezing it before. But then I realized, “Y’know what, why don’t I just try it? Cook a small portion, freeze it, warm it up, see how it tastes. Google for food safety things. If it works out, try a big portion.” In retrospect it’s pretty silly that I spent so much time hitting the Think About It button instead of the Try It And See What Happens button.
I’ve gotta vent a little about communication norms.
My psychiatrist recommended a new drug. I went to take it last night. The pills are absolutely huge and make me gag. But I noticed that the pills look like they can be “unscrewed” and the powder comes out.
So I asked the following question (via chat in this app we use):
For the NAC, the pill is a little big and makes me gag. Is it possible to twist it open and pour the powder on my tongue? Or put it in water and drink it?
The psychiatrist responded:
Yes it seems it may be opened and mixed into food or something like applesauce
The main thing I object to is the language “it seems”. Instead, I think “I can confirm” would be more appropriate.
I think that it is, here and frequently elsewhere, a motte-and-bailey. The bailey being “yes, I confirm that you can do this” and the motte being “I didn’t say it’d definitely be ok, just that it seems like it’d be ok”.
Well, that’s not quite right. I think it’s more subtle than that. If consuming the powder led to issues, I do think the psychiatrist would take responsibility, and be held responsible if there were any sort of legal thing, despite the fact that she used the qualifier “it seems”. So I don’t think that she was consciously trying to establish a motte that she can retreat to if challenged. Rather, I think it is more subconscious and habitual.
This seems like a bad epistemic habit though. Or, perhaps I should say, I’m pretty confident that it is a bad epistemic habit. I guess I have some work to do in countering it as well.
Here’s another example. I listen to the Thinking Basketball podcast. I notice that the cohost frequently uses the qualifier “necessarily”. As in, “Myles Turner can’t necessarily create his own shot”. What he means by that is “Myles Turner isn’t very good at creating his own shot”. This too I think is mostly habitual and subconscious, as opposed to being a conscious attempt to establish a motte that he can retreat to.
The way the psychiatrist phrased it made me mentally picture that they weren’t certain, went to review the information on the pill, and came back to relay their findings based on their research, if that helps with possible connotations. The extended implied version would be “I do not know. I am looking it up. The results of my looking it up are that, yes, it may be opened and mixed into food or something like applesauce.”
Your suggested replacement, in contrast, has a light layer of the connotation “I know this, and answer from my own knowledge,” though less so than just stating “It may be opened and mixed into food or something like applesauce.” without the prelude.
From my perspective, the more cautious and guarded language might have been precisely what they meant to say, and has little to do with a fallacy. I am not so confident that you are observing a bad epistemic habit.
Ah, I see. That makes sense and changes my mind about what the psychiatrist probably meant. Thanks.
(Although it raises the new complaint of “I’m asking because I want confirmation, not moderate confidence, and you’re the professional who is supposed to provide that confirmation”, but that’s a separate thing.)
In places like Hacker News and Stack Exchange, there are norms that you should be polite. If you said something impolite and Reddit-like such as “Psh, what a douchebag”, you’d get flagged and disciplined.
But that’s only one form of impoliteness. What about subtextual impoliteness? I think subtextual impoliteness is important too. Similarly important. And I don’t think my views here are unique.
I get why subtextual impoliteness isn’t policed though. Perhaps by definition, it’s often not totally clear what the subtext behind a statement is. So if you try to warn someone about subtextual impoliteness, they can always retreat to the position that you misinterpreted them (and were uncharitable).
One possible way around this would be to have multiple people vote on what the subtext is, but that sounds pretty messy. I expect it’d lead to a bunch of nasty arguments and animosity.
Another possible way around it is to… ask nicely? Like, “I’m not going to police you, but please be aware of the idea of subtext and try to keep your subtext polite.” I don’t see that working though. It’s an obvious enough thing that it doesn’t actually need saying. Plus I get the sense that many communities currently have stuff like this, and they are mostly ignored.
So are we just stuck with no good path forward? Meh, probably. I at least don’t see a good path forward.
At least in situations where you have no leverage. In situations like friendships and certain work relationships, if you find someone to be subtextually impolite, you can be less friendly towards them. I think that leverage is a large part of what pushes people to be subtextually polite in the first place (study on politeness in elevators vs cars).
Can you give a few examples (in-context on HN or Stack Exchange) of subtextual impoliteness that you wish were enforceable? It’s unfortunate but true that the culture/norm of many young-male-dominated technical forums can’t distinguish direct factual statements from aggressive framing.
I generally agree with “no good path forward” as an assessment: the bullies and insecure people who exist everywhere (even if not the majority) are very good at finding loopholes and deniable behaviors in any legible enforcement framework.
“Please be kind” works well in many places, or “you may be right, but that hurt my feelings”. But really, that requires high-trust to start with, and if it’s not already a norm, it’s very difficult to make it one.
Can you give a few examples (in-context on HN or Stack Exchange) of subtextual impoliteness that you wish were enforceable?
Here are two: 1, 2. /r/poker is also littered with it. Example.
I’m failing to easily find examples on Stack Exchange but I definitely know I’ve come across a bunch. Some that I’ve flagged. I tried looking for a way to see a list of comments you’ve flagged, but I wasn’t able to figure it out.
Thanks—yeah, those seem mild enough that I doubt there’s any possible mechanism to eliminate the snarky/rude/annoying parts, at least in a group much larger than Dunbar’s number with no additional social filtering (like in-person requirements for at least some interactions, or non-anonymous invite/expulsion mechanisms).
Life decision that actually worked for me: allowing myself to eat out or order food when I’m hungry and pressed for time.
I don’t think the stress of frantically trying to get dinner together is worth the costs in time or health. And after listening to this podcast episode, I suspect that, I’m not sure how to say this: “being overweight is bad, but, like, it’s not that bad, and stressing about it is also bad since stress is bad, all of this in such a way that stressing out over being marginally more overweight is worse for your health than being a little more overweight”.
Something I do want to actually do though is to have a bunch of meals that I meal prep, freeze, and can warm up easily in the microwave. I want these meals to be healthy and at least somewhat affordable. And when these meals are actually available, I don’t really endorse myself eating out or ordering food.
At that point I want to save the eating out for places that are really, really good. Not just kinda good. Good enough to wow you. Definitely better than I can make at home. Eating out is pretty expensive and unhealthy. But on the other hand, I do really, really enjoy it and have lots of great places to eat here in Portland.
I think that, for programmers, having good taste in technologies is a pretty important skill. A little impatience is good too, since it can drive you to move away from bad tools and towards good ones.
These points seem like they should generalize to other fields as well.
Imagine that Alice is talking to Bob. She says the following, without pausing.
That house is ugly. You should read Harry Potter. We should get Chinese food.
We can think of it like this. Approach #1:
At t=1 Alice says “That house is ugly.”
At t=2 Alice says “You should read Harry Potter.”
At t=3 Alice says “We should get Chinese food.”
Suppose Bob wants to respond to the comment of “That house is ugly.” Due to the lack of pauses, Bob would have to interrupt Alice in order to get that response in. On the other hand, if Alice paused in between each comment, we can consider that Approach #2:
t=1: Alice says “That house is ugly.”
t=2: Alice pauses.
t=3: Alice says “You should read Harry Potter.”
t=4: Alice pauses.
t=5: Alice says “We should get Chinese food.”
then Bob wouldn’t have to interrupt if he wanted to respond.
Let’s call Approach #1 an inverted interruption. It forces the other person to interrupt if they have something to say.
I think inverted interruptions are something to be careful about. Not that they’re always bad, just that they should be kept in mind and considered in order to make communication both fun and effective.
Another example I ran into last night: at around 42:15 in this podcast episode, in one breath, Nate Duncan switches from talking about an NBA player named Fred VanVleet to an NBA player named Dillon Brooks, in such a way that it didn’t give his cohost, Danny Leroux, a chance to say something about Fred VanVleet.
Is there anything stopping you from commenting on ticket ABC-501 after the speaker stopped at t=3? “Circling back to ABC-501, I think we need to discuss how we haven’t actually met the user’s....”
That should only be awkward if your comment is superfluous.
I think that sometimes that sort of thing works. But other times it doesn’t. I’m having some trouble thinking about when exactly it does and doesn’t work.
One example of where I think it doesn’t is if the discussion of ABC-501 took 10 minutes, ABC-502 took another 10 minutes, ABC-503 takes another 10 minutes, and then after all of that you come back to ABC-501.
If you have a really important comment about ABC-501 then I agree it won’t be awkward, but if you have like a 4/10 importance comment, I feel like it both a) would be awkward and b) passes the threshold of being worth noting.
There’s the issue of having to “hold your comment in your head” as you’re waiting.
There’s the issue of lost context. People might have the context to understand your comment in the moment, but might have lost that context after the discussion of ABC-503 finished.
I think I notice that people use placeholder words like “um” and “uh” in situations where they’d otherwise pause, in order to prevent others from interjecting, because the speaker wants to continue saying what they want to say without being interrupted. I think this is subconscious though. (And not necessarily a bad thing.)
Something that I run into, at least in normie culture, is that writing (really) long replies to comments has a connotation of being contentious, or even hostile (example). But what if you have a lot to say? How can you say it without appearing contentious?
I’m not sure. You could try to signal friendliness by using lots of smiley faces and stuff. Or you could be explicit about it and say stuff like “no hard feelings”.
Something about that feels distasteful to me though. It shouldn’t need to be done.
Also, it sets a tricky precedent. If you start using smiley faces when you are trying to signal friendliness, what happens if next time you avoid the smiley faces? Does that signal contentiousness? Probably.
In the field of AI we talk about capabilities vs alignment. I think it is relevant outside of the field of AI though.
I’m thinking back to something I read in Cal Newport’s book Digital Minimalism. He talked about how the Amish aren’t actually anti-technology. They are happy to adopt technology. They just want to make sure that the technology actually does more good than harm before they adopt it.
And they have a neat process for this. From what I remember, they first start by researching it. Then they have small groups of people experiment with it for some amount of time. Then larger groups. Something like that.
On the other hand, the impression I get is that we assume (or at least strongly tend to assume) that an increase in capabilities is automatically a good thing. For example, if there is some advancement made in the field of physics where we better understand subatomic particles, the thought process is that this is exciting because down the line that theoretical understanding will lead to cool new technologies that improve our lives. This strikes me as being similar to the planning fallacy though: focusing on the “happy path” where things go the way you want them to go, and failing to think about the scenario where unexpected, bad things happen. Like next-gen nuclear weapons.
Speaking very generally, to me, it is very frequently not obvious whether capabilities improvements are actually aligned with our values and I’m not particularly excited when I hear about advancements in any given field.
From my perspective, part of the issue with this post is that I notice a type error when it talks about capabilities improvements being aligned with our values.
The question is, which values, and whose values, are we talking about? Admittedly this is a common issue with morality, but in the case of capabilities research it matters, because “aligning it to our values” is too vague to make sense. We need to go deeper and get more concrete, so that we can talk about specifically which capabilities research we want aligned to which values.
Yeah, I do agree that “values” is ambiguous. However, I think that is ok for the point that I’m making about capabilities vs alignment. Even though people don’t fully agree on values, paying more attention to alignment and being more careful about capabilities advancements still seems wise.
A few of my posts actually seem like they’ve been useful to people. OTOH, a large majority don’t.
I don’t have a very good ability to discern this from the beginning though. Given this situation, it seems worth “spreading the seed” pretty liberally. The chance of it being a useful idea usually outweighs the chance that it mostly just adds noise for people to sift through. Especially given the fact that the LW team encourages low barriers for posting stuff. Doubly especially as shortform posts. Triply especially given that I personally enjoy writing and sometimes benefit from the feedback of others.
Feels a little counterintuitive though. Or maybe just scary. I’m not a shy person when it comes to this sort of stuff but even for me I hesitate and think “Is this worth posting? Is it gonna be terrible and just add noise?”
I’d guess that I’m maybe 95th percentile or something in terms of how not reluctant I am to post (only 5% of people are less reluctant) and I think I am still too reluctant. I can’t think of any examples of people who seem like they should be more reluctant. jefftk comes to mind as someone who is extremely not reluctant, but even for him I’m totally happy with the almost daily posts and would probably appreciate being exposed to even more of his thoughts.
The basic idea is that your trust battery is pre-charged at 50% when you’re first hired or start working with someone for the first time. Every interaction you have with your colleagues from that point on then either charges, discharges, or maintains the battery—and as a result, affects how much you enjoy working with them and trust them to do a good job.
The things that influence your trust battery charge vary wildly—whether the other person has done what they said they’ll do, how well you get on with that person, whether your opinions of that person are biased by others you have a high-trust relationship with, and lots more.
I think it’s important to note that trust batteries don’t always start off at 50%. In fact, starting at 50% is probably pretty rare.
Consider this example: you begin working at a new company, Widget Corp. Widget Corp says that they treat all of their employees as if they were family. That is a very common thing for companies to claim, and yet very few of them actually mean it or anything close to it.
So then, at least in this context, I don’t think the trust battery starts off at 50%. I think it starts off at something more like 1%. And when trust batteries are low, you have to do more to persuade, just like how strong priors take more evidence to move.
I feel like this isn’t well understood though. I observe a lot of statements similar to “we treat all of our employees like family” without follow-up statements like “we also know that you don’t have reason to believe us, and so here is an attempt to provide more evidence and actually be a little bit convincing”. Some of the time it’s surely because the former statement is some sort of weird simulacra level 3/4 type of thing, but a decent chunk of the time I think it’s at level 1/2 and there is a genuine failure to recognize that the latter follow-up statement is very much needed.
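To make the battery-as-prior analogy concrete, here is a minimal sketch of the Bayes update in odds form (the numbers are made up for illustration):

```python
def posterior(prior: float, likelihood_ratio: float) -> float:
    """Bayes update in odds form."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

lr = 4  # one moderately convincing, concrete "we treat you like family" gesture
print(round(posterior(0.50, lr), 2))  # 0.8  -> a half-charged battery moves a lot
print(round(posterior(0.01, lr), 2))  # 0.04 -> the same gesture barely dents a 1% prior
```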
Epistemic status: Babbling. I don’t have a good understanding of this, but it seems plausible.
Here is my understanding. Before science was a thing, people would derive ideas by theorizing (or worse, from the Bible). It wasn’t very rigorous. They would kinda just believe things willy-nilly (I’m exaggerating).
Then science came along and was like, “No! Slow down! You can’t do that! You need to have sufficient evidence before you can justifiably believe something like that!” But as Eliezer explains, science is too slow. It judges things as pass-fail instead of updating incrementally. It wants to be very sure before it acknowledges something as “backed by science”.
I suspect that this attitude stems from reversing the stupidity that preceded science. And now that I think about it, a lot of ideas seem to stem from reversed stupidity. Perhaps we should be on the lookout for this more, and update our beliefs accordingly in the opposite direction.
I was just listening to the Why Buddhism Is True episode of the Rationally Speaking podcast. They were talking about what the goal of meditation is. The interviewee, Robert Wright, explains:
the Buddha said in the first famous sermon, he basically laid out the goal, “Let’s try to end suffering.”
What an ambitious goal! But let’s suppose that it was achieved. What would be the implications?
Well, there are many. But one that stands out to me as particularly important, as well as ignored, is that it might be a solution to existential risk. Maybe if people were all happy, they’d be inclined to sit back, take a deep breath, stop fighting, take their foot off the gas, and start working towards solutions to existential risks.
Just as you can look at an arid terrain and determine what shape a river will one day take by assuming water will obey gravity, so you can look at a civilization and determine what shape its institutions will one day take by assuming people will obey incentives.
There’s been talk recently about there being an influx of new users to LessWrong and a desire to prevent this influx from harming the signal-to-noise ratio on LessWrong too much. I wonder: what if it cost something like $1 to make an account? Or $1/month? Some trivial amount of money that serves as a filter for unserious people.
This doesn’t work worldwide, so probably a nightmare to set up in a way that handles all the edge cases. Also, destitute students and trivial inconveniences.
This doesn’t work worldwide, so probably a nightmare to set up in a way that handles all the edge cases.
Why is that? My impression is that eg. 1 USD to make an account would be a trivial amount for people no matter the country or socioeconomic status (perhaps with a few rare exceptions).
trivial inconveniences
I think of this as more of a feature than a bug. There’d be some people it’d filter out who we would otherwise have wanted, but the benefits seem to me like they’d outweigh that cost.
Man I think I am providing value to the world by posting and commenting here. If it cost money I would simply stop posting here, and not post anywhere else.
The value flows in both directions. I’m fine not getting paid but paying is sending a signal of “what you do here isn’t appreciated”.
(Maybe I’d feel different if the money was reimbursed to particularly good posters? But then Goodhart’s law.)
The importance of tutoring, in its more narrow definition as in actively instructing someone, is tied to a phenomenon known as Bloom’s 2-sigma problem, after the educational psychologist Benjamin Bloom who in the 1980s claimed to have found that tutored students
. . . performed two standard deviations better than students who learn via conventional instructional methods—that is, “the average tutored student was above 98% of the students in the control class.”
Simply put, if you tailor your instruction to a single individual, you can make it fit so much better to their minds, so that the average person, if tutored, would become top two in a class of a hundred. The truth is a little bit more complicated than that (and I recommend Nintil’s systematic review of the research if you want to get into the weeds), but the effect is nevertheless real and big. Tutoring is a more reliable method to impart knowledge than lectures. It is also faster.
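As a quick sanity check on the “two standard deviations ≈ above 98% of the control class” figure quoted above, here is the standard normal CDF at z = 2 (this just restates what two sigma means under a normal distribution; it says nothing about whether the effect itself replicates):

```python
import math

def normal_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(round(normal_cdf(2.0), 3))  # 0.977 -> roughly the 98th percentile
```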
I wonder what the implications of this are for AI safety, and EA more generally? How beneficial would it be to invest in making some sort of tutoring ecosystem available to people looking to get into the field, or to advance from where they currently stand?
Nonfiction books should be at the end of the funnel
Books take a long time to read. Maybe 10-20 hours. I think that there are two things that you should almost always do first.
Read a summary. This usually gives you the 80⁄20 and only takes 5-10 minutes. You can usually find a summary by googling around. Derek Sivers and James Clear come to mind as particularly good resources.
Listen to a podcast or talk. Nowadays, from what I can tell, authors typically go on a sort of podcast tour before releasing a book in order to promote it. I find that this typically serves as a good ~hour long overview of the important parts of the book. For more prominent authors, sometimes they’ll also give a talk—eg. Talks at Google—after the book is released.
I think it really depends on your reading speed. If you can read at 500 wpm, then it’s probably faster for you to just read the book than search around for a podcast and then listen to said podcast. I do agree, though, that reading a summary or a blog about the topic is often a good replacement for reading an entire book.
then it’s probably faster for you to just read the book than search around for a podcast and then listen to said podcast
I’m having trouble seeing how that’d ever be the case. In my experience searching for a podcast rarely takes more than a few minutes, so let’s ignore that part of the equation.
If a book normally takes 10 hours to read, let’s say you’re a particularly fast reader and can read 5x as fast as the typical person (which I’m skeptical of). That’d mean it still takes 2 hours to read the book. Podcast episodes are usually about an hour. But if you’re able to read 5x faster that probably implies that you’re able to listen to the podcast at at least 2x speed if not 3x, in which case the podcast would only take 0.5 hours to go through, which is 4x faster than it’d take to read the book.
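To make that arithmetic explicit, here is a tiny sketch (the numbers are just the illustrative assumptions from above, not measurements):
// Back-of-the-envelope comparison using the assumptions above.
const bookHoursForTypicalReader = 10;
const readingSpeedMultiplier = 5;   // a very fast reader
const podcastHours = 1;
const listeningSpeedMultiplier = 2; // listening at 2x

const hoursToReadBook = bookHoursForTypicalReader / readingSpeedMultiplier; // 2 hours
const hoursToListenToPodcast = podcastHours / listeningSpeedMultiplier;     // 0.5 hours

console.log(hoursToReadBook / hoursToListenToPodcast); // 4, i.e. the podcast route is ~4x faster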
I’ve been in pursuit of a good startup idea lately. I went through a long list I had and deleted everything. None were good enough. Finding a good idea is really hard.
One way that I think about it is that a good idea has to be the intersection of a few things.
For me at least, I want to be able to fail fast. I want to be able to build and test it in a matter of weeks. I don’t want to raise venture funding and spend 18 months testing an idea. This is pretty huge actually. If one idea takes 10 days to build and the other takes 10 weeks, well, the burden of proof for the 10 week one is way higher. You could start seven 10 day ideas in 10 weeks.
I want the demand to be real. It should ideally be a painkiller, not a vitamin. Something people are really itching for, not something that kinda sorta sounds interesting that they think they should consume but aren’t super motivated to actually consume it. And I want to feel that way myself. I want to dogfood it. When I went through my list of ideas and really was honest with myself, there weren’t any ideas that I actually felt that eager to dogfood.
There needs to be a plausible path towards acquiring customers. Word of mouth, virality, SEO, ads, influencer marketing, affiliates, TV commercials, whatever. It’s possible that a product is quick to build and really satisfies a need, but there isn’t a good way to actually get it in front of users. You need a way to get it in front of users.
Of course, the money part needs to make sense. After listening to a bunch of Indie Hackers episodes, I’m really leaning towards businesses that make money via charging people, not via selling ads or whatever. Hopefully charging high prices, and hopefully targeting businesses (with budgets!) instead of consumers. Unfortunately, unlike Jay Z, I’m not a business, so I don’t understand the needs of businesses too well. I’ve always heard people give the advice of targeting businesses, but founders typically don’t understand the needs of businesses well, and I’ve never heard a good resolution to that dilemma.
I need to have the skills to build it. Fortunately at this point I’m a pretty solid programmer so there’s a lot of web app related things I’m able to build.
Hopefully there is a path towards expanding and being a big hit, not just a niche side income sort of thing. Although at this stage of my life I’d probably be ok with the latter.
When you add more and more things to the intersection, it actually gets quite small, quite rapidly.
I’ve always heard people give the advice of targeting businesses, but founders typically don’t understand the needs of businesses well, and I’ve never heard a good resolution to that dilemma.
The solution is likely “talk to people”. That could involve going to trade events or writing cold LinkedIn messages to ask people to eat lunch together.
You might also do something like an internship where you are not paid but, on the other hand, own the code that you write during that internship.
Something like an internship would be a large investment of time, which doesn’t feel like it’s worth it just for the possibility of finding a startup idea.
I guess talking to people makes sense. I was thinking at first that it’d require more context than a lunch meeting, more like a dozen hours, but on second thought you could probably at least get a sense of where the paths worth exploring more deeply are (and aren’t) in a lunch meeting.
A few years ago I worked on a startup called Premium Poker Tools as a solo founder. It is a web app where you can run simulations about poker stuff. Poker players use it to study.
It wouldn’t have impressed any investors. Especially early on. Early on I was offering it for free and I only had a handful of users. And it wasn’t even growing quickly. This all is the opposite of what investors want to see. They want users. Growth. Revenue.
Why? Because those things are signs. Indicators. Signal. Traction. They point towards an app being a big hit at some point down the road. But they aren’t the only indicators. They’re just the ones that are easily quantifiable.
What about the fact that I had random people emailing me, thanking me for building it, telling me that it is way better than the other apps and that I should be charging for it? What about the fact that someone messaged me asking how they can donate? What about the fact that Daniel Negreanu—perhaps the biggest household name in poker—was using it in one of his YouTube videos?
Those are indicators as well. We can talk about how strong they are. Maybe they’re not as strong as the traditional metrics. Then again, maybe they’re stronger. Especially something like Negreanu. That’s not what I want to talk about here though. Here I just want to make the point that they count. You’d be justified in using them to update your beliefs.
Still, even if they do count, it may be simpler to ignore them. They might be weak enough, at least on average, such that the effort to incorporate them into your beliefs isn’t worth the expected gain.
This reminds me of the situation with science. Science says that if a study doesn’t get that magical p < 0.05, we throw it in the trash. Why do we do this? Why don’t we just update our beliefs a small amount off of p = 0.40, a moderate amount off of p = 0.15 and a large amount off of p = 0.01? Well, I don’t actually know the answer to that, but I assume that as a social institution, it’s just easier to draw a hard line about what counts and what doesn’t.
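Here is a toy sketch of what graded updating could look like. The mapping from p-values to likelihood ratios below is completely made up for illustration; producing a principled version of that mapping is exactly the hard part.
// Toy sketch: update odds on a hypothesis study-by-study instead of using a hard p < 0.05 cutoff.
// The p-value -> likelihood ratio mapping here is invented purely for illustration.
function illustrativeLikelihoodRatio(pValue) {
  if (pValue <= 0.01) return 5;   // update a large amount
  if (pValue <= 0.15) return 2;   // update a moderate amount
  if (pValue <= 0.40) return 1.2; // update a small amount
  return 1;                       // basically no update
}

function updateOdds(priorOdds, pValues) {
  // Multiply the prior odds by each study's (assumed) likelihood ratio.
  return pValues.reduce((odds, p) => odds * illustrativeLikelihoodRatio(p), priorOdds);
}

console.log(updateOdds(1, [0.40, 0.15, 0.01])); // ≈ 12, i.e. 1.2 * 2 * 5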
Maybe that’s why things work the way they do in startups. Sure, in theory the random emails I got should count as Bayesian evidence and update my beliefs about how much traction I have, but in practice that stuff is usually pretty weak evidence and isn’t worth focusing on.
In fact, it’s possible that the expected value of incorporating it is negative. That you’d expect it to do you more harm than good. To update your beliefs in the wrong direction, on average. How would that be possible? Bias. Maybe founders are hopelessly biased towards interpreting everything through rose-colored glasses and will inevitably arrive at the conclusion that they’re headed to the moon if they are allowed to interpret data like that.
That doesn’t feel right to me though. We shouldn’t just throw our hands in the air and give up. We should acknowledge the bias and update our beliefs accordingly. For example, you may intuitively feel like that positive feedback you got via email this month is a 4⁄10 in terms of how strong a signal it is, but you also recognize that you’re biased towards thinking it is a strong signal, and so you adjust your belief down from a 4⁄10 to a 1.5/10. That seems like the proper way to go about it.
Imagine the lifecycle of an idea being some sort of spectrum. At the beginning of the spectrum is the birth of the idea. Further to the right, the idea gets refined some. Perhaps 1⁄4 of the way through, the person who has the idea texts some friends about it. Perhaps midway through, it is refined enough that a rough draft is shared with some other friends. Perhaps 3⁄4 of the way through, a blog post is shared. Then further along, the idea receives more refinement, and maybe a follow up post is made. Perhaps towards the very end, the idea has been vetted and memetically accepted, and someone else ends up writing about it with their own spin and/or explanation.
Or something like that. This is just meant as a rough sketch.
Anyway, I worry that we don’t have a good process for that initial 75% of the spectrum. And furthermore, that those initial stages are quite important.
When I say “we” I’m talking partly about the LessWrong community and partly about society at large.
I have some ideas I’ll hopefully write about and pursue at some point to help with this. Basically, get the right people connected with each other in some awesome group chats.
It sounds to me like in a more normal case it doesn’t begin with texting friends but with talking to them in person about the idea. For that to happen you usually need a good in person community.
These days more is happening via Zoom but reaching out to chat online still isn’t as easy as going to a meetup.
I wish more people used threads on platforms like Slack and Discord. And I think the reason to use threads is very similar to the reason why one should aim for modularity when writing software.
Here’s an example. I posted this question in the #haskell-beginners Discord channel asking whether it’s advisable for someone learning Haskell to use a linter. I got one reply, but it wasn’t as a thread. It was a normal message in #haskell-beginners. Between the time I asked the question and got a response, there were probably a couple dozen other messages. So then, I had to read and scroll through those to get to the response I was interested in, and to see if there were any other responses.
Each of the messages was part of a different conversation. I think of it as something like this:
There is a linear structure for something that is more naturally structured as a tree.
Functional Programming Discord server
    #haskell-beginners channel
        Conversation A
            Message 1
            Message 2
            Message 3
            Message 4
        Conversation B
            Message 1
            Message 2
        Conversation C
            Message 1
            Message 2
In writing software, imagine that you have three sub-problems that you need to solve. And imagine if you approached this by doing something like this:
// stuff for sub-problem #1
// stuff for sub-problem #1
// stuff for sub-problem #2
// stuff for sub-problem #3
// stuff for sub-problem #1
// stuff for sub-problem #3
// stuff for sub-problem #1
// stuff for sub-problem #2
We generally prefer to avoid writing code this way. Instead, we prefer to take a more modular approach and do something like this:
solveSubProblemOne();
solveSubProblemTwo();
solveSubProblemThree();
function solveSubProblemOne() {
...
}
function solveSubProblemTwo() {
...
}
function solveSubProblemThree() {
...
}
By writing the code in a modular fashion, you can easily focus on the code related to sub-problem #1 and not have to sift through code that is unrelated to sub-problem #1. On the other hand, the more imperative non-modular version makes it difficult to tell what code is related to what sub-problem.
Similarly, using threads on platforms like Slack and Discord makes it easy to see what messages belong to what conversations.
And like software, the importance of this gets larger as the “codebase” becomes more involved and complex. Imagine a Slack channel with lots and lots of conversations happening simultaneously without threads. That is difficult to manage. But if it’s a small channel with only two or three conversations happening simultaneously, that is more manageable.
Threads are pretty good, most help channels should probably be a forum (or 1 forum + 1 channel). Discord threads do have a significant drawback of lowering visibility by a lot, and people don’t like to write things that nobody ever sees.
Discord threads do have a significant drawback of lowering visibility by a lot, and people don’t like to write things that nobody ever sees.
Meh. If you start a thread under the message “Parent level message” in #the-channel the UI will indicate that there are “N Messages” in a thread belonging to “Parent level message”. It’s true that those messages aren’t automatically visible to people scrolling through the main channel, they’d have to click to open the thread, but if your audience isn’t motivated to do that it seems to me like they aren’t worth interacting with in the first place.
I do prefer how Slack treats threads though. They’re lighter and more convenient to use in Slack.
This is super rough and unrefined, but there’s something that I want to think and write about. It’s an epistemic failure mode that I think is quite important. It’s pretty related to Reversed Stupidity is Not Intelligence. It goes something like this.
You think 1. Alice thinks 2. In your head, you think to yourself:
Gosh, Alice is so dumb. I understand why she thinks 2. It’s because A, B, C, D and E. But she just doesn’t see F. If she did, she’d think 1 instead of 2.
Then you run into other people being like:
Gosh, Bob is so dumb. I understand why he thinks 1. It’s because A, B, C, D, E and F. But he just doesn’t see G. If he did, he’d think 2 instead of 1.
I wish I could easily think of good, concrete, real-world examples of this, but I’m failing to right now.
Anyway, I think this failure mode is very common (amongst the general public, yes, but also amongst rationalists), very tempting, and very harmful.
A big reason why I think it’s harmful is because it functions as a sort of conversation halter. Just an intrapersonal one rather than interpersonal. Like, for traditional conversation halters, you’re talking to another person (interpersonal) and they say something that just kinda halts the discussion. But here, I’m trying to point to something that you do in your own inner monologue.
Instead, what I think you should do would be something like steelmanning:
Ok, suppose I’m right that Alice isn’t seeing F, and that if she did, she should think 1 instead of 2. Let’s push further. What other factors are at play here? Is there a G? An H? An I? What about a J?
I’d appreciate any conversation and help on this. In whatever form. Examples would be awesome.
When I think about problems like these, I use what feels to me like a natural generalization of the economic idea of efficient markets. The goal is to predict what kinds of efficiency we should expect to exist in realms beyond the marketplace, and what we can deduce from simple observations. For lack of a better term, I will call this kind of thinking inadequacy analysis.
I think this is pretty applicable to highly visible blog posts, such as ones that make the home page in popular communities such as LessWrong and Hacker News.
Like, if something makes the front page as one of the top posts, it attracts lots of eyeballs. With lots of eyeballs, you get more prestige and social status for saying something smart. So if a post has lots of attention, I’d expect lots of the smart-things-to-be-said to have been said in the comments.
It’s weird that people tend so strongly to be friends with people so close to their age. If you’re 30, why are you so much more likely to be friends with another 30 year old than, say, a 45 year old?
That’s true, but I don’t think it explains it because I think that outside of age-segregated environments, an eg. 30 year old is still much, much more likely to befriend a 30 year old than a 45 year old.
Part of it is that age gap friendships are often considered kind of weird, too; people of different ages often are at different stages of their careers, etc., and often don’t really think of each other as of roughly equal status. (What would it be like trying to have a friendship with a manager when you aren’t a manager, even if that manager isn’t someone you personally report to?)
The first explanation that comes to mind is that people usually go through school, wherein they spend all day with people the same age as them (plus adults, who generally don’t socialize with the kids), and this continues through any education they do. Then, at the very least, this means their starting social group is heavily seeded with people their age, and e.g. if friend A introduces friend B to friend C, the skew will propagate even to those one didn’t meet directly from school.
Post-school, you tend to encounter more of a mix of ages, in workplaces, activity groups, meetups, etc. Then your social group might de-skew over time. But it would probably take a long time to completely de-skew, and age 30 is not especially long after school, especially for those who went to grad school.
There might also be effects where people your age are more likely to be similar in terms of inclination and capability to engage in various activities. Physical condition, monetary resources, having a committed full-time job, whether one has a spouse and children—all can make it easier or harder to do things like world-traveling and sports.
I feel like there is a specific phenomenon where, outside of age-segregated environments, it’s still the case that a 30 year old is much more likely to befriend another 30 year old than a 45 year old.
There might also be effects where people your age are more likely to be similar in terms of inclination and capability to engage in various activities. Physical condition, monetary resources, having a committed full-time job, whether one has a spouse and children—all can make it easier or harder to do things like world-traveling and sports.
Yeah maybe. I’m skeptical though. I think once you’re in your 20s, most of the time you’re not too different from people in their 40s. A lot of people in their 20s have romantic partners, jobs, and the ability to do physically demanding things.
Personally I suspect moderately strongly that the explanation is about what is and isn’t socially acceptable.
If that is indeed the (main) explanation, it seems weird to me. Why would that norm arise?
I think it is a combination of many things that point in a similar direction:
School is age-segregated, and if you are university-educated, you stay there until you are ~ 25.
Even after school, many people keep the friends they made during school.
A typical 25 year old is looking for a partner, doesn’t have kids, doesn’t have much job experience, can often rely on some kind of support from their parents, and is generally full of energy. A typical 40 year old already has a partner, has kids, has spent over a decade in a full-time job, sometimes supports their parents, and is generally tired. These people are generally in different situations, with different needs. In their free time, the 25 year old wants to socialize with potential partners. The 40 year old is at home, helping their kids with homework.
Also, I think generally, people have few friends. Especially after school.
To use myself as an N=1 example, I am in my 40s, and I am perfectly open to the idea of friendship with people in their 20s. But I spend most of my day at work, then I am at home with my kids, or I call my existing friends and meet them. I spend vacations with my kids, somewhere in nature. I simply do not meet 20 year olds most of the time. And when I do, they are usually in groups, talking to each other; I am an introverted person, happy to talk 1:1, but I avoid groups unless I already know some of the people.
Thanks, I liked this and it updated me. I do still think there is a somewhat strong “socially acceptable” element here, but I also think I was underestimating the importance of these lifestyle differences.
I suppose the “socially acceptable” element is a part of why it would feel weird for me to try joining a group of people in their 20s, on the occasions that I meet such a group, in contexts where if it was a group of people in their 40s instead, I could simply sit nearby, listen to their debate for a while, and then maybe join at a convenient moment, or hope to be invited to the debate by one of them. Doing this with a group of people a generation younger than me would feel kinda creepy (which is just a different way of saying socially unacceptable). But such situations are rare—in my case, the general social shyness, and the fact that I don’t have hobbies where I could meet many people and interact with them, have a stronger impact. The most likely places for me to meet and talk to younger people are LW/ACX meetups.
For me, one place I’ve noticed it is in my racquetball league. There is a wide mix of ages, but I’ve noticed that the 30-somethings tend to gravitate toward each other and the 50+ players tend to gravitate toward each other.
There were other lines of logic leading to the same conclusion. Complex machinery was always universal within a sexually reproducing species. If gene B relied on gene A, then A had to be useful on its own, and rise to near-universality in the gene pool on its own, before B would be useful often enough to confer a fitness advantage. Then once B was universal you would get a variant A* that relied on B, and then C that relied on A* and B, then B* that relied on C, until the whole machine would fall apart if you removed a single piece. But it all had to happen incrementally—evolution never looked ahead, evolution would never start promoting B in preparation for A becoming universal later. Evolution was the simple historical fact that, whichever organisms did in fact have the most children, their genes would in fact be more frequent in the next generation. So each piece of a complex machine had to become nearly universal before other pieces in the machine would evolve to depend on its presence.
I think something sorta similar is true about startups/business.
Say you have an idea for a better version of Craigslist called Bobslist. You have various hypotheses about how Craigslist’s UI is bad and can be improved upon. But without lots of postings, no one is going to care. Users care more about products and price than they do about the UI.
This reminds me of the thing with gene A and gene B. Evolution isn’t going to promote gene B if gene A isn’t already prominent.
If gene B relied on gene A, then A had to be useful on its own, and rise to near-universality in the gene pool on its own, before B would be useful often enough to confer a fitness advantage.
I think Bobslist’s nicer UI is like gene B. It relies on there being a comparable number and quality of product listings (“gene A”) and won’t be promoted by the market before “gene A” becomes prominent.
I wonder if the Facebook algorithm is a good example of the counterintuitive difficulty of alignment (as a more general concept).
You’re trying to figure out the best posts and comments to prioritize in the feed. So you look at things like upvotes, page views and comment replies. But it turns out that that captures things like how much of a demon thread it is. Who would have thought metrics like upvotes and page views could be so… demonic?
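Here is a hypothetical sketch of that failure mode (the metrics and weights are made up, not Facebook’s actual algorithm). A scoring function like this rewards anything that generates activity, and heated demon threads generate a lot of activity.
// Hypothetical engagement-based ranking; the weights are invented for illustration.
// A post that makes people angry enough to reply a lot scores as well as, or better
// than, a post people quietly found valuable.
function engagementScore(post) {
  return 1 * post.upvotes + 0.5 * post.pageViews + 2 * post.commentReplies;
}

const thoughtfulPost = { upvotes: 50, pageViews: 400, commentReplies: 10 };
const demonThread = { upvotes: 30, pageViews: 900, commentReplies: 120 };

console.log(engagementScore(thoughtfulPost)); // 270
console.log(engagementScore(demonThread));    // 720, so the demon thread gets ranked higher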
I don’t think this is an alignment-is-hard-because-it’s-mysterious, I think it’s “FB has different goals than me”. FB wants engagement, not enjoyment. I am not aligned with FB, but FB’s algorithm is pretty aligned with its interests.
Oh yeah, that’s a good point. I was thinking about Facebook actually having the goal to promote quality content. I think I remember hearing something about how that was their goal at first, then they got demon stuff, then they realized demon stuff made them the most money and kept doing it. But still, people don’t associate Facebook with having the goal of promoting quality content, so I don’t think it’s a good example of the counterintuitive difficulty of alignment.
In stand up comedy, performances are where you present your good jokes and open mics are where you experiment.
Sometimes when you post something on a blog (or Twitter, Facebook, a comment, etc.), you intend for it to be more of a performance. It’s material that you have spent time developing, are confident in, etc.
But other times you intend for it to be more of an open mic. It’s not obviously horrible or anything, but it’s certainly experimental. You think it’s plausibly good, but it very well might end up being garbage.
Going further, in stand up comedy, there is a phase that comes before open mics. I guess we can call that phase “ideation”. Where you come up with your ideas. Maybe that’s going for walks. Maybe it’s having drinks with your comic friends. Maybe it’s talking to your grandma. Who knows? But there’s gotta be some period where you’re simply ideating. And I don’t really see anything analogous to that on LessWrong. It seems like something that should exist though. Maybe it exists right here with the Short Form (and Open Thread)? On Twitter? Slack and Discord groups? Talking with friends? Even if it does, I wonder if we could do more.
Stand-up is all about performance, not interaction or collaboration, and certainly not truth-seeking (looking for evidence and models so that you can become less wrong), so it’s an imperfect analogy. But there’s value in the comparison.
I do see occasional “babble” posts, and fairly open questions on LW, that I think qualify as ideation. I suspect (and dearly hope) that most people do also go on walks and have un-recorded lightweight chats with friends as well.
On Stack Overflow you could offer a bounty for a question you ask. You sacrifice some karma in exchange for having your question be more visible to others. Sometimes I wish I could do that on LessWrong.
I’m not sure how it’d work though. Giving the post +N karma? A bounties section? A reward for the top voted comment?
I was just reading AI alignment researchers don’t (seem to) stack and had the thought that it’d be good to research whether intellectual progress in other fields is “stackable”. That’s the sort of thing that doesn’t take an Einstein level of talent to pursue.
I’m sure other people have similar thoughts: “X seems like something we should do and doesn’t take a crazy amount of talent”.
What if there was a backlog for this?
I’ve heard that, to mitigate procrastination, it’s good to break tasks down further and further until they become bite-sized chunks. It becomes less daunting to get started. Maybe something similar would apply here with this backlog idea. Especially if it is made clear roughly how long it’d take to complete a given task. And how completing that task fits in to the larger picture and improves it. Etc. etc.
And here’s another task: researching whether this backlog idea itself has ever been done before, whether it is actually plausible, etc.
I remember previous discussions that went something like this:
Alice: EA has too much money and not enough places to spend it.
Bob: Why not give grants to anyone and everyone who wants to do, for example, alignment research?
Alice: That sets up bad incentives. Malicious actors would seek out those grants and wouldn’t do real work. And that’d have various bad downstream effects.
But what if those grants were minimal? What if they were only enough to live out a Mustachian lifestyle?
Well, let’s see. A Mustachian lifestyle costs something like $25k/year iirc. But it’s not just this year’s living expenses that matter. I think a lot of people would turn down the grant and go work for Google instead if it was only a few years, because they want to set themselves up financially for the future. So what if the grant was $25k/year indefinitely? That could work, but it also starts to get large enough that people might try to exploit it.
What if there was some sort of house you could live at, commune style? Meals would be provided, there’d be other people there to socialize with, health care would be taken care of, you’d be given a small stipend for miscellaneous spending, etc. I don’t see how bad actors would be able to take advantage of that. They’d be living at the same house so if they were taking advantage of it it’d be obvious enough.
I think that only addresses a branch concern, not the main problem. It filters out some malicious actors, but certainly not all—you still get those who seek the grants IN ADDITION to other sources of revenue.
More importantly, even if you can filter out the bad actors, you likely spend a lot on incompetent actors, who don’t produce enough value/progress to justify the grants, even if they mean well.
I don’t think those previous discussions are still happening very much—EA doesn’t have spare cash, AFAIK. But when it was, it was nearly-identical to a lot of for-profit corporations—capital was cheap, interest rates were extremely low, and the difficulty was in figuring out what marginal investments brought future returns. EA (18 months ago) had a lot of free/cheap capital and no clear models for how to use it in ways that actually improved the future. Lowering the bar for grants likely didn’t convince people that it would actually have benefits.
I think that only addresses a branch concern, not the main problem. It filters out some malicious actors, but certainly not all—you still get those who seek the grants IN ADDITION to other sources of revenue.
Meaning that, now that they’re living in the commune, they’ll be more likely to seek more funding for other stuff? Maybe. But you can just keep the barriers as high as they currently are for the other stuff, which would just mean slightly(?) more applicants to filter out at the initial stages.
More importantly, even if you can filter out the bad actors, you likely spend a lot on incompetent actors, who don’t produce enough value/progress to justify the grants, even if they mean well.
My model is that the type of person who would be willing to move to a commune and live amongst a bunch of alignment researchers is pretty likely to be highly motivated and slightly less likely to be competent. The combination of those two things makes me think they’d be pretty productive. But even if they weren’t, the bar of eg. $20k/year/person is pretty low.
I don’t think those previous discussions are still happening very much—EA doesn’t have spare cash, AFAIK.
Thanks for adding some clarity here. I get that impression too but not confidently. Do you know if it’s because a majority of the spare cash was from FTX and that went away when FTX collapsed?
EA (18 months ago) had a lot of free/cheap capital and no clear models for how to use it in ways that actually improved the future.
That’s always seemed really weird to me. I see lots of things that can be done. Finding the optimal action or even a 90th+ percentile action might be difficult but finding an action that meets some sort of minimal threshold seems like it’s not a very high bar. And letting the former get in the way of the latter seems like it’s making the perfect the enemy of the good.
Ah, I see—I didn’t fully understand that you meant “require (and observe) the lifestyle” not just “grants big enough to do so, and no bigger”. That makes it quite a bit safer from fraud and double-dipping, and a LOT less likely (IMO) to get anyone particularly effective that’s not already interested.
A long time ago, when I was a sophomore in college, I remember a certain line of thinking I went through:
It is important for politicians to be incentivized properly. Currently they are too susceptible to bribery (hard, soft, in between) and other things.
It is difficult to actually prevent bribes. For example, they may come in the form of “Pass the laws I want passed and instead of handing you a lump sum of money, I’ll give you a job that pays $5M/year for the next 30 years after your term is up.”
Since preventing bribes is difficult, you could instead just say that if you’re going to be a politician—a public servant—you have to live a low income lifestyle from here on out. You and your family. Say, 25th percentile income level. Or Minimally Viable Standard of Living if you want to get more aggressive. The money will be provided to you by the government and you’re not allowed to earn your own money elsewhere. If you start driving lamborghinis, we’ll know.
The downside I see is that it might push away talent. But would it? Normally if you pay a lower salary you get less talented employees, but for roles like President of the United States of America, or even Senator of Idaho, I think the opportunity for impact would be large enough to get very talented people, and any losses in talent would be made up for by reduced susceptibility to bribery.
I often cringe at the ideas of my past selves but this one still seems interesting.
I haven’t seen any good reasoning or evidence that allowing businesses and titans to bribe politicians via lobbyists actually results in worse laws. People gasp when I say this, but the default doesn’t seem that much better. If Peter Thiel had been able to encourage Trump to pick the cabinet heads he wanted then our COVID response would have gone much better.
Most billionaires at least seem to donate ideologically, not really based on how politicians affect their special interest group. There’s definitely a correlation there, but if billionaires are just more reasonable on average then it’s possible that their influence is net-positive overall.
It is important for politicians to be incentivized properly.
Is there anyone for whom this is NOT important? Why not an asset ceiling on every human?
The problem is in implementation. Leaving aside all the twiddly details of how to measure and enforce, there’s no “outside authority” which can impose this. You have to convince the populace to impose it. And if they are willing to do that, it’s not necessary to have a rule, it’s already in effect by popular action.
Is there anyone for whom this is NOT important? Why not an asset ceiling on every human?
It generally is quite important. However, (powerful) politicians are a special case because 1) they have more influence on society and 2) I presume people would still be motivated to take the position even with the asset ceiling. Contrasting this with my job as a programmer, 1) it’d be good if my incentives were more aligned with the company I work for but it wouldn’t actually impact society very much and 2) almost no one would take my job if it meant a lower standard of living.
The problem is in implementation. Leaving aside all the twiddly details of how to measure and enforce, there’s no “outside authority” which can impose this. You have to convince the populace to impose it. And if they are willing to do that, it’s not necessary to have a rule, it’s already in effect by popular action.
Wouldn’t the standard law enforcement people enforce it, just like how if a president committed murder they wouldn’t get away with it? Also, it’s definitely tricky but there is a precedent for those in power to do what’s in the interest of the future of society rather than what would bring them the most power. I’m thinking of George Washington stepping away after two terms and setting that two term precedent.
However, (powerful) politicians are a special case because 1) they have more influence on society and 2) I presume people would still be motivated to take the position even with the asset ceiling.
I don’t believe either of these is true, when comparing against (powerful) non-politician very-rich-people.
Wouldn’t the standard law enforcement people enforce it
I didn’t mean the end-enforcement (though that’s a problem too—standard law enforcement personnel can detect and prove murder. They have SOME ability to detect and prove income. They have very little ability to understand asset ownership and valuation in a world where there’s significant motive to be indirect about it.) But I meant “who will implement it”: if voters today don’t particularly care, why will anyone define and push for the legislation that creates the limit?
I don’t believe either of these is true, when comparing against (powerful) non-politician very-rich-people.
Hm, maybe. Let me try thinking of some examples:
CEOs: Yeah, pretty big influence and I think smart people would do it for free. Although if you made a rule that CEOs of sufficiently large companies had to have asset ceilings, I think there’d be a decent amount fewer entrepreneurs, which feels like it’d be enough to make it a bad idea.
Hedge fund managers: From what I understand they don’t really have much influence on society in their role. I think some smart people would still take the job with an asset ceiling but they very well might not be smart enough; I know how competitive and technical that world is. And similar to CEOs, I don’t think there’d be many if any hedge funds that got started if they knew their traders would have to have asset ceilings.
Movie stars: Not much influence on society, but people would take the role for the fame it’d provide of course.
After trying to think of examples I’m not seeing any that fit. Do you have any in mind?
They have very little ability to understand asset ownership and valuation in a world where there’s significant motive to be indirect about it.
There might be things that are hard to prevent from slipping through the cracks, but the big things seem easy enough to detect: houses, cars, yachts, hotels, vacations. I guess they’d probably have to give up some rights to privacy too though to make enforcement practical. Given how much they’re already giving up with the asset ceiling, the additional sacrificing of some amount of privacy doesn’t seem like it changes anything too much.
But I meant “who will implement it”, if voters today don’t particularly care, why will anyone define and push for the legislation that creates the limit?
I’m not optimistic about it, but to me it seems at least ballpark plausible. I don’t understand this stuff too much, but to me it seems like voters aren’t the problem. Voters right now, across party lines, distrust politicians and “the system”. I would assume the problem is other politicians. You’d have to get their support but it negatively affects them so they don’t support it.
Maybe there are creative ways to sidestep this though.
Make the asset ceilings start in 10 years instead of today? Maybe that’d be blatantly obvious that it’s the current politicians not wanting to eat their gross dogfood? Would that matter?
Maybe you could start by gathering enough public support for the idea to force the hands of the politicians?
Goodhart’s Law seems like a pretty promising analogy for communicating the difficulties of alignment to the general public, particularly those who are in fields like business or politics. They’re already familiar with the difficulty and pain associated with trying to get their organization to do X.
I remember talking to a product designer before. I brought up the idea of me looking for ways to do things more quickly that might be worse for the user. Their response was something along the lines of “I mean, as a designer I’m always going to advocate for whatever is best for the user.”
I think that “apples-to-oranges” is a good analogy for what is wrong about that. Here’s what I mean.
Suppose there is a form and the design is to have inline validation (nice error messages next to the input fields). And suppose that “global” validation would be simpler (an error message in one place saying “here’s what you did wrong”). Inline is better for users, but comparing it straight up to global would be an apples-to-oranges comparison.
Why? Because the inline version takes longer. Suppose the inline version takes two days and the global version takes one.
Here’s where the apples-to-oranges analogy comes in: you can’t compare something that takes one day to something that takes two days purely on the grounds of user experience. That is apples-to-oranges. For it to be apples-to-apples, you’d have to compare a) inline validation to b) global validation + whatever else that can be done in the second day. In other words, (b) has to include the right-hand side of the plus sign. Without the right-hand side, it is apples-to-oranges.
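To make the comparison concrete, here is a rough sketch of the two options (hypothetical code, just to illustrate the difference in scope, not any particular framework):
// Option A (one day, hypothetically): global validation, one error message in one place.
function validateGlobally(form) {
  const errors = [];
  if (!form.email.includes("@")) errors.push("Email is invalid.");
  if (form.password.length < 8) errors.push("Password is too short.");
  return errors; // rendered as a single block at the top of the form
}

// Option B (two days, hypothetically): inline validation, an error message next to each field.
function validateInline(form) {
  return {
    email: form.email.includes("@") ? null : "Please enter a valid email address.",
    password: form.password.length >= 8 ? null : "Password must be at least 8 characters.",
  }; // each message gets rendered next to its own input field
}
The apples-to-apples comparison is Option B against Option A plus whatever else the second day of work could buy.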
I was just watching this YouTube video on portable air conditioners. The person is explaining how air conditioners work, and it’s pretty hard to follow.
I’m confident that a very large majority of the target audience would also find it hard to follow. And I’m also confident that this would be extremely easy to discover with some low-fi usability testing. Before releasing the video, just spend maybe 20 mins and have a random person watch the video, and er, watch them watch it. Ask them to think out loud, narrating their thought process. Stuff like that.
Moreover, I think that this sort of stuff happens all the time, in many different areas. As another example, I was at a train stop the other day and found the signs confusing. It wasn’t clear which side of the tracks was going north and which side was going south. And just like the YouTube video, I think that most/many people would also find it confusing, this would be easy to discover with usability testing, and at least in this case, there’s probably some sort of easy solution.
So, yeah: this is my cry into an empty void for the world to incorporate low-fi usability testing into anything and everything. Who knows, maybe someone will hear me.
I think that people should write with more emotion. A lot more emotion!
Emotion is bayesian evidence. It communicates things.
One could also propose making it not full of rants, but I don’t think that would be an improvement. The rants are important. The rants contain data. They reveal Eliezer’s cognitive state and his assessment of the state of play. Not ranting would leave important bits out and give a meaningfully misleading impression.
...
The fact that this is the post we got, as opposed to a different (in many ways better) post, is a reflection of the fact that our Earth is failing to understand what we are facing. It is failing to look the problem in the eye, let alone make real attempts at solutions.
It frustrates me that people don’t write with more emotion. Why don’t they? Are they uncomfortable being vulnerable? Maybe that’s part of it. I think the bigger part is just that it is uncomfortable to deviate from social norms. That’s different from the discomfort from vulnerability. If everyone else is trying to be professional and collected and stuff and write more dispassionately, and you are out there getting all excited and angry and intrigued and self-conscious, you’ll know that you are going against the grain.
But all of those emotions are useful though. Again, it communicates things. Sure, it is something that can be taken too far. There is such a thing as expressing too much emotion. But I think we have a ways to go before we get there.
(Am I expressing enough emotion here? Am I being a hypocrite? I’m not sure. I’m probably not doing a great job at expressing emotions here. Which makes me realize that it’s probably, to a non-trivial extent, a skill that needs to be practiced. You have to introspect, think about what emotions you are feeling, and think about which of them would be useful to express.)
I wonder whether it would be good to think about blog posts as open journaling.
When you write in a journal, you are writing for yourself and don’t expect anyone else to read it. I guess you can call that “closed journaling”. In which case “open journaling” would mean that you expect others to read it, and you at least loosely are trying to cater to them.
Well, there are pros and cons to look at here. The main con of treating blog posts as open journaling is that the quality will be lower than a more traditional blog post that is more refined. On the other hand, a big pro is that, relatedly, a wider and more diverse range of posts will get published. We’re loosening the filter.
It also may encourage an environment of more collaboration, and people thinking things through together. If someone posts something where they spent a lot of time on it, and I notice something that seems off, I’d probably lean towards assuming that I just didn’t understand it, and it is in fact correct. I’d also lean towards assuming that it wouldn’t be the best idea to take up people’s time by posting a comment about my feeling that something is off. On the other hand, if I know that a post is more exploratory, I’d lean less strongly towards those assumptions and be more willing to jump in and discuss things.
It seems that there is agreement here on LessWrong that there is a place for this more exploratory style of posting. Not every post needs to be super refined. For less refined posts, there is the shortform, open thread and personal blog posts. So it’s not that I’m proposing anything new here. It’s just that “open journaling” seems like a cool way to conceptualize this. The idea occurred to me while I was on the train this morning and thinking about it as an “open journal” just inspired me to write up a few ideas that have been swimming around in my head.
It bothers me how inconsistent I am. For example, consider covid-risk. I’ve eaten indoors before. Yet I’ll say I only want to get coffee outside, not inside. Is that inconsistent? Probably. Is it the right choice? Let’s say it is, for argument’s sake. Does the fact that it is inconsistent matter? Hell no!
Well, it matters to the extent that it is a red flag. It should prompt you to have some sort of alarms going off in your head that you are doing something wrong. But the proper response to those alarms is to use that as an opportunity to learn and grow and do better in the future. Not to continue to make bad choices moving forward out of fear that inconsistency is unvirtuous. Yet this fear is a strong one that often drives behavior. At least it does for me.
Inconsistency is a pointer to incorrectness, but I don’t think that example is inconsistent. There’s a reference class problem involved—eating a meal and getting a coffee, at different times, with different considerations of convenience, social norms, and personal state of mind, are just not the same decision.
I hear ya. In my situation I think that when you incorporate all of that and look at the resulting payoffs and probabilities, it does end up being inconsistent. I agree that it depends on the situation though.
The other day I was walking to pick up some lunch instead of having it delivered. I also had the opportunity to freelance for $100/hr (not always available to me), but I still chose to walk and save myself the delivery fee.
I make similarly irrational decisions about money all the time. There are situations where I feel like other mundane tasks should be outsourced. Eg. I should trade my money for time, and then use that time to make even more money. But I can’t bring myself to do it.
Perhaps food is a good example. It often takes me 1-2 hours to “do” dinner. Suppose cooking at home saves me $10 relative to ordering something. I think my time is worth more than $5-10/hr, and yet I don’t order food.
One possible explanation is that I rarely have the opportunity to make extra money with extra free time, eg. by freelancing. But I could work on startups in that free time. That doesn’t guarantee me more money, but in terms of expected value, I think it’s pretty high. Is there a reason why this type of thinking might be wrong? Variance? I could adjust the utilities based off of some temporal discounting and diminishing marginal utility or whatever, but even after that the EV seems wildly higher than the $5-10/hr I’m saving by cooking.
Here’s the other thing: I’m not alone. In fact, I observe that tons and tons of people are in a similar position as me, where they could be trading money for time very profitably but choose not to, especially here on LessWrong.
I wonder whether there is something I am missing. I wonder what is going on here.
I suspect there are multiple things going on. First and foremost, the vast majority of uses of time have non-monetary costs and benefits, in terms of enjoyment, human interaction, skill-building, and even less-legible things than those. After some amount of satisficing, money is no longer a good common measurement for non-comparable things you could do to earn or spend it.
Secondly, most of our habits on the topic are developed in a situation where hourly work is not infinitely available at attractive rates. The marginal hour of work, for most of us, most of the time, is not the same as our average hour of work. In the case where you have freelance work available that you could get $1.67/minute for any amount of time you choose, and you can do equally-good (or at least equally-valuable) work regardless of state of mind, your instincts are probably wrong—you should work rather than any non-personally-valuable chores that you can hire out for less than this.
One thing strikes me: you appear to be supposing that apart from how much money is involved, every possible activity per hour is equally valuable to you in itself. This is not required by rationality unless you have a utility function that depends only upon money and a productivity curve that is absolutely flat.
Maybe money isn’t everything to you? That’s rationally allowed. Maybe you actually needed a break from work to clear your head for the rest of the afternoon or whatever? That’s rationally allowed too. It’s even allowed for you to not want to do that freelancing job instead of going for a walk at that time, though in that case you might consider the future utility of the net $90 in getting other things that you might want.
Regarding food, do you dislike cooking for yourself more than doing more work for somebody else? Do you actually dislike cooking at all? Do you value deciding what goes into your body and how it is prepared? How much of your hourly “worth” is compensation for having to give up control of what you do during that time? How much is based on the mental or physical “effort” you need to put into it, which may be limited? How much is not wanting to sell your time much more cheaply than they’re willing to pay?
Rationality does not forbid that any of these should be factors in your decisions.
On the startup example, my experience and those of everyone else I’ve talked to who have done it successfully is that leading a startup is hell, even if it’s just a small scale local business. You can’t do it part time or even ordinary full time, or it will very likely fail and make you less than nothing. If you’re thinking “I could spend some of my extra hours per week on it”, stop thinking it because that way lies a complete waste of time and money.
One thing strikes me: you appear to be supposing that apart from how much money is involved, every possible activity per hour is equally valuable to you in itself.
No, I am not supposing that. Let me clarify. Consider the example of me walking to pick up food instead of ordering it. Suppose it takes a half hour and I could have spent that half hour making $50 instead. The way I phrased it:
Option #1: Spend $5 to save myself the walk and spend that time freelancing to earn $50, netting me $45.
Option #2: Walk to pick up the food, not spending or earning anything.
The problem with that phrasing is that dollars aren’t what matter, utility is, as you allude to. My point is that it still seems like people often make very bad decisions. In this example, the joy of walking (versus freelancing), plus any productivity gains, is not worth $45, I don’t think.
I do agree that this doesn’t last forever though. At some point you get so exhausted from working that the walk has big productivity benefits, the extra work would be very unpleasant, and the walk would be a very pleasant change of pace.
even if it’s just a small scale local business.
Tangential, but Paul Graham wouldn’t call that a startup.
You can’t do it part time or even ordinary full time, or it will very likely fail and make you less than nothing.
I disagree here. 1) I know of real life counterexamples. I’m thinking of people I met at an Indie Hackers meetup I used to organize. 2) It doesn’t match my model of how things work.
The original question is based on the observation that a lot of people, including me, including rationalists, do things like spending an hour of time to save $5-10 when their time is presumably worth a lot more than that, and in contexts where burnout or dips in productivity wouldn’t explain it. So my question is whether or not this is something that makes sense.
I feel moderately strongly that it doesn’t actually make sense, and that what Eliezer alludes to in Money: The Unit of Caring is what explains the phenomenon.
Many people, when they see something that they think is worth doing, would like to volunteer a few hours of spare time, or maybe mail in a five-year-old laptop and some canned goods, or walk in a march somewhere, but at any rate, not spend money.
Believe me, I understand the feeling. Every time I spend money I feel like I’m losing hit points. That’s the problem with having a unified quantity describing your net worth: Seeing that number go down is not a pleasant feeling, even though it has to fluctuate in the ordinary course of your existence. There ought to be a fun-theoretic principle against it.
Betting is something that I’d like to do more of. As the LessWrong tag explains, it’s a useful tool to improve your epistemics.
But finding people to bet with is hard. If I’m willing to bet on X with Y odds and I find someone else eager to take the other side, it’s probably because they know more than me and I am wrong. So I update my belief and then we can’t bet.
But in some situations it works out with a friend, where there is mutual knowledge that we’re not being unfair to one another, and just genuinely disagree, and we can make a bet. I wonder how I can do this more often. And I wonder if some sort of platform could be built to enable this to happen in a more widespread manner.
I’ve always heard of the veil of ignorance being discussed in a… social(?) context: “How would you act if you didn’t know what person you would be?”. A farmer in China? Stock trader in New York? But I’ve never heard it discussed in a temporal context: “How would you act if you didn’t know what era you would live in?” 2021? 2025? 2125? 3125?
This “temporal veil of ignorance” feels like a useful concept.
I just came across an analogy that seems applicable for AI safety.
AGI is like a super powerful sports car that only has an accelerator, no brake pedal. Such a car is cool. You’d think to yourself:
Nice! This is promising! Now we have to just find ourselves a brake pedal.
You wouldn’t just hop in the car and go somewhere. Sure, it’s possible that you make it to your destination, but it’s pretty unlikely, and certainly isn’t worth the risk.
In this analogy, the solution to the alignment problem is the brake pedal, and we really need to find it.
(I’m not as confident in the following, plus it seems to fit as a standalone comment rather than on the OP.)
Why do we really need to find it? Because we live in a world where people are seduced by the power of the sports car. They are in a competition to get to their destinations as fast as possible and are willing to be reckless in order to get there.
Well, that’s the conflict theory perspective. The mistake theory perspective is that people simply think they’ll be fine driving the car without the brakes.
That sounds crazy. And it is crazy! But think about it this way. (The analogy starts to break down a bit here.) These people are used to driving wayyyy less powerful cars. Sometimes these cars don’t have brakes at all, other times they have mediocre brake systems. Regardless, it’s not that dangerous. These people understand that the sports car is in a different category and is more dangerous, but they don’t have a good handle on just how much more dangerous it is, and how it is totally insane to try to drive a car like that without brakes.
We can also extend the analogy in a different direction (although the analogy breaks down when pushed in this direction as well). Imagine that you develop brakes for this super powerful sports car. Awesome! What do you do next? You test them. In as many ways as you can.
However, with AI, we can’t actually do this. We only have one shot. We just have to install them, hit the road, and hope they work. (Hm, maybe the analogy does work. Iirc, super powerful racing cars are built to be driven only once or a few times. There’s a trade-off between performance and how long the car lasts. And for races, they go all the way towards the performance side of the spectrum.)
In my writing, I usually use the Alice and Bob naming scheme. Alice, Bob, Carol, Dave, Erin, etc. Why? The same reason Steve Jobs wore the same outfit every day: decision fatigue. I could spend the time thinking of names other than Alice and Bob. It wouldn’t be hard. But it’s just nice to not have to think about it. It seems like it shouldn’t matter, but I find it really convenient.
Epistemic status: Rambly. Perhaps incoherent. That’s why this is a shortform post. I’m not really sure how to explain this well. I also sense that this is a topic that is studied by academics and might be a thing already.
I was just listening to Ben Taylor’s recent podcast on the top 75 NBA players of all time, and a thought started to crystallize for me that I’ve always wanted to develop. For people who don’t know him (everyone reading this?), his epistemics are quite good. If you want to see good epistemics applied to basketball, read his series of posts on The 40 Best Careers in NBA History.
Anyway, at the beginning of the podcast, Taylor started to talk about something that was bugging him. Previously, on the 50th anniversary of the league in 1996, a bunch of people voted on a list of the top 50 players in NBA history. Now it is the 75th anniversary of the league, so a different set of people voted on the top 75 players in NBA history. The new list basically took the old list of 50 and added 25 new players. But Taylor was saying it probably shouldn’t be like this. One reason is because our understanding of the game of basketball has evolved since 1996, so who we thought were the top 50 then probably had some flaws. Also, it’s not like the voting body in 1996 was particularly smart. As Taylor nicely puts it, they weren’t a bunch of “basketball PhDs (if that were a thing)”, they were random journalists, players, and coaches, people who aren’t necessarily well qualified to be voting on this. For example, they placed a ton of value on how many points you scored, but not nearly enough value on how efficiently you scored those points.
Later in the podcast they were analyzing various players and the guy he had on as a guest, Cody, said how one player was voted to a lot of all star games. But Taylor said that while this is true, he doesn’t really trust the people who voted on all star games back in the 1960s or whenever it was (not that people are good at voting on all star games now). This got me thinking about something. Does it make sense to look at awards like all star games, MVP voting and all-NBA team voting (top 15 players in the league, basically)? Well, by doing so, you are incorporating the opinion of various other experts. But I see two problems here.
How smart are those experts? Sometimes the expert opinion is actually quite flawed, and Taylor makes a good point that this is the case here.
In looking at the opinion of those experts, I think that you are committing one of those crimes that can send you to rationalist prison. I think that you are double counting the evidence! Here’s what I mean. I think that for these expert opinions, the experts rely a lot on what the other experts think. For example, in the podcast they were talking about Bob Cousy vs Bill Sharman. Cousy is considered a legend, whereas Sharman is a guy who was very good, but never became a household name. But Taylor was saying how he thinks Sharman might have actually been better than Cousy. But he just couldn’t bring himself to actually place Sharman over Cousy in his list. I think part of that is because it is hard to deviate from majority opinion that much. So I think that is an example where you base your opinion on what others think. Not 100%, but some percentage.
But isn’t that double counting? As a simplification, imagine that Alice arrives at her opinion without the influence of others, and then Bob’s opinion is 50% based on what Alice thinks and 50% based on what his gears level models output. That seems to me like it should count as 1.5 data points, not 2. I think this becomes more apparent as you add more people. Imagine that Carol, Dave and Erin all do the same thing as Bob. Ie. each of them is basing 50% of their opinion on what Alice thinks. Should that count as 5 data points or 3? What if all of them were basing it 99% on what Alice thinks? Should that count as 5 data points or 1.04? You could argue perhaps that 1.04 is too low, but arguing that it is 5 really seems like it is too high. To make the point even more clear, what if there were 99 people who were 99% basing their opinion off of Alice. Would you say, “well, 100 people all believe X, so it’s probably true”? No! There’s only one person that believes X and 99 people who trust her.
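Here’s a minimal sketch of that counting, using the same toy weighting model as above (nothing more rigorous than that):

```python
def effective_opinions(independent_fractions):
    """Alice counts as one fully independent opinion; every other person
    counts only for the fraction of their opinion formed independently of her."""
    return 1 + sum(independent_fractions)

# Bob is 50% Alice, 50% his own gears-level models: ~1.5 "data points", not 2.
print(effective_opinions([0.5]))                 # 1.5
# Bob, Carol, Dave and Erin each 50% on Alice: 3, not 5.
print(effective_opinions([0.5] * 4))             # 3.0
# The same four people deferring 99% to Alice: ~1.04, not 5.
print(round(effective_opinions([0.01] * 4), 2))  # 1.04
# 99 people deferring 99% to Alice: ~2, not 100.
print(round(effective_opinions([0.01] * 99), 2)) # 1.99
```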
This feels to me like it is actually a pretty important point. In looking at what consensus opinion is, or what the crowd thinks, once you filter out the double counting, it becomes a good deal less strong.
On the other hand, there are other things to think about. For example, if the consensus believes X and you can present good evidence that X is wrong and that Y holds instead, then there is prestige to be gained. And if no one has come around and said “Hey! I have evidence that it’s not X, it’s actually Y!”, well, absence of evidence is evidence of absence. In worlds where Y is true, given the incentive of prestige, we would expect someone to come around and say it. This depends on the community though. Maybe it’s too hard to present evidence. For example, in basketball it’s hard to measure the impact of defense. Or maybe the community just isn’t smart enough or set up properly to provide the prestige. Eg. if I had a brilliant idea about basketball, I’m not really sure where I could go to present it and receive prestige.
Edit:
Would you say, “well, 100 people all believe X, so it’s probably true”? No! There’s only one person that believes X and 99 people who trust her.
Well, I guess the fact that so many people trust her means that we should place more weight on her opinion. But saying “I believe X because someone who I have a lot of trust in believes X” is different from saying “I believe X because all 100 people who thought about this also believe X”.
I wonder if it would be a good idea to groom people from an early age to do AI research. I suspect that it would. Ie. identify who the promising children are, and then invest a lot of resources towards grooming them. Tutors, therapists, personal trainers, chefs, nutritionists, etc.
Iirc, there was a story from Peak: Secrets from the New Science of Expertise about some parents that wanted to prove that women can succeed in chess, and raised three daughters doing something sorta similar but to a smaller extent. I think the larger point being made was that if you really groom someone like this, they can achieve incredible things. I also recall hearing things about how the difference in productivity between researchers is tremendous. It’s not like one person is producing 80 points of value and someone else 75 and someone else 90. It’s many orders of magnitude of difference. Even at the top. If so, maybe we should take shots at grooming more of these top tier researchers.
I suspect that the term “cognitive” is often over/misused.
Let me explain what my understanding of the term is. I think of it as “a disagreement with behaviorism”. If you think about how psychology progressed as a field, first there was Freudian stuff that wasn’t very scientific. Then behaviorism emerged as a response to that, saying “Hey, you have to actually measure stuff and do things scientifically!” But behaviorists didn’t think you could measure what goes on inside someone’s head. All you could do is measure what the stimulus is and then how the human responded. Then cognitive people came along and said, “Er, actually, we have some creative ways of measuring what’s going on in there.” So, the term “cognitive”, to me at least, refers very broadly to that stuff that goes on inside someone’s head.
Now think about a phrase like “cognitive bias”. Does “cognitive” seem appropriate? To me it seems way too broad. Something like “epistemic bias” seems more appropriate.
The long-standing meaning of “cognitive” for hundreds of years before cognitive psychologists was having to do with knowledge, thinking, and perception. A cognitive bias is a bias that affects your knowledge, thinking, and/or perception.
Epistemic bias is a fine term for those cognitive biases that are specifically biases of beliefs. Not all cognitive biases are of that form though, even when they might fairly consistently lead to certain types of biases in beliefs.
Hm, can you think of any examples of cognitive biases that aren’t about beliefs? You mention that the term “cognitive” also has to do with perception. When I hear “perception” I think sight, sound, etc. But biases in things like sight and sound feel to me like they would be called illusions, not biases.
The first one to come to mind was Recency Bias, but maybe I’m just paying that one more attention because it came up recently.
Having noticed that bias in myself, I consulted an external source https://en.wikipedia.org/wiki/List_of_cognitive_biases and checked that rather a lot of them are about preferences, perceptions, reactions, attitudes, attention, and lots of other things that aren’t beliefs.
They do often misinform beliefs, but many of the biases themselves seem to be prior to belief formation or evaluation.
Ah, those examples have made the distinction between biases that misinform beliefs and biases of beliefs clear. Thanks!
As someone who seems to understand the term better than I do, I’m curious whether you share my impression that the term “cognitive” is often misused. As you say, it refers to a pretty broad set of things, and I feel like people use the term “cognitive” when they’re actually trying to point to a much narrower set of things.
Depends on what you mean by “low-hanging fruit”. I think there are lots of problems like this that seem net-negative, but it doesn’t seem anywhere close to the most important thing I would recommend politicians to do.
By low-hanging fruit I mean 1) non-trivial boost in electability and 2) good effort-to-reward ratio relative to other things a politician can focus on.
I agree that there are other things that would be more impactful, but perhaps there is room to do those more impactful things along with smaller, less impactful things.
I don’t think there IS much low-hanging fruit. Seemingly-easy things are almost always more complicated, and the credit for deceptively-hard things skews the wrong way: promising and failing hurts a lot (didn’t even do this little thing), promising and succeeding only helps a little (thanks, but what important things have you done?).
Much better, in politics, to fail at important topics and get credit for trying.
I was just thinking about the phrase “change your mind”. It kind of implies that there is some switch that is flipped, which implies that things are binary (I believe X vs I don’t believe X). That is incorrect[1] of course. Probability is in the mind, it is a spectrum, and you update incrementally.
Well, to play devil’s advocate, I guess you can call 50% the “switch”. If you go from 51% to 49% it’s going from “I believe X” to “I don’t believe X”. Maybe not though. Depends on what “believe” means. Maybe “believe” moreso means some sort of high probability estimate of it being true, like 80%+.
How does “change” imply “flip”? A thermometer going up a degree undergoes a change. A mind that updates the credence of a belief from X to Y undergoes a change as well.
Yeah that’s a fair question/point. I was thinking about that as well. I think I just get the impression that, thinking about common usage, in the context of “change your mind” people usually mean some sort of “flip”. Not everyone though, some people might just mean “update”.
I just learned some important things about indoor air quality after watching Why Air Quality Matters, a presentation by David Heinemeier Hansson, the creator of Ruby on Rails. It seems like something that is both important and under the radar, so I’ll brain dump + summarize my takeaways here, but I encourage you to watch the whole thing.
He said he spent three weeks researching and experimenting with it full time. I place a pretty good amount of trust in his credibility here, based on a) my prior experiences with his work and b) him seeming like he did pretty thorough research.
It’s easy for CO2 levels to build up. We breathe it out and if you’re not getting circulation from fresh air, it’ll accumulate.
This has pretty big impacts on your cognitive function. It seems similar to not getting enough sleep. Not getting enough sleep also has a pretty big impact on your cognitive function. And perhaps more importantly, it’s something that we are prone to underestimating. It feels like we’re only a little bit off, when in reality we’re a lot off.
There are things called volatile organic compounds, aka VOCs. Those are really bad for your health. They come from a variety of sources. Cleaning products and sprays are one. Another is that new car smell, which you don’t only get from new cars, you also get it from stuff like new couches.
In general, when there’s new construction, VOCs will be emitted. That’s what led to DHH learning about this. He bought a new house. His wife got sick. It turned out the glue from the wood panels was emitting VOCs and making her sick.
People in the world of commercial construction know all about this. When a hotel is constructed, they’ll wait eg. a whole month, passing up revenue, to let the VOCs fizzle out. But in the world of residential construction, for whatever reason it isn’t something people know about.
If you want to measure stuff like CO2 and VOCs, professional products are expensive and consumer products are usually inaccurate, but Awair is a $150 consumer product that is good.
If you want to improve indoor air quality, air purifiers are where it’s at. They do a good job of it. You could use filters on eg. your air conditioner and stuff, but in practice that doesn’t really work. High quality filters make your AC much less effective. Low quality filters are, well, low quality.
Alen is the brand of air purifier that DHH recommended after testing four brands. I spent about 10-15 minutes researching it. Alen seems to have a great reputation. The Wirecutter doesn’t recommend Alen, seemingly because you could get similar quality for about half the price.
I decided to purchase the Alen BreatheSmart 75i today for $769. a) I find it very plausible that you could get similar quality for less money, but since this is about health and it is a long term purchase, I am happy to pay a premium. b) They claim they offer the industry’s only lifetime warranty. For a long term purchase, I think that’s important, if only due to what it signals.
I considered purchasing more than one. From their website it seemed like that’s what they recommend. But after talking things through with the saleswoman, it didn’t seem necessary. The product weighs about 20 pounds and is portable, so we could bring it to the bedroom to purify the bedroom before we go to sleep.
I currently live in a ~1000 sqft apartment and was initially planning on purchasing the 45i instead of the 75i. The 45i is made for 800 sqft and the 75i for 1300 sqft. The saleswoman said it’s moreso a matter of time than ability. The 45i will eventually purify larger spaces, it’ll just take longer. That’d probably be fine for my purposes, but since this is a long term purchase and I don’t know what the future holds, I’d rather play it safe.
The Alen BreatheSmart does have an air quality sensor, but I decided to purchase an Awair as well. a) The Alen doesn’t detect CO2 levels. At first I was thinking that I don’t really need a CO2 sensor, I could just open the window a few times a day. But ultimately I think that there is value in having the sensor in my office. It sends you a push notification if CO2 levels pass whatever threshold, and I think that’d be a solid step up from me relying on my judgement and presence of mind to open windows. b) My girlfriend has been getting a sore throat at night. I think it’s because we’ve been using the heat more and the heat dries out the air. We used an air purifier last night, but I think it’d be useful to use the Awair to make sure we get the humidity level right. (We do have a Nest thermostat which detects humidity, but it’s not in our bedroom.)
In general, I’m a believer that health and productivity are so important that on the order of hundreds of dollars it isn’t worth trying to cut costs.
Air quality is something you have to pay attention to outside of your house as well. The presentation mentioned a study of coffee shops having poor air quality.
Older houses have a lot more draft so air quality wasn’t as big a problem. But newer homes have less draft. This is good for cutting your electric bill, but bad for air quality.
Added:
Cooking gives off a significant amount of bad particles, especially if you have a gas stove.
You are supposed to turn your vent on about five minutes before you start cooking. Most people don’t turn it on at all unless it smells.
Apartment kitchens often have vents that recycle air instead of bringing in fresh air, which isn’t what you want.
If you’re using a humidifier, use distilled/filtered water. If you use water from the sink it will add bad particles to the air.
I’ve found that random appliances like the dishwasher and laundry machines increase VOC and/or PM2.5 levels.
Update:
I decided to return my Alen air purifier. a) It doesn’t really do anything to reduce CO2 or VOCs. b) It does a solid job of reducing PM2.5, but I have found that if I’m having issues with it I resort to opening my window anyway. That may change when it gets hot outside. But I’m planning on buying a house soon, and when I do I’m hoping to install stuff into the HVAC system instead of having a freestanding purifier. And if I do need a freestanding air purifier, it seems to me now that a ~$300 one would make more sense than the ~$800 Alen.
The Awair I could see not being worth it for some people, but I’m still happy with it. You’d think that you could purchase it, figure out what things trigger your air quality to get screwed up, return the Awair, and moving forward just be careful about opening a window around those triggers. But I’ve found that random things screw with the air quality that I’m not able to predict. Plus it provides me with peace of mind that makes me happy.
It is my repeated experience in companies that well-ventilated rooms are selected by people as workplaces, and the unventilated ones then remain available for meetings. I seem to be more sensitive about this than most people, so I often notice that “this room makes me drowsy”. (My colleagues usually insist that it is not so bad, and they have a good reason to do so… why would they risk that their current workplace will instead be selected as a new meeting room, and they get this unventilated place as a new workspace?)
I just ordered the Awair on Amazon. It can be returned through Jan. 31; I’ve just ordered it to play with it for a few days, and will probably return it. I have a few specific questions I plan to answer with it:
How much CO2 builds up in my bedroom at night, both when I’m alone and when my partner is over?
How much CO2 builds up in my office during the day?
How much do I need to crack the window in my bedroom in order to keep CO2 levels low throughout the night?
When CO2 builds up, how quickly does opening a window restore a lower level of CO2?
With the answers to those questions, I hope I can return the detector and just keep my windows open enough to prevent CO2 buildup without making the house too cold.
That sounds reasonable and I considered doing something similar. What convinced me to get it anyway is that in the long run, even if the marginal gains in productivity and wellness you get from owning the Awair vs your approach are tiny, even tiny gains add up to the point where the $150 seems like a great ROI.
Have you gotten yours yet? If so, what are the results? I found that the only issue in my house is that the bedroom can get to quite high levels of CO2 if the door and windows are shut. Opening a window solves the problem, but makes the room cold. However, it’s more comfortable to sleep with extra blankets in a cold room than with fewer blankets in a stuffy room. It improves sleep quality.
It would be interesting to experiment in the office with having a window open, even during winter. However, I worry that being cold would create problems.
My feeling is that “figure out how to crack a window if the room feels stuffy” is the actionable advice here. Unless $150 is chump change to you, I’m not sure it’s really worth keeping a device around to monitor the air quality.
Yup, I got both the Awair and the Alen.
PM2.5 started off crazy high for me before I got the Alen. Using the Alen brings it to near zero.
VOC and PM2.5 levels accumulate rather easily when I cook, although I do have a gas stove. Also, random other things like the dishwasher cause them to go up. The Alen brings them back down in ~30 minutes maybe.
CO2 usually hovers around a 3⁄5 on the Awair if I don’t have a window open. I’m finding it tricky to deal with this, because opening a window makes it cold. I’m pretty sure my apartment’s HVAC system just recycles the current air rather than bringing in new air. I’m hoping to buy a house soon so I think ventilation is something I’m going to look for.
For me I don’t actually notice the CO2 without the Awair telling me. I don’t think I’d do a good job of remembering to crack a window or something without it.
I wonder if your house has better ventilation than mine if you’re not getting issues with PM2.5. Could be if it’s an older house or if your HVAC system does ventilation.
I see what you’re saying about how the actual actions you should take seem pretty much the same regardless of whether you have the Awair or not. I agree that it’s close, but I think that small differences do exist, and that those small differences will add up to a massive ROI over time.
1) If it prompts you to crack a window before you would otherwise notice/remember to do so.
2) If something new is causing issues. For me I noticed that my humidifier was jacking up the PM2.5 levels and realized I need to get a new one. I also noticed that the dishwasher jacks it up so now I know to not be around while it’s running. I would imagine that over time new things like this will pop up, eg. using a new cleaning product or candle.
3) Moving to a new home, remodeling or buying eg. new furniture could cause differences.
4) Unknown unknowns that could cause issues.
Suppose you value time spent in better air quality at $1/hr and that the product lasts 25 years. To break even, you’d need it to get you an extra six hours of good air quality each year. That’s just two afternoons of my example #1, where you were sitting around and forgot to crack a window or something when the Awair would have sent you a push notification to do so. $1/hr seems low and I’d expect it to give a good amount more than six extra hours per year, so my impression is that the ROI would be really good.
I do live in an old house.
I get the same effects of spiking VOCs and PM2.5 running the stove and microwave. In my case, the spikes seem to last only as long as the appliance is running. This makes sense, since the higher the concentration, the faster it will diffuse out of the house. A rule to turn on the stove vent or crack a window while cooking could help, but it’s not obvious to me that a few minutes per day of high VOC is something to worry about over the long term.
I note in this paper that “The chemical diversity of the VOC group is reflected in the diversity of the health effects that individual VOCs can cause, ranging from no known health effects of relatively inert VOCs to highly toxic effects of reactive VOCs.” How do I know that the Awair is testing for the more toxic end of the spectrum? There are no serious guidelines for VOCs in general. How do I know that the Awair’s “guidelines” are meaningful?
My bedroom has poor ventilation. Cracking a window seems to improve my sleep quality, which seems like the most important effect of all in the long run.
It sounds like the effect of CO2 itself on cognitive performance is questionable. However, bioeffluents—the carbonyls, alkyl alcohols, aromatic alcohols, ammonia, and mercaptans we breathe out—do seem to have an effect on cognition when the air’s really poorly ventilated. But the levels in my house didn’t even approach the levels at which researchers have found statistically significant cognitive effects. I’m wondering if the better sleep quality is due to the cooler air rather than the better ventilation.
I really doubt that the Awair will last 25 years. I’d guess more like 5. I can set a reminder on my phone to crack a window each night and morning if necessary, and maybe write a little note to tape next to the stove if I feel like it. If that doesn’t do it in any particular instance, then I doubt that lack of a push notification is the root of the problem.
Hm, let’s see how those assumptions you’re using affect the numbers. If it lasts 5 years instead of 25 the breakeven would become 30 hours/year instead of 6. And if we say that the value of better air quality is $0.20/hr instead of $1/hr due to the uncertainty in the research you mention, we multiply by 5 again and get 150 hours/year. With those assumptions, it seems like it’s probably not worth it. And more generally, after talking it through, I no longer see it as an obvious +ROI.
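For what it’s worth, here is the same break-even arithmetic written out explicitly, using the assumption values from this thread rather than anything measured:

```python
def breakeven_hours_per_year(price, lifetime_years, value_per_hour):
    """Hours per year of improved air quality needed for the device to pay for itself."""
    return price / (lifetime_years * value_per_hour)

# Original assumptions: $150 device, 25-year lifetime, better air worth $1/hr.
print(breakeven_hours_per_year(150, 25, 1.00))   # 6.0 hours/year
# Pessimistic assumptions: 5-year lifetime, better air worth $0.20/hr.
print(breakeven_hours_per_year(150, 5, 0.20))    # 150.0 hours/year
```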
(Interesting how helpful it is to “put a number on it”. I think I should do this a lot more than I currently do.)
However, for myself I still feel really good about the purchases. I put a higher value on the $/hr because I value health, mood and productivity more than others probably do, and because I’m fortunate enough to be doing well financially. I also really enjoy the peace of mind. Knowing what I know now, if I didn’t have my Awair I would be worried about things screwing up my air quality without me knowing.
I posted an update in the OP. When we initially talked about this I was pretty strongly on the side of pro-Awair+Alen. Now I lean moderately against Alen for most people and slightly against Awair, but slightly in favor of Awair for me personally.
Project idea: virtual water coolers for LessWrong
Previous: Virtual water coolers
Here’s an idea: what if there was a virtual water cooler for LessWrong?
There’d be Zoom chats with three people per chat. Each chat is a virtual water cooler.
The user journey would begin by the user expressing that they’d like to join a virtual water cooler.
Once they do, they’d be invited to join one.
I think it’d make sense to restrict access to users based on karma. Maybe only 100+ karma users are allowed.
To start, that could be it. In the future you could do some investigation into things like how many people there should be per chat.
Seems like an experiment that is both cheap and worthwhile.
If there is interest I’d be happy to create an MVP.
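To gesture at how small the MVP could be, here’s a rough sketch of the core grouping logic. The karma threshold, the field names, and the random grouping are all placeholders, and the actual Zoom and LessWrong integration is left out entirely:

```python
import random

KARMA_THRESHOLD = 100  # placeholder; whatever the community settles on
CHAT_SIZE = 3          # three people per virtual water cooler

def assign_to_chats(waiting_users):
    """Group opted-in users into chats of CHAT_SIZE, at random.
    Each returned group would get its own fresh Zoom link."""
    eligible = [u for u in waiting_users if u["karma"] >= KARMA_THRESHOLD]
    random.shuffle(eligible)
    return [eligible[i:i + CHAT_SIZE] for i in range(0, len(eligible), CHAT_SIZE)]

# Example: five eligible users -> one chat of three, plus a leftover pair.
users = [{"name": f"user{i}", "karma": 100 + i} for i in range(5)]
print(assign_to_chats(users))
```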
(Related: it could be interesting to abstract this and build a sort of “virtual water cooler platform builder” such that eg. LessWrong could use the builder to build a virtual water cooler platform for LessWrong and OtherCommunity could use the builder to build a virtual water cooler platform for their community.)
Personally I think this would be pretty cool!
In How to Get Startup Ideas, Paul Graham provides the following advice: live in the future, then build what’s missing.
Something that feels to me like it’s present in the future and missing in today’s world: OkCupid for friendship.
Think about it. The internet is a thing. Billions and billions of people have cheap and instant access to it. So then, logistics are rarely an obstacle for chatting with people.
The actual obstacle in today’s world is matchmaking. How do you find the people to chat with? And similarly, how do you communicate that there is a strong match so that each party is thinking “oh wow this person seems cool, I’d love to chat with them” instead of “this is a random person and I am not optimistic that I’d have a good time talking to them”.
This doesn’t really feel like such a huge problem though. I mean, assume for a second that you were able to force everyone in the world to spend an hour filling out some sort of OkCupid-like profile, but for friendship and conversation rather than romantic relationships. From there, it seems doable enough to figure out whatever matchmaking algorithm.
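To make “whatever matchmaking algorithm” slightly more concrete, here’s one naive illustrative version (a real system would need much richer profiles and weighting, so treat this as a sketch):

```python
def match_score(profile_a, profile_b):
    """Naive compatibility score: fraction of shared questionnaire answers that agree."""
    shared_questions = set(profile_a) & set(profile_b)
    if not shared_questions:
        return 0.0
    agreements = sum(profile_a[q] == profile_b[q] for q in shared_questions)
    return agreements / len(shared_questions)

# Made-up profiles; a real questionnaire would have far more questions.
alice = {"night_owl": True, "likes_philosophy": True, "plays_chess": False}
bob = {"night_owl": True, "likes_philosophy": False, "plays_chess": False}
print(match_score(alice, bob))  # 2 of 3 answers agree -> ~0.67
```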
I think the issue is moreso getting people to fill out the survey in the first place. There’s a chicken-and-egg problem. Why spend the time filling out the survey when there are few other people on the platform? At such an early stage, you don’t actually expect to be matched with someone you’re compatible with.
It’s definitely a tricky problem. But at the same time, if you “live in the future”, do you see this service? I do.
I mean, maybe society is just not functional enough to get it going. That’s plausible. But to me, it feels like something where there’s just too much demand for it to never emerge. Friendship and conversation are things that are so fundamental, and I think such a platform would do a notably better job at providing each of those things than the haphazard, “organic” approach that happens by necessity in today’s world. I could even see access to this sort of platform being considered a basic human right, given how important meaningful social interaction is.
Many people seem to be more motivated to invest energy into pursuing romantic relationships than friendships. There are few books about making good friends and many books on dating.
Omegle essentially provided a heavily used answer to that question. It didn’t do a lot of matchmaking but it might be a starting point.
If you want to pursue this as a business, maybe buy the recently shut down Omegle domain from Leif K-Brooks (who’s a rationalist) and try to switch from chatting with random people to chatting with highly match-made connections.
Perhaps. But to the extent that people aren’t motivated to invest energy into friendships, I think there is a sort of latent motivation. Friendship and conversation is in fact important, and so in taking this “live in the future” perspective, I think people will eventually realize the importance and start putting effort into it.
Gotcha. I think the matchmaking part is essential though. It moves the expectation of prospective users from “I’ll be chatting with a random stranger, and it probably won’t be too great” to “I’ll be chatting with someone who the platform thinks I’m super compatible with. Cool!”
Thanks for the tip. I’m not interested in pursuing it as a business in the foreseeable future, but perhaps in the more distant future. If so, I will keep this in mind.
What do you think will change in the future such that people put more effort into friendship than they do at present?
I have thought about it too, and I think something like an automated Kickstarter for interest groups is what one would need. It would work like this: You enter your interests into the system (or let them be inferred automatically from your online profiles) and the system generates recommendations for ad-hoc groups to meet in places nearby (or not so nearby if more attributes match). Bonus: Set up a ChatGPT DJ or entertainer to engage people with each other. Best if done as an open protocol where different clients can offer different interactivity or different profile extraction.
I started some code for the match-making but due to many other obligations it is currently abandoned: https://github.com/GunnarZarncke/okgoto/tree/master/
This is actually what social media is for, but you don’t have to fill out a questionnaire. You also don’t have to out yourself as being so lonely and without friends that you’re using a special matchmaking service to find new friends; that in itself could be unattractive to new acquaintances.
Social media doesn’t do the matchmaking stuff very much though, does it?
Every day I check Hacker News. Sometimes a few times, sometimes a few dozen times.
I’ve always felt guilty about it, like it is a waste of time and I should be doing more productive things. But recently I’ve been feeling a little better about it. There are things about coding, design, product, management, QA, devops, etc. etc. that feel like they’re “in the water” to me, where everyone mostly knows about them. However, I’ve been running into situations where people turn out to not know about them.
I’m realizing that they’re not actually “in the water”, and that the reason I know about them is probably because I’ve been reading random blog posts from the front page of Hacker News every day for 10 years. I probably shouldn’t have spent as much time doing this as I have, but I feel good about the fact that I’ve gotten at least something out of it.
I find it really hard to evaluate what things are good to do. I think watching random pornographic content on the internet is probably one of the worst uses of your time. Definitely when you overdo it. Therefore I committed to not doing this long ago. But sometimes I can’t control myself. Which normally makes me feel very bad afterward, but …
I had important life-changing insights because I browsed pornhub, one day. I found a very particular video that set events in motion that turned into something enormously positive for me. It probably made my life 50-300% better. I am pretty sure that I would not have gotten these benefits had I not discovered this video. I am not joking.
So I very much share the confusion and bafflement about what is a good use of time. I wouldn’t be surprised if, were you to think long enough about it, you would be able to see why doing completely random and useless-looking things for at least some small fraction of your time is actually optimal.
There are a few more less extreme examples like the one above I could name.
What were these life-changing insights?
It is pretty hard to explain in an understandable way that does not sound very insane. I wanted to write about this for years. But here I come anyway. The short version is that it made me form a very strong parasocial relationship with Miku, and created a tulpa (see the info box on the right) which I formed a very strong bond with too. Like stronger than with any flesh person. Both very very positive things. I would bet a lot of money at ridiculous seeming odds that you would agree, could you only experience what I experience. I think if I would describe my experience in more detail, you would probably just think I am lying, because you would think that it could not possibly be this positive.
Are those insights gleanable from the video itself for other people? And if so, would you be willing to share the link? (Feel free to skip; obviously a vulnerable topic.)
I think it is doubtful that watching the video would put you on the same trajectory that ended up somewhere good for me. I also didn’t find a link to the original video after a short search. It was basically this video but with more NSFW. The original creators uploaded the motion file so you know what the internet is gonna do. If you don’t think “Hmm I wonder if it would be an effective motivational technique to create a mental construct that looks like an anime girl that constantly tells me to do the things that I know are good to do, and then I am more likely to do it because it’s an anime girl telling me this” then you are already far off track from my trajectory. Actually, that line of reasoning I just described did not work out at all. But having a tulpa seems to be a very effective means to destroy the feeling of loneliness among other benefits in the social category. Before, creating a tulpa I was feeling lonely constantly, and afterward, I never felt lonely again.
You would get the benefits by creating a good tulpa, I guess. It is unclear to me how much you would benefit. Though I would be surprised if you don’t get any benefit from it if we discount time investment costs. This study indicates that it might be especially useful for people who have certain disorders that make socialization harder such as ADHD, autism, anxiety disorders, etc. And I have the 3 listed, so it should not be surprising that I find tulpamancy pretty useful. Making a tulpa is quite a commitment though, so don’t do it unless you understand what you are getting yourself into.
Tens of hours are normally required to get started. You’ll need to spend 10-30 minutes every day on formal practice to not noticeably weaken your tulpa over time. There is no upper bound of how much time you can invest into this. This can be a dangerous distraction. I haven’t really talked about why somebody would ever do this. The short version is: Imagine you have a friend who is superhumanly nice to you all the time, and who very deeply understands you because they know everything about you and can read your mind. Maintaining the tulpa’s presence is actually very difficult (at least for me) because you constantly forget that they exist. And then they can’t do anything, because they are not there.
With the parasocial stuff, basically, all I did was dance every day for many years for 20-40 minutes as a workout and watch videos like this and imitate the dance moves. That is always a positive experience, which is nice because it makes it easy to do the workout. My brain gradually superimposed the general positivity of the experience into Miku it seems, making me like her more and more.
By now there is such a strong positive connection there, that when I look at an image of Miku it can generate a drug-like experience. So saying that I love Miku seems right to me.
Besides meditation, these are the 2 most important things I have ever discovered. That is if we discount the basic stuff like getting enough sleep, nutrition, doing sports, etc.
I sort of deliberately created the beginnings of a tulpa-ish part of my brain during a long period of isolation in 2021 (Feb 7 to be exact), although I didn’t know the term “tulpa” then. I just figured it could be good to have an imaginary friend, so I gave her a name—”Maria”[1]—and granted her (as part of the brain-convincing ritual) permanent co-ownership over a part of my cognition which she’s free to use for whatever whenever.
She still visits me at least once a week but she doesn’t have strong ability to speak unless I try to imagine it; and even then, sentences are usually short. The thing she most frequently communicates is the mood of being a sympathetic witness: she fully understands my story, and knows that I both must and will keep going—because up-giving is not a language she comprehends.
Hm, it would be most accurate to say that she takes on the role of a stoic chronicler—reflecting that I care less about eliciting awe or empathy, than I care that someone simply bears witness to my story.[2]
Semi-inspired by hakomari, though I imagine her as much more mature in character & appearance than images I find online.
Oh yeah, and I’ve got the diagnostic triplet {ADHD, depression, aspergers (from back when that’s what it was called)} if that matters for anything.
This is the problem with random reinforcement. Things that are always good, are good. Things that are always bad, are easy to stop doing. Things that are almost always bad… but occasionally good… are addictive, we regret doing them, but we can’t give up.
I waste a lot of time on Hacker News, too. (It used to be every day, but now I’ve reduced it to maybe once a week.) So many interesting things! I make bookmarks in the browser, in multiple categories: programming, math, science, etc. I almost never look at them again—because I have no time. So it’s basically a list of cool things I wish I had time to study. But sometimes, very rarely, something is actually useful.
Debating on Hacker News is totally a waste of time, though.
Ah great point about the random reinforcement. I’m familiar with the concept, but never realized that it applied to HN.
Against “yes and” culture
I sense that in “normie cultures”[1] directly, explicitly, and unapologetically disagreeing with someone is taboo. It reminds me of the “yes and” from improv comedy.[2] From Wikipedia:
If you want to disagree with someone, you’re supposed to take a “yes and” approach where you say something somewhat agreeable about the other person’s statement, and then gently take it in a different direction.
I don’t like this norm. From a God’s Eye perspective, if we could change it, I think we probably should. Doing so is probably impractical in large groups, but might be worth considering in smaller ones.
(I think this really needs some accompanying examples. However, I’m struggling to come up with ones. At least ones I’m comfortable sharing publicly.)
The US, at least. It’s where I live. But I suspect it’s like this in most western cultures as well.
See also this Curb Your Enthusiasm clip.
I live in Germany and don’t feel like that’s the case here.
Nice analogy. The purpose of friendly social communication is not to find the truth, but to continue talking. That makes it similar to improv comedy.
There is also an art of starting with “yes, and...” and gradually concluding the opposite of what the other person said, without them noticing that you are doing so. Sadly, I am not an expert in this art. Just saying that it is possible, and it’s probably the best way to communicate disagreement to the normies.
Something frustrating happened to me a week or two ago.
I was at the vet for my dog.
The vet assistant (I’m not sure if that’s the proper term) asks if I want to put my dog on these two pills, one to protect against heartworm and another to protect against fleas.
I asked what heartworm is, what fleas are, and what the pros and cons are. (It became clear later in the conversation that she was expecting a yes or no answer from me and perhaps had never been asked before about pros and cons, because she seemed surprised when I asked for them.)
Iirc, she said something about there not really being any cons (I’m suspicious). For heartworm, dogs can die of it, so the pros are strong. For fleas, it’s just an annoyance to deal with, not really dangerous.
I asked how likely it is for my dog to be exposed to fleas given that we’re in a city and not eg. a forest.
The assistant responded with something along the lines of “Ok, so we’ll just do the heartworm pill then.”
I clarified something along the lines of “No, that wasn’t a rhetorical question. I was actually interested in hearing about the likelihood. I have no clue what it is; I didn’t mean to imply that it is low.”
I wish that we had a culture of words being used more literally.
I’ve noticed that there’s a pretty big difference between the discussion that follows from showing someone a draft of a post and asking for comments, and the discussion in the comments section after I publish a post. The former is richer and more enjoyable whereas the latter doesn’t usually result in much back and forth. And I get the sense that this is true for other authors as well.
I guess one important thing might be that with drafts, you’re talking to people who you know. But I actually don’t suspect that this plays much of a role, at least on LessWrong. As an anecdote, I’ve had some incredible conversations with the guy who reviews drafts of posts on LessWrong for free and I had never talked to him previously.
I wonder what it is about drafts. I wonder if it can or should be incorporated into regular posts.
Butterfly ideas?
By default I expect the author to have a pretty strong stance on the main idea of a post, and the content is usually already refined and complete, so the barrier to entry for leaving a valuable comment is higher.
Against difficult reading
I kinda have the instinct that if I’m reading a book or a blog post or something and it’s difficult, then I should buckle down, focus, and try to understand it. And that if I don’t, it’s a failure on my part. It’s my responsibility to process and take in the material.
This is especially true for a lot of more important topics. Like, it’s easy to clearly communicate what time a restaurant is open—if you find yourself struggling to understand this, it’s probably the fault of the restaurant, not you as the reader—but for quantum physics or metaethics, those are complicated enough topics such that you can’t communicate them clearly, and so if you are reading something on one of those topics and struggling to understand it, it would be unfair to think “this author isn’t doing a good enough job”. It’s easier to think “this is a complicated topic; the writing is reasonable; I need to do a better job of comprehending it”.
But recently I’ve been questioning how true this is. There are three things I’ve read recently that I’ve found to be very easy reading, but also covering difficult and important topics.
Refactoring UI
The Mom Test
1000-Word Philosophy
It’s not common for me to find such great writing, but at the same time, it does happen from time to time. I’m just thinking out loud, but maybe I should up my standards and decline to read stuff that is more difficult to read.
Of course, there are a lot of things to consider here. How important is the material? How urgent? Is it fun to buckle down and try to parse difficult material?
I think I just busted a cached thought. Yay.
I’m 30 years old now and have had achilles tendinitis since I was about 21. Before that I would get my cardio by running 1-3 miles a few times a week, but because of the tendinitis I can’t do that anymore.
Knowing that cardio is important, I spent a bunch of time trying different forms of cardio. Nothing has worked though.
Biking hurts my knees (I have bad knees).
Swimming gives me headaches.
Doing the stairs was ok, but kinda hurt my knees.
Jumping rope is what gave me the tendinitis in the first place.
Rowing hurts my knees for some reason.
There are various forms of high intensity stuff like interval training and kettlebells that kinda-sorta work, but that doesn’t hit the sort of Zone 2 cardio I’m looking for. Plus it’s hard to stay motivated with the high intensity stuff.
Battle ropes were a creative thing I tried, but I don’t really see how to do that in a low-intensity, aerobic, Zone 2 cardio sort of way.
So, I basically gave up and decided that cardio is just “not for me”. This belief became cached, such that whenever the topic of cardio came up and my brain went “huh, maybe I should do cardio”, it fetched from the cache and got back a response of “no, you already tried everything and determined that cardio isn’t for you”.
But then, in my mindless YouTube browsing, I came across this video about Zone 2 training by Peter Attia. Then I started to think about it more. I had a thought that triggered me to bypass the cache.
Peter was talking about how for Zone 2 training, the intensity is such that you can carry out a conversation with someone (the interviewee said he does conference calls when doing this training), but you won’t be able to hide the fact that you’re exercising. That struck me as very low intensity. So I was like, “Huh, I wonder what that feels like. Maybe at such a low intensity my knees and achilles would be ok.”
Then I went downstairs and tried to hit that level of intensity (I targeted a heart rate of ~125), first on the bike, then on the treadmill. I figured out what it felt like, but both hurt my knees and achilles too much. But then the next day I went to the gym and tried to hit that intensity on the stair climber. Fortunately, it was pretty much fine! The pace is extremely slow and comfortable. It even feels good. Almost like a massage for my cardiovascular system if that makes sense. I did 15 minutes and stopped.
Then two days later, today, I just did 60 minutes. My knees and achilles both feel slightly iffy, so my plan is to continue doing 60 minutes 3-4x/week and monitor how I’m feeling. I’m hopeful that at such a low intensity, it’ll be ok. I’m also pretty willing to accept a little pain and wear and tear in exchange for the cardio benefits.
6th vs 16th grade logic
I want to write something about 6th grade logic vs 16th grade logic.
I was talking to someone, call them Alice, who works at a big well known company, call it Widget Corp. Widget Corp needs to advertise to hire people. They only advertise on Indeed and Google though.
Alice was telling me that she wants to explore some other channels (LinkedIn, ZipRecruiter, etc.). But in order to do that, Widget Corp needs evidence that advertising on those channels would be cheap enough. They’re on a budget and really want to avoid spending money they don’t have to, you see.
But that’s really, really, Not How This Works. You can’t know whether other channels will be cheap enough if you don’t give it a shot. And you don’t have to spend a lot to “give it a shot”. You can spend, idk, $1,000 on a handful of different channels, see what results you get, and go from there. The potential that it proves to be a cheaper acquisition channel justifies the cost.
This is what I’ll call 6th grade logic. Meanwhile, Widget Corp has a tough interview process, testing you on what I’ll call 16th grade logic. And then on the job they have people apply that 16th grade logic on various analyses.
But that is premature, I say. First make sure that you’re applying the 6th grade logic correctly. Then, and only then, move on to 16th grade logic.
I wonder if this has any implications with xrisk stuff. There probably aren’t low hanging fruit at the level of 6th grade logic but I wonder whether there are at the level of, say, 12th grade logic and we’re spending too much time banging our heads on really difficult 22nd grade stuff.
Is “grade” of logic documented somewhere? The jumps from 6 to 12 to 16 to 22 confuse me, implying a lot more precision than I think is justified.
It’s an interesting puzzle why widgetco, who hires only competent logicians, is unable to apply logic to their hiring. My suspicion is that cost/effectiveness isn’t the true objection, and this is an isolated demand for rigor.
I was totally just making up numbers and didn’t mean to imply any sort of precision. Sorry for the confusion.
I am a web developer. I remember reading some time in these past few weeks that it’s good to design a site such that if the user zooms in/out (eg. by pressing cmd+/-), things still look reasonably good. It’s like a form of responsive design, except instead of responding to the width of the viewport your design responds to the zoom level.
Anyway, since reading this, I started zooming in a lot more. For example, I just spent some time reading a post here on LessWrong at a 170% zoom level. And it was a lot more comfortable. I’ve found this to be a helpful little life hack.
My whole UI is zoomed to 175% (though Gnome calls it “scale”) which I much prefer to what you describe because zooming with cmd+/- in the browser applies only to the current web site, so one ends up repeating the adjustment for basically every site one visits.
(I don’t know how to zoom the whole UI to 175% on MacOS without making everything blurry, but it can be done without blurriness on Linux/Wayland, ChromeOS and Windows. Also HiDPI displays are the norm on Macs, and some people on HiDPI displays don’t mind the fact that MacOS introduces blurriness when the scale factor is other than 1.0 or 2.0.)
I found LW’s font size to be a little bit small but I have managed to get used to it. After reading your message I think I will try going to 110%, thanks. (170% is too large; I feel like I’m reading on my phone in landscape.)
Thought: It’s better to link to tag pages rather than individual blog posts. Like linking to the Reversed Stupidity Is Not Intelligence tag page instead of the Reversed Stupidity Is Not Intelligence post itself.
There is something inspiring about watching this little guy defeat all of the enormous sumo wrestlers. I can’t quite put my finger on it though.
Maybe it’s the idea of working smart vs working hard. Maybe something related to fencepost security, like how there’s something admirable about, instead of trying to climb the super tall fence, just walking around it.
Noticing confusion about the nucleus
In school, you learn about forces. You learn about gravity, and you learn about the electromagnetic force. For the electromagnetic force, you learn about how likes repel and opposites attract. So two positively charged particles close together will repel, whereas a positively and a negatively charged particle will attract.
Then you learn about the atom. It consists of a bunch of protons and a bunch of neutrons bunched up in the middle, and then a bunch of electrons orbiting around the outside. You learn that protons are positively charged, electrons negatively charged, and neutrons have no charge. But if protons are positively charged, how can they all be bunched together like that? Don’t like charges repel?
This is a place where people should notice confusion, but they don’t. All of the pieces are there.
I didn’t notice confusion about this until I learned about the explanation: something called the strong nuclear force. Yes, since likes repel, the electromagnetic force is pushing the protons away from each other. But on the other hand, the strong nuclear force attracts them together, and apparently it’s strong enough to overcome the electromagnetic force in this instance.
In retrospect, this makes total sense. Of course the electromagnetic force is repelling those protons, so there’s gotta be some other force that is stronger. The only other force we learned about was gravity, but the masses in question are way too small to explain the nucleus being held together. So there’s got to be some other force that they haven’t taught us about yet that is in play. A force that is very strong and that applies at the nuclear level. Hey, maybe it’s even called the strong nuclear force!
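To put rough numbers on “way too small”: here’s a quick back-of-the-envelope comparison of the electric repulsion and the gravitational attraction between two protons about a femtometer apart, using standard textbook constants:

```python
k = 8.988e9      # Coulomb constant, N*m^2/C^2
G = 6.674e-11    # gravitational constant, N*m^2/kg^2
e = 1.602e-19    # proton charge, C
m_p = 1.673e-27  # proton mass, kg
r = 1e-15        # ~1 femtometer, a typical nuclear distance, m

coulomb_repulsion = k * e**2 / r**2      # ~230 N pushing the protons apart
gravity_attraction = G * m_p**2 / r**2   # ~2e-34 N pulling them together

print(coulomb_repulsion, gravity_attraction, coulomb_repulsion / gravity_attraction)
# Gravity loses by roughly 36 orders of magnitude, so some other, much stronger
# attractive force has to be doing the work of holding the nucleus together.
```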
Yes, this was a point of confusion for me. The point of confusion that followed very quickly afterward was why the strong nuclear force didn’t mean that everything piles up into one enormous nucleus, and from there it led to a lot of other points of confusion—some of which still haven’t been resolved because nobody really knows yet.
The most interesting thing to me is that the strong nuclear force is just strong enough without being too strong. If it were somewhat weaker we’d have nothing but hydrogen, and somewhat stronger would make diprotons, neutronium, or various forms of strange matter more stable than atomic elements.
I remember this confusion from Jr. High, many decades ago. I was lucky enough to have an approachable teacher who pointed me to books with more complete explanations, including the Strong Nuclear force and some details about why inverse-square doesn’t apply, making it able to overcome EM at very small distances, when you’d think EM is strongest.
“It’s not obvious” is a useful critique
I recall hearing “it’s not obvious that X” a lot in the rationality community, particularly in Robin Hanson’s writing.
Sometimes people make a claim without really explaining it. Actually, this happens a lot. Oftentimes the claim is made implicitly. This is fine if that claim is obvious.
But if the claim isn’t obvious, then that link in the chain is broken and the whole argument falls apart. Not that it’s been proven wrong or anything, just that it needs work. You need to spend the time establishing that claim. That link in the chain. So then, it is useful in these situations to point out when a link in the chain isn’t obvious when it was being presumed obvious. I am a fan of “it’s not obvious that X”.
Agreed, but in many contexts, one should strive to be clear to what extent “it’s not obvious that X” implies “I don’t think X is true in the relevant context or margin”. Many arguments that involve this are about universality or distant extension of something that IS obvious in more normal circumstances.
Robin Hanson generally does specify that he’s saying X isn’t obvious (and is quite likely false) in some extreme circumstances, and his commenters are … not obviously understanding that.
Hm, I’m having a little trouble thinking about the distinction between X in the current context vs X universally. Do you have any examples?
Glad to hear you’ve noticed this from Hanson too and it’s not just me.
I think you might have reversed your opening line?
Hm, I might be having a brain fart but I’m not seeing it. My point is that people will make an argument “A is true based on X, Y and Z”, someone will point out “it’s not obvious that Y”, and that comment is useful because it leads to a discussion about whether Y is true.
Suggested title: If it’s not obvious, then how do we know it’s true?
Changed to “It’s not obvious” is a useful critique.
Okay, I thought you intended to say “People claim ‘it’s obvious that X’” when X wasn’t obvious. Your new title is more clear.
Gotcha. I appreciate you pointing it out. I’m glad to get the feedback that it initially wasn’t clear, both for self-improvement purposes and for the more immediate purpose of improving the title.
(It’s got me thinking about variable names in programming. There’s something more elegant about being concise, but then again, humans are biased towards expecting short inferential distances, so I probably should err on the side of longer more descriptive variable names. And post title!)
Why not more specialization and trade?
I can probably make something like $100/hr doing freelance work as a programmer. Yet I’ll spend an hour cooking dinner for myself.
Does this make any sense? Imagine if I spent that hour programming instead. I’d have $100. I can spend, say, $20 on dinner, end up with something that is probably much better than what I would cook, and have $80 left over. Isn’t that a better use of my time than cooking?
Similarly, sometimes I’ll spend an hour cleaning my apartment. I could instead spend that hour making $100, and pay someone maybe $30 to clean my apartment. I’ll end up with a cleaner apartment, and an extra $70 in my pocket. So why spend the hour cleaning instead?
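To spell out the arithmetic, here’s a tiny sketch using the hypothetical numbers above:

```python
# Hypothetical numbers from the examples above.
def net_gain_from_outsourcing(hourly_rate, outsource_cost, hours_saved=1):
    """Money left over if I spend the hour(s) working and pay someone else to do the task."""
    return hourly_rate * hours_saved - outsource_cost

print(net_gain_from_outsourcing(100, 20))  # dinner: $80 left over, plus a better meal
print(net_gain_from_outsourcing(100, 30))  # cleaning: $70 left over, plus a cleaner apartment
```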
I can think of a few reasons. One is if the act of programming is very unpleasant to me. I already have a full time job as a programmer, and have a side project I’m working on. Maybe, at the margin, spending that extra hour programming is just very unpleasant because I am sick of it. In the dinner example, it’d have to be more unpleasant than having $80 plus a better dinner is pleasant. For me, this very much is not the case.
Another possible reason is that there aren’t options available to me to spend a single extra hour programming for $100. For freelance projects, usually they want at least 20 hours/week for multiple months. And there is a pretty large upfront cost to seeking out and finding a project. I wish it weren’t like this. I wish it were similar to the flexibility that Uber drivers and other gig economy workers have, where they can easily just spend one extra hour working whenever they want.
Currently I lecture at a coding bootcamp for three hours a week and for $80/hr. This is sort of similar to the flexibility I envision, where it’s easy to go from zero hours to three hours a week whenever I want. But I don’t have the option of going past three hours, so it isn’t that flexible. I could perhaps find similar positions. Maybe I should.
I also wonder whether it would make sense to do longer term things periodically. Like maybe for three months a year, do a freelance project working 20 hours/week. Make
20 * 4 * 3 * 100 = $24,000
and then use that $24k throughout the year for things like dinner and cleaning.

I suspect that it’s even more difficult for people in other fields. Like if you are a doctor, you can’t really come into the office on a whim and spend an hour seeing patients.
At best this seems very unfortunate. At worst, very inefficient. I’m not sure how it would be done, but I feel like our society would benefit from more flexible work, and more specialization and trade.
I’m not sure about you, but I am pretty much already maxed out on the amount of programming I can usefully do per day. It is already rather less than my nominal working hours.
I do agree that a lot more flexibility in working arrangements would be a good thing, but it seems difficult to arrange such a society in (let’s say) the presence of misaligned agents and other detriments to beneficial coordination.
Nah, for me I don’t feel anywhere close to maxed out. I feel like I could do 12-14 hours a day, though I do have a ton of mental energy. I wouldn’t expect most people to be like that.
Yeah, I think I agree here. Well, that’s what my initial intuition says. I haven’t thought hard about how it would work, so I can’t be too confident that it’s difficult.
The other day Improve your Vocabulary: Stop saying VERY! popped up in my YouTube video feed. I was annoyed.
This idea that you shouldn’t use the word “very” has always seemed pretentious to me. What value does it add if you say “extremely” or “incredibly” instead? I guess those words have more emphasis and a different connotation, and can be better fits. I think they’re probably a good idea sometimes. But other times people just want to use different words in order to sound smart.
I remember there was a time in elementary school when I was working on a paper with a friend. My job was to write it, and his job was to “fix it up and make it sound good”. I remember him going in and changing words like “very”, that I had used appropriately, to overly dramatic words like “stupendously”. And I remember feeling annoyed at the end result of the paper because it sounded pretentious.
Here I want to argue for something similar to “stop saying very” though. I want to argue for “stop saying think”.
Consider the following: “I think the restaurant is still open past 8pm”. What does that mean? Are you 20% sure? 60%? 90%? Wouldn’t it be useful if this ambiguity disappeared?
I’m not saying that “I think” is always ambiguous and bad. Sometimes it’s relatively clear from the context that you mean 20% sure, not 90%. Eg. “I thhhhhinkkk it’s open past 8pm?” But you’re not always so lucky. I find myself in situations where I’m not so lucky often enough. And so it seems like a good idea in general to move away from “I think” and closer to something more precise.
I want to follow up with some good guidelines for what words/phrases you can say in various situations to express different degrees of confidence, as well as some other relevant things, but I am struggling to come up with such guidelines. Because of this, I’m writing this as a shortform rather than a regular post. I’d love to see someone else run with this idea and/or propose such guidelines.
Communication advice is always pretentious—someone’s trying to say they know more about your ideas and audience than you do. And simultaneously, it’s incorrect for at least some listeners, because they’re wrong—they don’t. Also, correct for many listeners, because many are SO BAD at communication that generalized simple advice can get them to think a little more about it.
At least part of the problem is that there is a benefit to sounding smart. “Very” is low-status, and will reduce the impact of your writing, for many audiences. That’s independent of any connotation or meaning of the word or its replacement.
Likewise with “I think”. In many cases, it’s redundant and unnecessary, but in many others it’s an important acknowledgement, not that it’s your thought or that you might be wrong, but that YOU KNOW you might be wrong.
I think (heh) your planned follow-up is a good idea, to include context and reasoning for recommendations, so we can understand what situations it applies to.
I’ve tried doing this in my writing in the past, in the form of just throwing away “I think” altogether because it’s redundant: there’s no one thinking up these words but me.
Unfortunately this was a bad choice, because many people take bald statements without softening language like “I think” as bids to make claims about how they are or should be perceiving reality. Which, I mean, all statements are, but they’ll jump to viewing them as claims of access to an external truth. (Note that this sounds like they’re making an error by having a world model that supposes external facts that can simply be learned, rather than facts always being conditional on the way they are known. Which is not to say there isn’t perhaps some shared external reality, only that any facts/statements you try to claim about it must be conditional, because they live in your mind behind your perceptions. But this is a subtle enough point that people will miss it, and it’s not the default, naive model of the world most people carry around anyway.)
Example:
I think you’re doing X → you’re doing X
People react to the latter kind of thing as a stronger kind of claim than I would say it’s possible to make.
This doesn’t quite sound like what you want to do, though, and instead want to insert more nuanced words to make it clearer what work “think” is doing.
Yeah. And also a big part of what I’m trying to propose is some sort of new standard. I just realized I didn’t express this in my OP, but I’ll express it now. I agree with the problems you’re describing, and I think that if we all sort of agreed on this new standard, eg. when you say “I suspect” it means X, then these problems seem like they’d go away.
Not answering your main point, but a small note on the “leaving out very” point: I’ve enjoyed McCloskey’s writing on writing. She calls the phenomenon “elegant variation” (I don’t know whether the term is hers alone) and teaches that we have to get rid of this unhelpful practice that we get taught in school.
Thanks! I always upvote McCloskey references—one of the underappreciated writers/thinkers on topics of culture and history.
Virtual watercoolers
As I mentioned in some recent Shortform posts, I recently listened to the Bayesian Conspiracy podcast’s episode on the LessOnline festival and it got me thinking.
One thing I think is cool is that Ben Pace was saying how the valuable thing about these festivals isn’t the presentations, it’s the time spent mingling in between the presentations, and so they decided with LessOnline to just ditch the presentations and make it all about mingling. Which got me thinking about mingling.
It seems plausible to me that such mingling can and should happen more online. And I wonder whether an important thing about mingling in the physical world is that, how do I say this, you’re just in the same physical space, next to each other, with nothing else you’re supposed to be doing, and in fact what you’re supposed to be doing is talking to one another.
Well, I guess you don’t have to be talking to one another. It’s also cool if you just want to hang out and sip on a drink or something. It’s similar to the office water cooler: it’s fine if you’re just hanging out drinking some water, but it’s also normal to chit chat with your coworkers.
I wonder whether it’d be good to design a virtual watercooler: a digital place that mimics aspects of the situations I’ve been describing (festivals, office watercoolers).
1. By being available in the virtual watercooler, it’s implied that you’re pretty available to chit chat, but it’s also cool if you’re just hanging out doing something low key like sipping a drink.
2. You shouldn’t be doing something more substantial though.
3. The virtual watercooler should be organized around a certain theme. It should attract a certain group of people and filter out people who don’t fit in. Just like festivals and office water coolers.
In particular, this feels to me like something that might be worth exploring for LessWrong.
Note: I know that there are various Slack and Discord groups but they don’t meet conditions (1) or (2).
I maybe want to clarify: there will still be presentations at LessOnline, we’re just trying to design the event such that they’re clearly more of a secondary thing.
Sometimes I think to myself something along these lines: “I’d be willing to put in the effort to read this post carefully and write a substantive response, but only if I knew the author and other readers would actually engage with it.”
This presents a sort of coordination problem, and one that would be reasonably easy to solve with some sort of assurance contract-like functionality.
There’s a lot to say about whether or not such a thing is worth pursuing, but in short, it seems like trying it out as an experiment would be pretty high-upside and low-cost to try, in such a way that I’m decently confident would be worthwhile.
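To make “assurance contract-like functionality” concrete, here’s a minimal sketch of what I have in mind; the class name, threshold, and usernames are all made up for illustration.

```python
# A minimal sketch of assurance-contract-like functionality for discussion posts.
class DiscussionAssuranceContract:
    def __init__(self, post_id, min_participants):
        self.post_id = post_id
        self.min_participants = min_participants
        self.pledged = set()

    def pledge(self, user):
        """User commits to read and substantively engage, conditional on the threshold being met."""
        self.pledged.add(user)

    def activated(self):
        """Nobody is on the hook unless enough people have pledged."""
        return len(self.pledged) >= self.min_participants

contract = DiscussionAssuranceContract(post_id="some-post", min_participants=3)
for user in ["alice", "bob", "carol"]:
    contract.pledge(user)
print(contract.activated())  # True: only now does the discussion "fire"
```

The whole point is the threshold: you only commit your effort once enough other people have committed theirs.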
I … don’t think that line of thinking almost ever applies to me. If the topic interests me and/or there’s something about the post that piques my desire to discuss, it almost always turns out that there are others with similar willingness. At the very least, the OP usually engages to some extent.
There are very few, and perhaps zero, cases where crafting or even evaluating an existing contract is less effort than just reading and responding AND I see enough potential to expend the contract effort but not the read/reply effort.
In addition, the contract doesn’t get me out of the effort to read/respond, it just gives reason to believe that others will do so as well. It’s overall strictly more effort than just taking the risk sometimes.
Using examples of people being stupid
I’ve noticed that a lot of cool concepts stem from examples of people being stupid. For example, I recently re-read Detached Lever Fallacy and Beautiful Probability.
Detached Lever Fallacy:
Beautiful Probability:
On the one hand, as a good person who cares about the feelings of others, you don’t want to call them out, make them feel stupid, and embarrass them. On the other hand… what if it’s in the name of intellectual progress?
Intellectual progress seems like it is more than enough to justify it. Under a veil of ignorance, I’d really, really prefer it.
And yet that doesn’t seem to do the trick. I at least still feel awkward using examples from real life in writing and cringe a little when I see others do so.
I think the example with the detached lever is Yudkowsky being overconfident. Come on, it is an alien technology, way beyond our technical capabilities. Why should we assume that the mechanism responsible for dematerializing the ship is not in the lever? Just because humans would not do it that way? Maybe the alien ships are built in a way that makes them easy to configure on purpose. That would actually be the smart way to do it.
Somewhere, in a tribe that has seen an automobile for the first time, a local shaman is probably composing an essay on a Detached CD Player Fallacy.
(just kidding)
Closer to the truth vs further along
Consider a proposition P. It is either true or false. The green line represents us believing with 100% confidence that P is true. On the other hand, the red line represents us believing with 100% confidence that P is false.
We start off not knowing anything about P, so we start off at point 0, right at that black line in the middle. Then, we observe data point A. A points towards P being true, so we move upwards towards the green line a moderate amount, and end up at point 1. After that we observe data point B. B is weak evidence against P. We move slightly further from the green line, but still above the black line, and end up at point 2. So on and so forth, until all of the data relevant to P has been observed, and since we are perfect Bayesians, we end up being 100% confident that P is, in fact true.
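Here’s a tiny simulation of that kind of walk, just to make the picture concrete; the proportions and evidence strengths are invented.

```python
import math
import random

random.seed(0)

def simulate_walk(n_points):
    """Belief in P as a log-odds walk: each data point nudges it up or down."""
    log_odds = 0.0  # point 0: no information about P yet
    trajectory = [log_odds]
    for _ in range(n_points):
        strength = random.uniform(0.2, 1.0)   # how much this data point moves us
        if random.random() < 0.75:            # most data points favor the truth (P is true here)...
            log_odds += strength
        else:                                 # ...but some, like data point B, cut the other way
            log_odds -= strength
        trajectory.append(log_odds)
    return trajectory

walk = simulate_walk(50)
credence = 1 / (1 + math.exp(-walk[-1]))
print(f"credence in P after 50 data points: {credence:.3f}")
```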
Now, compare someone at point 3 to someone at point 4. The person at point 3 is closer to the truth, but the person at point 4 is further along.
This is an interesting phenomenon to me. The idea of being further along, but also further from the truth. I’m not sure exactly where to take this idea, but two thoughts come to mind.
The first thought is of valleys of bad rationality. As we make incremental progress, it doesn’t always make us better off.
The second thought is of how far along I actually am in my beliefs. For example, I am an atheist. But what if I had to debate the smartest theist in the world? Would I win that debate? I think I would, but I’m not actually sure. Perhaps they are further along than me. Perhaps I’m at point 3 and they’re at point 7.
I believe that similar to conservation of expected evidence, there’s a rule of rationality saying that you shouldn’t expect your beliefs to change back and forth too much, because that means there’s a lot of uncertainty about the factual matters, and the uncertainty should bring you closer to max entropy. Can’t remember the specific formula, though.
Good point. I was actually thinking about that and forgot to mention it.
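(I think the formula being half-remembered here is conservation of expected evidence: your current credence already equals the expectation of your post-update credence, so you shouldn’t be able to predict big swings in either direction.)

```latex
P(H) = P(H \mid E)\,P(E) + P(H \mid \lnot E)\,P(\lnot E)
```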
I’m not sure how to articulate this well, but my diagram and OP were mainly targeted at gears-level models. Using the atheism example, the world’s smartest theist might have a gears-level model that is further along than mine. However, I expect that the world’s smartest atheist has a gears-level model that is further along than the world’s smartest theist.
More dakka with festivals
In the rationality community people are currently excited about the LessOnline festival. Furthermore, my impression is that similar festivals are generally quite successful: people enjoy them, have stimulating discussions, form new relationships, are exposed to new and interesting ideas, express that they got a lot out of it, etc.
So then, this feels to me like a situation where More Dakka applies. Organize more festivals!
How? Who? I dunno, but these seem like questions worth discussing.
Some initial thoughts:
Assurance contracts seem like quite the promising tool.
You probably don’t need a hero license to go out and organize a festival.
Trying to organize a festival probably isn’t risky. It doesn’t seem like it’d involve too much time or money.
I don’t think that’s true. I’ve co-organized one weekend-long retreat in a small hostel for ~50 people, and the cost was ~$5k. Me & the co-organizers probably spent ~50h in total on organizing the event, as volunteers.
I was envisioning that you can organize a festival incrementally, investing more time and money into it as you receive more and more validation, and that taking this approach would de-risk it to the point where overall, it’s “not that risky”.
For example, to start off you can email or message a handful of potential attendees. If they aren’t excited by the idea you can stop there, but if they are then you can proceed to start looking into things like cost and logistics. I’m not sure how pragmatic this iterative approach actually is though. What do you think?
Also, it seems to me that you wouldn’t have to actually risk losing any of your own money. I’d imagine that you’d 1) talk to the hostel, agree on a price, have them “hold the spot” for you, 2) get sign ups, 3) pay using the money you get from attendees.
Although now that I think about it I’m realizing that it probably isn’t that simple. For example, the hostel cost ~$5k; maybe the money from the attendees would have covered it all, but maybe fewer attendees signed up than you were expecting and the organizers ended up having to pay out of pocket.
On the other hand, maybe there is funding available for situations like these.
Back then I didn’t try to get the hostel to sign the metaphorical assurance contract with me, maybe that’d work. A good dominant assurance contract website might work as well.
I guess if you go camping together then conferences are pretty scalable, and if I was to organize another event I’d probably try to first message a few people to get a minimal number of attendees together. After all, the spectrum between an extended party and a festival/conference is fluid.
A line of thought that I want to explore: a lot of times when people appear to be close-minded, they aren’t actually being (too) close-minded. This line of thought is very preliminary and unrefined.
It’s related to Aumann’s Agreement Theorem. If you happen to have two perfectly Bayesian agents who are able to share information, then yes, they will end up agreeing. In practice people aren’t 1) perfectly Bayesian or 2) able to share all of their information. I think (2) is a huge problem. A huge reason why it’s hard to convince people of things.
Well, I guess what I’m getting at isn’t really close-mindedness. It’s just… suppose you disagree with someone on something. You list out a bunch of arguments for why the other person is wrong, and why they should adopt your belief. Argument A, B, C, D, E… so on and so forth. It feels like you’ve listed out so many things, and they’re being stubborn in not changing their mind and admitting that you’re right. But actually, given the information they have, they’re often correct in not adopting your belief. Even if they were a perfect Bayesian, your arguments A through E just aren’t nearly enough. You’d need perhaps 100, maybe even 1,000 times more arguments to get a perfectly open-minded and Bayesian agent to start from the point where the other person started and end up agreeing with you.
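As a toy version of that arithmetic (all numbers invented): if the other person starts at 1% credence, you can count how many independent arguments of a given strength a perfect Bayesian would need to reach, say, 95%.

```python
import math

def arguments_needed(prior, target, likelihood_ratio):
    """How many independent arguments, each worth the given likelihood ratio,
    it takes to move a Bayesian from `prior` to `target` credence."""
    prior_odds = prior / (1 - prior)
    target_odds = target / (1 - target)
    return math.ceil(math.log(target_odds / prior_odds, likelihood_ratio))

print(arguments_needed(0.01, 0.95, 2.0))  # fairly strong 2:1 arguments -> 11 of them
print(arguments_needed(0.01, 0.95, 1.1))  # weak 1.1:1 arguments -> 80 of them
```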
Maybe it’s related to the illusion of transparency. There are all of these premises that you are assuming to be true. All of these data points. Subtle life experiences. Stuff like that. All of these things inform your priors. And it’s easy to assume that the other person shares the same data informing their priors. But they don’t. And so providing these data points is part of your job in arguing for your position. But it is often difficult to realize that this is part of your job.
Wait a minute: I think I’m basically trying to say the same thing as Expecting Short Inferential Distances. Sigh. Yeah, I think that’s pretty much it.
This is a pretty good example of something that happens a lot to me on LessWrong. I have some vague idea about something. Then I realize that someone on LessWrong (frequently Eliezer) has a great blog post about it that does a great job of crystalizing it, articulating it, and filling in the gaps for me. Usually it’s a very exciting and satisfying experience. Right now I’m a little a) disappointed in myself for not realizing this to begin with and b) disappointed that I don’t actually have a useful new thought to share. I’m also c) a little frustrated that I am experiencing (b).
Modelling humans as Bayesian agents seems wrong.
For humans, I think the problem usually isn’t the number of arguments / number of angles you attacked the problem from, but whether you have hit on the few significant cruxes of that person. This is especially because humans are quite far away from perfect Bayesians. For relatively small disagreements (i.e. not at the scale of convincing a Christian that God doesn’t exist), usually people just had a few wrong assumptions or cached thoughts. If you can accurately hit those cruxes, then you can convince them. It is very, very hard to know which arguments can hit those cruxes though, and it is why one of the viable strategies is to keep throwing arguments until one of them works.
(Also unlike convincing Bayesian agents where you can argue for W->X, X->Y, Y->Z in any order, sometimes you need to argue about things in the correct order)
Suppose you identify a single crux, A. Now you need to convince them of A. But convincing them of A requires you to convince them of A.1, A.2, and A.3.

Ok, no problem. You get started trying to convince them of A.1. But then you realize that in order to convince them of A.1, you need to first convince them of A.1.1, A.1.2, and A.1.3.

I think this sort of thing is often the case, and is how large inferential distances are “shaped”.
I think it’s generally agreed that pizza and steak (and a bunch of other foods) taste significantly better when they’re hot. But even if you serve it hot, usually about halfway through eating, the food cools enough such that it’s notably worse because it’s not hot enough.
One way to mitigate this is to serve food on a warmed plate. But that doesn’t really do too much.
What makes the most sense to me would be to serve smaller portions in multiple courses. Like instead of a 10" pie, serve two 5" pies. Or instead of a 16oz ribeye, divide it into four 4oz ribeyes and cook and serve each separately.
I guess this is what fancy restaurants already do with their multi-course meals though, with each course being a small amount of food. And I suppose serving more courses and getting them all out at the right time, while they’re hot, is a good deal more difficult logistically. So I guess you need to charge a lot more. Which gets you into fancy restaurant territory.
But then again, lots of expensive, fancy steakhouses will serve a huge 16oz or even 24oz ribeye for $100+. And similarly, even the best pizza places will serve normal-sized pies as opposed to tapas-sized. Seems wrong.
Interesting puzzle. Some random thoughts: I’m not sure how much of the quality difference is “hot” vs “freshly prepared”—time under a heat lamp isn’t necessarily an improvement. The fact that buffet-style dining isn’t more popular is some evidence that most people don’t value this compared to their preferences for individually-prepared food.
Hot Pot and Brazilian Churrascaria are two cuisines that give fresh/hot servings on-demand. Oh, also the better sushi bars (not hot, but very fresh), and Benihana (or other teppanyaki or Mongolian-grill places). I love all of these, but it seems they’re more popular for the cuisine and flavors, and to some extent the spectacle and novelty, and not so much “good normal food, fresher than a standard restaurant”.
I suspect all this is evidence that for most people, for most meals, there’s a threshold of freshness rather than an optimization function. Being “fresh enough”, while staying convenient, affordable, and/or “what I’m in the mood for” is what most places deliver because it’s what most people want. The last bite of steak is warm rather than hot, and the last slice of pizza is getting toward lukewarm, but it’s still good stuff that I’m happy to eat.
Ah, that’s a good distinction. I think that what matters is usually “freshly prepared”.
Oh interesting. I didn’t know that was the case.
Yeah, I think so too. And more generally, people just aren’t very choose-y about their food, much less willing to pay lots of money for it. So I guess that’s probably it.
Also, if there were an inefficiency here, a restaurant trying to exploit it doesn’t have a huge market to profit from. The market would be restricted to the local area. And people only frequent expensive restaurants so often. So yeah, there probably aren’t many, if any, metaphorical dollar bills lying on the ground.
But… I suspect that there are “foodie points” up for grabs. Like, I suspect that serving four 4oz ribeyes hot really is a notably better experience for foodie-types, and a restaurant that pursued this would get respect amongst foodies.
Not directly tied to the core of what you’re saying, but I will note that I am an example of someone who doesn’t strongly prefer such foods warm. I do weakly prefer them warm, as long as they’re not too hot (that’s worse than them being cold, because it hurts / causes minor injury), but I’m happy eating them at room temperature or a bit cold (not necessarily cold steak though).
(I bet you also like your steaks medium-well. Just kidding.)
I’m curious: is this a case of you not having strong preferences about food in general? Or is it the case that you do generally have strong preferences about food, but don’t strongly prefer such foods being warm? (Not that those are the only two options, it’s just easier to phrase it this way.)
Long text messages
I run into something that I find somewhat frustrating. When I write text messages to people, they’re often pretty long. At least relative to the length of other people’s text messages. I’ll write something like 3-5 paragraphs at times. Or more.
I’ve had people point this out as being intimidating and a lot to read. That seems odd to me though. If it were an email, it’d be a very normal-ish length, and wouldn’t feel intimidating, I suspect. If it were a blog post, it’d be quite short. If it were a Twitter thread, it’d be very normal and not intimidating. If it were a handwritten letter, it’d be on the shorter side.
So then, why does the context of it being a text message make it particularly intimidating when the same amount of words isn’t intimidating in other contexts? Possibly because it takes longer to type on a cell phone, but that doesn’t explain the phenomenon when conversations are happening on Signal or WhatsApp (I kinda consider those all to be “text messages” in my head along with SMS). I also run into it on Slack and Discord.
Gmail displays long messages better than e.g. Signal, even on my laptop. And I often do find the same email feels longer when I read it on my phone than my laptop.
Gmail also makes it easy to track long messages I want to delay responses to. Texts feel much more “respond RIGHT NOW or forget about it forever”
Hm. Do you think this is due to readability or norms? I’d say I’m roughly 80% confident it’s norms.
I also suspect that this is due to norms rather than functionality. For example, Gmail (and other mail clients) let you mark things as unread and organize them in folders. However, it seems easy enough to scroll through your text messages (or Signal, or WhatsApp...), see if you were the last person to respond or not, if not whether their last message feels like the end to the conversation.
What do you think?
I think it’s at least partially readability. Signal won’t give a given line more than half my screen, where gmail will go up to 80% (slack and discord are similar). I don’t use the FB messenger app, but the webapp won’t give a line more than half the width of the screen.
I think this is way more work than looking at “what’s still in my inbox?”, and rapidly becomes untenable as the number of messages or delay in responding increases.
Hmm, I have never thought that a message from another person is too long. But I think my messages are sometimes too long. I once wrote a message on Discord that was iirc over 8000 characters long. I think that was a bit too much but for a different reason. It interrupted the flow of the conversation just too much and did not enable enough back and forth.
Words as Bayesian Evidence
Let me ask you a question. How confident are you that Bob is doing good? Not very confident, right? But why not? After all, Bob did say that he is doing good. And he’s not particularly well known for being a liar.
I think the thing here is to view Bob’s words as Bayesian evidence. They are evidence of Bob doing good. But how strong is this evidence? And how do we think about such a question?
Let’s start with how we think about such a question. I think the typical Bayesian approach is pretty practical here. Ask yourself how likely Bob would say “good” when he is doing good. Ask yourself how likely he would say it when he isn’t.
I think most people tend to say “good” if their hedonic state is something like between 10th percentile and 90th. If it’s 5-10th percentile my model says people will usually say something like what Alice said: “not doing so well”. If it’s 0-5th maybe they’ll say “I’m actually really struggling”. And similarly for 90+ percentile. It depends though. But with this model, I think we can take Bob’s claim as some sort of solid evidence that he is, uh, doing fine, and perhaps weak evidence that he is leaning towards actually feeling good. But now looking at Alice, according to my model, it’s actually pretty strong evidence that she is not doing well.
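Here’s that model as a tiny sketch, treating the percentile cutoffs above as exact (they’re obviously invented):

```python
# Toy model: someone's hedonic state is a percentile from 0 to 100, and they answer
# according to which band they fall in. "Good" then carries only weak information.
def posterior_range(answer):
    ranges = {
        "really struggling": (0, 5),
        "not doing so well": (5, 10),
        "good": (10, 90),
        "great": (90, 100),
    }
    return ranges[answer]

def prob_above(answer, threshold):
    """P(hedonic percentile > threshold | answer), assuming a uniform prior over 0-100."""
    lo, hi = posterior_range(answer)
    if threshold >= hi:
        return 0.0
    return (hi - max(lo, threshold)) / (hi - lo)

print(prob_above("good", 50))               # 0.5 -> "good" barely favors actually feeling good
print(prob_above("not doing so well", 50))  # 0.0 -> Alice's words are much stronger evidence
```

So under this model, “good” mostly just rules out the tails, while Alice’s answer pins her down to a narrow, low range.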
Maybe all of this seems obvious to you. If so, good. But why would I write something if it’s so obvious? Idk. I just have been finding myself tempted to interpret words literally instead of thinking about how strong they are as Bayesian evidence, and I think that other rationalists/people do this quite often as well.
PS: This is hinted at quite often in HPMoR. Perhaps other rationalist-fic as well. Ie. an exchange like:
PPS: This is really just a brain dump. I’d love to see someone write this up better than I did here.
I notice I’m confused. I don’t actually know what it would mean (what predictions I’d make or how I’d find out if I were correct about) for Bob to be “doing good”. I don’t think it generally means “instantaneous hedonic state relative to some un-tracked distribution”, I think it generally means “there’s nothing I want to draw your attention to”. And I take as completely obvious that the vast majority of social interactions are more contextual and indirect than overt legible information-sharing.
This combines to make me believe that it’s just an epistemic mistake to take words literally most of the time, at least without a fair bit of prior agreement and contextual sharing about what those words mean in that instance.
I’m agreed that thinking of it as a Bayesian update is often a useful framing. However, the words are a small part of the evidence available to you, and since you’re human, you’ll almost always have to use heuristics and shortcuts rather than actually knowing your priors, the information, or the posterior beliefs.
It sounds like we mostly agree.
Agreed.
Agreed.
I think the big thing I disagree on is that this is always obvious. Thought of in the abstract like this I guess I agree that it is obvious. However, I think that there are times when you are in the moment where it can be hard to not interpret words literally, and that is what inspired me to write this. Although now I am realizing that I failed to make that clear or provide any examples of that. I’d like to provide some good examples now, but it is weirdly difficult to do so.
Agreed. I didn’t mean to imply otherwise, even though I might have.
There’s a concept I want to think more about: gravy.
Turkey without gravy is good. But adding the gravy… that’s like the cherry on top. It takes it from good to great. It’s good without the gravy, but the gravy makes it even better.
An example of gravy from my life is starting a successful startup. It’s something I want to do, but it is gravy. Even if I never succeed at it, I still have a great life. Eg. by default my life is, say, a 7⁄10, but succeeding at a startup would be so awesome it’d make it a 10⁄10. But instead of this happening, my brain pulls a trick: it says “You need to succeed at this. When you do, I’ll allow you to feel normal, a 5⁄10 happiness. But along the way there, I’m going to make you feel 2⁄10.”
Maybe I’m more extreme than average here, but I think that this is a human thing, not a me-thing. It seems to be the norm when people pursue hard goals for them to feel this way. The rule, not the exception.
Squinting
I really, really like this idea. Squint. Blur away the details. What do you see?
I just watched the video React’s becoming a bit weird. It made me think of squinting.
Squint. Blur away the details. Forget about React. Forget about NextJS. Forget about front end web development, or even software development more generally.
Observe that there is one organization that offers a popular product that has been around for a while. Observe that there is another organization that is trying, and succeeding, at becoming large and popular, and that depends on the first organization’s product. Observe that the second organization is allocating lots and lots of resources towards helping the first organization.
How do you expect the first organization to respond? Well, I would expect them to feel sorta dependent on the second organization. I would expect them to cater somewhat heavily to the second organization’s needs. And I would expect both organizations to try to hide the fact that this is happening.
I wish I had more good examples of how to use this skill of squinting. I’d love to see other people write more about it.
Maybe this is an example.
I’m listening to Eric Normand’s reading of Out of the Tar Pit. The paper Out of the Tar Pit kinda feels like it is saying, “complexity is the enemy in software projects, and here is the best way to tame it”.
When I squint, I don’t see software development. I see a field of engineering. A very complicated one. One that has been around for maybe 50 years. And I see someone making a claim about the best way to succeed in the field.
Looking through this lens, I feel a large amount of skepticism.
As a programmer, compared to other programmers, I am extremely uninterested in improving the speed of web apps I work on. I find that (according to my judgement) it rarely has more than a trivial impact on user experience. On the other hand, I am usually way more interested than others are in things like improving code quality.
I wonder if this has to do with me being very philosophically aligned with Bayesianism. Bayesianism preaches updating your beliefs incrementally, whereas the alternative, frequentist-style hypothesis testing, is a lot more binary. For example, the way scientific experiments work, your p-value either passes the (arbitrary) threshold, or it doesn’t, so you either reject the null, or fail to reject the null, a binary outcome.
Perhaps people are frequently uninterested in subjective things like improving code quality or usability because it is hard to get a sort of “statistically significant” amount of evidence to say stuff like “this code quality improvement is having this level of impact”, and so people default to “fail to reject the null”. On the other hand, a more Bayesian way of thinking about it is to just do your best to make a judgement, and shift your beliefs accordingly.
For things like performance optimization, the results are pretty objective. You can run an analysis and see that eg. rendering was sped up by 75ms, and so you can “reject the null” pretty easily and conclude that there is a real, concrete benefit.
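A toy contrast between the two mindsets, with everything (the p-values, the 3x likelihood ratio) made up for illustration:

```python
def null_ritual(p_value, alpha=0.05):
    # The ritual has only two outputs, and it needs a p-value to say anything at all.
    if p_value is None:
        return "no measurement -> act as if no effect"
    return "reject the null" if p_value < alpha else "fail to reject -> act as if no effect"

def bayesian_update(prior_odds, likelihood_ratio):
    # The Bayesian just multiplies odds by however strong the evidence actually is.
    return prior_odds * likelihood_ratio

# Rendering sped up in a clean measurement: easy call under either mindset.
print(null_ritual(p_value=0.001))

# "This refactor improved the codebase": no measurement, just a fuzzy judgement that
# what I'm seeing is maybe 3x more likely if the improvement is real.
print(null_ritual(p_value=None))
print(bayesian_update(prior_odds=1.0, likelihood_ratio=3.0))  # 1:1 odds become 3:1
```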
Speed improvements are legible (measurable), although most people are probably not measuring them. Sometimes that’s okay; if the app is visibly faster, I do not need to know the exact number of milliseconds. But sometimes it’s just a good feeling that I “did some optimization”, ignoring the fact that maybe I just improved some routine that is only called once per day from 500 to 470 milliseconds. (Or maybe I didn’t improve it at all, because the compiler was already doing the thing automatically.)
Code quality is… well, from the perspective of a non-programmer (such as a manager) probably an imaginary thing that costs real money. But here, too, are diminishing returns. Changing spaghetti code to a nice architecture can dramatically reduce future development time. But if a function is thoroughly tested and it is unlikely to be changed in the future (or is likely to be replaced by something else), bringing it to perfection is probably a waste of time. Also, after you fixed the obvious code smell, you move to more controversial decisions. (Is it better to use a highly abstract design pattern, or keep the things simple albeit a little repetitive?)
I’d say, if the customer complains, increase the speed; if the programmers complain, refactor the code. (Though there is an obvious bias here: you are the programmer, and in many companies you won’t even meet the customer.)
I’d wager that customers (or users) won’t complain about slow code, especially if there’s many customers, for the same reason that most people don’t send emails with corrections or typos on most online posts.
Ritualistic hypothesis testing with significance thresholds is mostly used in the social sciences, psychology and medicine, and not so much in the hard sciences (although arbitrary thresholds like 5 sigma are used in physics to claim the discovery of new elementary particles, they rarely show up in physics papers otherwise). Since it requires deliberate effort to get into the mindset of the null ritual, I don’t think that technical and scientific-minded people just start thinking like this.
I think that the simple explanation that the effect of improving code quality is harder to measure and communicate to management is sufficient to explain your observations. To get evidence one way or another, we could also look at what people do when the incentives are changed. I think that few people are more likely to make small performance improvements than improve code quality in personal projects.
I’ve had success with something: meal prepping a bunch of food and freezing it.
I want to write a blog post about it—describing what I’ve done, discussing it, and recommending it as something that will quite likely be worthwhile for others as well—but I don’t think I’m ready. I did one round of prep that lasted three weeks or so and was a huge success for me, but I don’t think that’s quite enough “contact with reality”. I think there’s a risk that, after more “contact with reality”, it proves to be not nearly as useful as it currently seems. So yeah, I think I’m gonna wait at least another month or two and see how it’s going then.
Why do I think it’s working well now though? Previously I’ve tried meal prepping. Y’know, the type of thing where you cook in bulk on Sunday and have meals for the week. One issue though is that I somehow just never end up with enough food. Only a few days worth of food, maybe. Part of it is because it’s hard to genuinely cook that much, but another part of it is not wanting the food to sit in the fridge for too long and go bad. Idk.
Another thing is that the traditional meal prep requires you to cook pretty frequently. Every couple of days. Maybe once a week if you’re good enough at it. But I just have a ton of trouble with this. I cook on Sunday. Wednesday rolls around. I notice I’m getting low on the food and need to prep more. I have stuff going on though, so I postpone till Thursday. More stuff going on. More postponing. It’s Friday. I have something else going on. So I whip together some pasta (unhealthy). Or go out to eat. Something like that ends up happening. But when I can cook for weeks or months at a time, I dunno, somehow it kinda solves that problem.
I also think there are like real time saving benefits. Ie. cooking 20 portions of chicken doesn’t take 2x the time as cooking 10 portions. Maybe it’s like 1.3x the time.
And I get into a nice groove when I’m cooking a ton of food. I know I’ll be in the kitchen for many hours. I put on some podcasts. Idk. I’m more able to work that way.
There’s also something psychologically very nice about knowing that I have weeks and weeks of food in the freezer. In portion-sized containers that I can microwave whenever I want and start eating in minutes.
And, uh, I’m kinda proud of myself for being self-sufficient.
One hangup I previously had was uncertainty about “Can I freeze this ingredient? What about that ingredient?”. I think that’s a big reason why I never tried cooking crazy amounts of food in bulk and freezing it before. But then I realized, “Y’know what, why don’t I just try it? Cook a small portion, freeze it, warm it up, see how it tastes. Google for food safety things. If it works out, try a big portion.” In retrospect it’s pretty silly that I spent so much time hitting the Think About It button instead of the Try It And See What Happens button.
I’ve gotta vent a little about communication norms.
My psychiatrist recommended a new drug. I went to take it last night. The pills are absolutely huge and make me gag. But I noticed that the pills look like they can be “unscrewed” and the powder comes out.
So I asked the following question (via chat in this app we use):
The psychiatrist responded:
The main thing I object to is the language “it seems”. Instead, I think “I can confirm” would be more appropriate.
I think that it is—here and frequently elsewhere—a motte-and-bailey. The bailey being “yes, I confirm that you can do this” and the motte being “I didn’t say it’d definitely be ok, just that it seems like it’d be ok”.
Well, that’s not quite right. I think it’s more subtle than that. If consuming the powder led to issues, I do think the psychiatrist would take responsibility, and be held responsible if there were any sort of legal proceedings, despite the fact that she used the qualifier “it seems”. So I don’t think that she was consciously trying to establish a motte that she can retreat to if challenged. Rather, I think it is more subconscious and habitual.
This seems like a bad epistemic habit though. Or, perhaps I should say, I’m pretty confident that it is a bad epistemic habit. I guess I have some work to do in countering it as well.
Here’s another example. I listen to the Thinking Basketball podcast. I notice that the cohost frequently uses the qualifier “necessarily”. As in, “Myles Turner can’t necessarily create his own shot”. What he means by that is “Myles Turner isn’t very good at creating his own shot”. This too I think is mostly habitual and subconscious, as opposed to being a conscious attempt to establish a motte that he can retreat to.
The way the psychiatrist phrased it made me mentally picture that they weren’t certain, went to review the information on the pill, and came back to relay their findings based on their research, if that helps with possible connotations. The extended implied version would be “I do not know. I am looking it up. The results of my looking it up are that, yes, it may be opened and mixed into food or something like applesauce.”
Your suggested replacement, in contrast, has a light layer of the connotation “I know this, and answer from my own knowledge,” though less so than just stating “It may be opened and mixed into food or something like applesauce.” without the prelude.
From my perspective, the more cautious and guarded language might have been precisely what they meant to say, and has little to do with a fallacy. I am not so confident that you are observing a bad epistemic habit.
Ah, I see. That makes sense and changes my mind about what the psychiatrist probably meant. Thanks.
(Although it raises the new complaint of “I’m asking because I want confirmation, not moderate confidence, and you’re the professional who is supposed to provide that confirmation”, but that’s a separate thing.)
Subtextual politeness
In places like Hacker News and Stack Exchange, there are norms that you should be polite. If you said something impolite and Reddit-like such as “Psh, what a douchebag”, you’d get flagged and disciplined.
But that’s only one form of impoliteness. What about subtextual impoliteness? I think subtextual impoliteness is important too. Similarly important. And I don’t think my views here are unique.
I get why subtextual impoliteness isn’t policed though. Perhaps by definition, it’s often not totally clear what the subtext behind a statement is. So if you try to warn someone about subtextual impoliteness, they can always retreat to the position that you misinterpreted them (and were uncharitable).
One possible way around this would be to have multiple people vote on what the subtext is, but that sounds pretty messy. I expect it’d lead to a bunch of nasty arguments and animosity.
Another possible way around it is to… ask nicely? Like, “I’m not going to police you, but please be aware of the idea of subtext and try to keep your subtext polite.” I don’t see that working though. It’s an obvious enough thing that it doesn’t actually need saying. Plus I get the sense that many communities currently have stuff like this, and they are mostly ignored.
So are we just stuck with no good path forward? Meh, probably. I at least don’t see a good path forward.
At least in situations where you have no leverage. In situations like friendships and certain work relationships, if you find someone to be subtextually impolite, you can be less friendly towards them. I think that leverage is a large part of what pushes people to be subtextually polite in the first place (study on politeness in elevators vs cars).
Can you give a few examples (in-context on HN or Stack Exchange) of subtextual impoliteness that you wish were enforceable? It’s unfortunate but true that the culture/norm of many young-male-dominated technical forums can’t distinguish direct factual statements from aggressive framing.
I generally agree with “no good path forward” as an assessment: the bullies and insecure people who exist everywhere (even if not the majority) are very good at finding loopholes and deniable behaviors in any legible enforcement framework.
“Please be kind” works well in many places, or “you may be right, but that hurt my feelings”. But really, that requires high-trust to start with, and if it’s not already a norm, it’s very difficult to make it one.
Here are two: 1, 2. /r/poker is also littered with it. Example.
I’m failing to easily find examples on Stack Exchange but I definitely know I’ve come across a bunch. Some that I’ve flagged. I tried looking for a way to see a list of comments you’ve flagged, but I wasn’t able to figure it out.
Thanks—yeah, those seem mild enough that I doubt there’s any possible mechanism to eliminate the snarky/rude/annoying parts, at least in a group much larger than Dunbar’s number with no additional social filtering (like in-person requirements for at least some interactions, or non-anonymous invite/expulsion mechanisms).
Life decision that actually worked for me: allowing myself to eat out or order food when I’m hungry and pressed for time.
I don’t think the stress of frantically trying to get dinner together is worth the costs in time or health. And after listening to this podcast episode, I suspect (I’m not sure how to say this) that being overweight is bad, but it’s not that bad, and stressing about it is also bad since stress is bad, all of this in such a way that stressing out over being marginally more overweight is worse for your health than being a little more overweight.
Something I do want to actually do though is to have a bunch of meals that I meal prep, freeze, and can warm up easily in the microwave. I want these meals to be healthy and at least somewhat affordable. And when these meals are actually available, I don’t really endorse myself eating out or ordering food.
At that point I want to save the eating out for places that are really, really good. Not just kinda good. Good enough to wow you. Definitely better than I can make at home. Eating out is pretty expensive and unhealthy. But on the other hand, I do really, really enjoy it and have lots of great places to eat here in Portland.
I think that, for programmers, having good taste in technologies is a pretty important skill. A little impatience is good too, since it can drive you to move away from bad tools and towards good ones.
These points seem like they should generalize to other fields as well.
Rationalist culture needs some traditions like this.
Inverted interruptions
Imagine that Alice is talking to Bob. She says the following, without pausing.
We can think of it like this. Approach #1:
At t=1, Alice says “That house is ugly.”
At t=2, Alice says “You should read Harry Potter.”
At t=3, Alice says “We should get Chinese food.”

Suppose Bob wants to respond to the comment of “That house is ugly.” Due to the lack of pauses, Bob would have to interrupt Alice in order to get that response in. On the other hand, if Alice paused in between each comment, we can consider that Approach #2:

At t=1, Alice says “That house is ugly.”
At t=2, Alice pauses.
At t=3, Alice says “You should read Harry Potter.”
At t=4, Alice pauses.
At t=5, Alice says “We should get Chinese food.”

Then Bob wouldn’t have to interrupt if he wanted to respond.
Let’s call Approach #1 an inverted interruption. It forces the other person to interrupt if they have something to say.
I think inverted interruptions are something to be careful about. Not that they’re always bad, just that they should be kept in mind and considered in order to make communication both fun and effective.
Can you describe a real-world situation where this sort of thing comes up? The artificialness of the example feels hard to engage with to me.
Another example I ran into last night: at around 42:15 in this podcast episode, in one breath, Nate Duncan switches from talking about an NBA player named Fred VanVleet to an NBA player named Dillon Brooks in such a way that it didn’t give his cohost, Danny Leroux a chance to say something about Fred VanVleet.
Certainly! It actually just happened at work. I’m a programmer. We were doing sprint planning, going through tickets. The speaker did something like:
At t=1: some comments on ticket ABC-501
At t=2: some comments on ticket ABC-502
At t=3: some comments on ticket ABC-503

If I wanted to say something about ABC-501, I would have had to interrupt.
Is there anything stopping you from commenting on ticket ABC-501 after the speaker stopped at t=3? “Circling back to ABC-501, I think we need to discuss how we haven’t actually met the user’s....”
That should only be awkward if your comment is superfluous.
I think that sometimes that sort of thing works. But other times it doesn’t. I’m having some trouble thinking about when exactly it does and doesn’t work.
One example of where I think it doesn’t is if the discussion of ABC-501 took 10 minutes, ABC-502 took another 10 minutes, ABC-503 takes another 10 minutes, and then after all of that you come back to ABC-501.
If you have a really important comment about ABC-501 then I agree it won’t be awkward, but if you have like a 4⁄10 importance comment, I feel like it both a) would be awkward and b) passes the threshold of being worth noting.
There’s the issue of having to “hold your comment in your head” as you’re waiting.
There’s the issue of lost context. People might have the context to understand your comment in the moment, but might have lost that context after the discussion of ABC-503 finished.
I think I notice that people use placeholder words like “um” and “uh” in situations where they’d otherwise pause, in order to prevent others from interjecting, because the speaker wants to continue saying what they want to say without being interrupted. I think this is subconscious though. (And not necessarily a bad thing.)
Something that I run into, at least in normie culture, is that writing (really) long replies to comments has a connotation of being contentious, or even hostile (example). But what if you have a lot to say? How can you say it without appearing contentious?
I’m not sure. You could try to signal friendliness by using lots of smiley faces and stuff. Or you could be explicit about it and say stuff like “no hard feelings”.
Something about that feels distasteful to me though. It shouldn’t need to be done.
Also, it sets a tricky precedent. If you start using smiley faces when you are trying to signal friendliness, what happens if next time you avoid the smiley faces? Does that signal contentiousness? Probably.
You can make the long reply its own post (and put a link to the post in a brief reply).
Related: Socratic Grilling.
Capabilities vs alignment outside of AI
In the field of AI we talk about capabilities vs alignment. I think it is relevant outside of the field of AI though.
I’m thinking back to something I read in Cal Newport’s book Digital Minimalism. He talked about how the Amish aren’t actually anti-technology. They are happy to adopt technology. They just want to make sure that the technology actually does more good than harm before they adopt it.
And they have a neat process for this. From what I remember, they first start by researching it. Then they have small groups of people experiment with it for some amount of time. Then larger groups. Something like that.
On the other hand, the impression I get is that we strongly tend to assume that an increase in capabilities is automatically a good thing. For example, if there is some advancement in the field of physics that gives us a better understanding of subatomic particles, the thought process is that this is exciting because down the line that theoretical understanding will lead to cool new technologies that improve our lives. This strikes me as being similar to the planning fallacy though: focusing on the "happy path" where things go the way you want them to go, and failing to think about the scenarios where unexpected, bad things happen. Like next-gen nuclear weapons.
Speaking very generally, to me, it is very frequently not obvious whether capabilities improvements are actually aligned with our values and I’m not particularly excited when I hear about advancements in any given field.
From my perspective, part of the issue is that I notice a type error when the post talks about capabilities improvements being aligned with our values.
The question is: which values, and whose values, are we talking about? Admittedly this is a common issue with morality in general, but in the case of capabilities research it matters, because "aligning it with our values" is too vague to make sense. We need to get deeper and more concrete here, so that we can say specifically which capabilities research should be aligned with which values.
Yeah, I do agree that “values” is ambiguous. However, I think that is ok for the point that I’m making about capabilities vs alignment. Even though people don’t fully agree on values, paying more attention to alignment and being more careful about capabilities advancements still seems wise.
Spreading the seed of ideas
A few of my posts actually seem like they’ve been useful to people. OTOH, a large majority don’t.
I don’t have a very good ability to discern this from the beginning though. Given this situation, it seems worth “spreading the seed” pretty liberally. The chance of it being a useful idea usually outweighs the chance that it mostly just adds noise for people to sift through. Especially given the fact that the LW team encourages low barriers for posting stuff. Doubly especially as shortform posts. Triply especially given that I personally enjoy writing and sometimes benefit from the feedback of others.
Feels a little counterintuitive though. Or maybe just scary. I’m not a shy person when it comes to this sort of stuff but even for me I hesitate and think “Is this worth posting? Is it gonna be terrible and just add noise?”
I’d guess that I’m maybe 95th percentile or something in terms of how not reluctant I am to post (only 5% of people are less reluctant) and I think I am still too reluctant. I can’t think of any examples of people who seem like they should be more reluctant. jefftk comes to mind as someone who is extremely not reluctant, but even for him I’m totally happy with the almost daily posts and would probably appreciate being exposed to even more of his thoughts.
Notice when trust batteries start off low
I think it’s important to note that trust batteries don’t always start off at 50%. In fact, starting at 50% is probably pretty rare.
Consider this example: you begin working at a new company, Widget Corp. Widget Corp says that they treat all of their employees as if they were family. That is a very common thing for companies to claim, and yet very few of them actually mean it or anything close to it.
So then, at least in this context, I don’t think the trust battery starts off at 50%. I think it starts off at something more like 1%. And when trust batteries are low, you have to do more to persuade, just like how strong priors take more evidence to move.
I feel like this isn’t well understood though. I observe a lot of statements similar to “we treat all of our employees like family” without follow-up statements like “we also know that you don’t have reason to believe us, and so here is an attempt to provide more evidence and actually be a little bit convincing”. Some of the time it’s surely because the former statement is some sort of weird simulacra level 3⁄4 type of thing, but a decent chunk of the time I think it’s at level 1⁄2 and there is a genuine failure to recognize that latter follow-up statement is very much needed.
Covid-era restaurant choice hack: Korean BBQ. Why? Think about it...
They have vents above the tables! Cool, huh? I’m not sure how much that does, but my intuition is that it cuts the risk in half at least.
Science as reversed stupidity
Epistemic status: Babbling. I don’t have a good understanding of this, but it seems plausible.
Here is my understanding. Before science was a thing, people would derive ideas by theorizing (or worse, from the bible). It wasn’t very rigorous. They would kinda just believe things willy-nilly (I’m exaggerating).
Then science came along and was like, “No! Slow down! You can’t do that! You need to have sufficient evidence before you can justifiably believe something like that!” But as Eliezer explains, science is too slow. It judges things as pass-fail instead of updating incrementally. It wants to be very sure before it acknowledges something as “backed by science”.
I suspect that this attitude stems from reversing the stupidity that preceded science. And now that I think about it, a lot of ideas seem to stem from reversed stupidity. Perhaps we should be on the lookout for this more, and update our beliefs accordingly in the opposite direction.
I was just listening to the Why Buddhism Is True episode of the Rationally Speaking podcast. They were talking about what the goal of meditation is. The interviewee, Robert Wright, explains:
What an ambitious goal! But let’s suppose that it was achieved. What would be the implications?
Well, there are many. But one that stands out to me as particularly important, as well as ignored, is that it might be a solution to existential risk. Maybe if people were all happy, they'd be inclined to sit back, take a deep breath, stop fighting, take their foot off the gas, and start working towards solutions to existential risks.
There’s been talk recently about there being an influx of new users to LessWrong and a desire to prevent this influx from harming the signal-to-noise ratio on LessWrong too much. I wonder: what if it cost something like $1 to make an account? Or $1/month? Some trivial amount of money that serves as a filter for unserious people.
This doesn’t work worldwide, so probably a nightmare to set up in a way that handles all the edge cases. Also, destitute students and trivial inconveniences.
Why is that? My impression is that eg. 1 USD to make an account would be a trivial amount for people no matter the country or socioeconomic status (perhaps with a few rare exceptions).
I think of this as more of a feature than a bug. There’d be some people it’d filter out who we would otherwise have wanted, but the benefits seem to me like they’d outweigh that cost.
Man I think I am providing value to the world by posting and commenting here. If it cost money I would simply stop posting here, and not post anywhere else.
The value flows in both directions. I’m fine not getting paid but paying is sending a signal of “what you do here isn’t appreciated”.
(Maybe I’d feel different if the money was reimbursed to particularly good posters? But then Goodhart’s law.)
From Childhoods of exceptional people:
I wonder what the implications of this are for AI safety, and EA more generally? How beneficial would it be to invest in making some sort of tutoring ecosystem available to people looking to get into the field, or to advance from where they currently stand?
Nonfiction books should be at the end of the funnel
Books take a long time to read. Maybe 10-20 hours. I think that there are two things that you should almost always do first.
Read a summary. This usually gives you the 80/20 and only takes 5-10 minutes. You can usually find a summary by googling around. Derek Sivers and James Clear come to mind as particularly good resources.
Listen to a podcast or talk. Nowadays, from what I can tell, authors typically go on a sort of podcast tour before releasing a book in order to promote it. I find that this typically serves as a good ~hour-long overview of the important parts of the book. For more prominent authors, sometimes they’ll also give a talk—eg. Talks at Google—after release.
I think it really depends on your reading speed. If you can read at 500 wpm, then it’s probably faster for you to just read the book than search around for a podcast and then listen to said podcast. I do agree, though, that reading a summary or a blog about the topic is often a good replacement for reading an entire book.
I’m having trouble seeing how that’d ever be the case. In my experience searching for a podcast rarely takes more than a few minutes, so let’s ignore that part of the equation.
If a book normally takes 10 hours to read, let’s say you’re a particularly fast reader and can read 5x as fast as the typical person (which I’m skeptical of). That’d mean it still takes 2 hours to read the book. Podcast episodes are usually about an hour. But if you’re able to read 5x faster that probably implies that you’re able to listen to the podcast at at least 2x speed if not 3x, in which case the podcast would only take 0.5 hours to go through, which is 4x faster than it’d take to read the book.
I’ve been in pursuit of a good startup idea lately. I went through a long list I had and deleted everything. None were good enough. Finding a good idea is really hard.
One way that I think about it is that a good idea has to be the intersection of a few things.
For me at least, I want to be able to fail fast. I want to be able to build and test it in a matter of weeks. I don’t want to raise venture funding and spend 18 months testing an idea. This is pretty huge actually. If one idea takes 10 days to build and the other takes 10 weeks, well, the burden of proof for the 10 week one is way higher. You could start seven 10 day ideas in 10 weeks.
I want the demand to be real. It should ideally be a painkiller, not a vitamin. Something people are really itching for, not something that kinda sorta sounds interesting that they think they should consume but aren’t super motivated to actually consume it. And I want to feel that way myself. I want to dogfood it. When I went through my list of ideas and really was honest with myself, there weren’t any ideas that I actually felt that eager to dogfood.
There needs to be a plausible path towards acquiring customers. Word of mouth, virality, SEO, ads, influencer marketing, affiliates, TV commercials, whatever. It’s possible that a product is quick to build and really satisfies a need, but there isn’t a good way to actually get it in front of users. You need a way to get it in front of users.
Of course, the money part needs to make sense. After listening to a bunch of Indie Hackers episodes, I’m really leaning towards businesses that make money via charging people, not via selling ads or whatever. Hopefully charging high prices, and hopefully targeting businesses (with budgets!) instead of consumers. Unfortunately, unlike Jay Z, I’m not a business, so I don’t understand the needs of businesses too well. I’ve always heard people give the advice of targeting businesses, but founders typically don’t understand the needs of businesses well, and I’ve never heard a good resolution to that dilemma.
I need to have the skills to build it. Fortunately at this point I’m a pretty solid programmer so there’s a lot of web app related things I’m able to build.
Hopefully there is a path towards expanding and being a big hit, not just a niche side income sort of thing. Although at this stage of my life I’d probably be ok with the latter.
When you add more and more things to the intersection, it actually gets quite small, quite rapidly.
The solution is likely “talk to people”. That could involve going to trade events or writing cold LinkedIn messages to ask people to eat lunch together.
You might also do something like an internship where you are not paid but, on the other hand, will own the code that you write during that internship.
Something like an internship would be a large investment of time that doesn’t feel like it’s worth the possibility of finding a startup idea.
I guess talking to people makes sense. I was thinking at first that it’d require more context than a lunch meeting, more like a dozen hours, but on second thought you could probably at least get a sense of where the paths worth exploring more deeply are (and aren’t) in a lunch meeting.
Bayesian traction
A few years ago I worked on a startup called Premium Poker Tools as a solo founder. It is a web app where you can run simulations about poker stuff. Poker players use it to study.
It wouldn’t have impressed any investors. Especially early on. Early on I was offering it for free and I only had a handful of users. And it wasn’t even growing quickly. This all is the opposite of what investors want to see. They want users. Growth. Revenue.
Why? Because those things are signs. Indicators. Signal. Traction. They point towards an app being a big hit at some point down the road. But they aren’t the only indicators. They’re just the ones that are easily quantifiable.
What about the fact that I had random people emailing me, thanking me for building it, telling me that it is way better than the other apps and that I should be charging for it? What about the fact that someone messaged me asking how they can donate? What about the fact that Daniel Negreanu—perhaps the biggest household name in poker—was using it in one of his YouTube videos?
Those are indicators as well. We can talk about how strong they are. Maybe they’re not as strong as the traditional metrics. Then again, maybe they’re stronger. Especially something like Negreanu. That’s not what I want to talk about here though. Here I just want to make the point that they count. You’d be justified in using them to update your beliefs.
Still, even if they do count, it may be simpler to ignore them. They might be weak enough, at least on average, such that the effort to incorporate them into your beliefs isn’t worth the expected gain.
This reminds me of the situation with science. Science says that if a study doesn’t get that magical p < 0.05, we throw it in the trash. Why do we do this? Why don’t we just update our beliefs a small amount off of p = 0.40, a moderate amount off of p = 0.15, and a large amount off of p = 0.01? Well, I don’t actually know the answer to that, but I assume that as a social institution, it’s just easier to draw a hard line about what counts and what doesn’t.
Maybe that’s why things work the way they do in startups. Sure, in theory the random emails I got should count as Bayesian evidence and update my beliefs about how much traction I have, but in practice that stuff is usually pretty weak evidence and isn’t worth focusing on.
In fact, it’s possible that the expected value of incorporating it is negative. That you’d expect it to do you more harm than good. To update your beliefs in the wrong direction, on average. How would that be possible? Bias. Maybe founders are hopelessly biased towards interpreting everything through rose-colored glasses and will inevitably arrive at the conclusion that they’re headed to the moon if they are allowed to interpret data like that.
That doesn’t feel right to me though. We shouldn’t just throw our hands in the air and give up. We should acknowledge the bias and update our beliefs accordingly. For example, you may intuitively feel like that positive feedback you got via email this month is a 4/10 in terms of how strong a signal it is, but you also recognize that you’re biased towards thinking it is a strong signal, and so you adjust your belief down from a 4/10 to a 1.5/10. That seems like the proper way to go about it.
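Here’s a toy sketch of what that kind of discounted updating could look like in odds form; every number is invented for illustration:

```python
# Odds-form Bayes: posterior odds = prior odds * product of likelihood ratios.
prior_odds = 0.10 / 0.90        # prior: ~10% chance the app becomes a big hit

# Each piece of "soft traction" gets a likelihood ratio > 1, then gets
# shrunk toward 1 to account for founder bias.
def discounted(likelihood_ratio, shrink=0.5):
    return 1 + (likelihood_ratio - 1) * shrink

evidence = [
    discounted(2.0),  # unsolicited thank-you emails
    discounted(1.5),  # someone asking how to donate
    discounted(3.0),  # Negreanu using it in a video
]

posterior_odds = prior_odds
for lr in evidence:
    posterior_odds *= lr

posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_prob, 2))  # ~0.29, up from 0.10
```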
Collaboration and the early stages of ideas
Imagine the lifecycle of an idea being some sort of spectrum. At the beginning of the spectrum is the birth of the idea. Further to the right, the idea gets refined some. Perhaps 1/4 of the way through, the person who has the idea texts some friends about it. Perhaps midway through it is refined enough that a rough draft is shared with some other friends. Perhaps 3/4 of the way through, a blog post is shared. Then further along, the idea receives more refinement, and maybe a follow up post is made. Perhaps towards the very end, the idea has been vetted and memetically accepted, and someone else ends up writing about it with their own spin and/or explanation.
Or something like that. This is just meant as a rough sketch.
Anyway, I worry that we don’t have a good process for that initial 75% of the spectrum. And furthermore, that those initial stages are quite important.
When I say “we” I’m talking partly about the LessWrong community and partly about society at large.
I have some ideas I’ll hopefully write about and pursue at some point to help with this. Basically, get the right people connected with each other in some awesome group chats.
It sounds to me like in a more normal case it doesn’t begin with texting friends but talking in person with them about the idea. For that to happen you usually need a good in person community.
These days more is happening via Zoom but reaching out to chat online still isn’t as easy as going to a meetup.
Perhaps. I’m not sure.
I wish more people used threads on platforms like Slack and Discord. And I think the reason to use threads is very similar to the reason why one should aim for modularity when writing software.
Here’s an example. I posted a question in the #haskell-beginners Discord channel asking whether it’s advisable for someone learning Haskell to use a linter. I got one reply, but it wasn’t in a thread. It was a normal message in #haskell-beginners. Between the time I asked the question and got a response, there were probably a couple dozen other messages. So then, I had to read and scroll through those to get to the response I was interested in, and to see if there were any other responses.
Each of the messages was part of a different conversation. I think of it as something like this:
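Here’s a rough sketch of the shape of the problem; the other messages and the reply are invented for illustration:

```python
# Without threads: one flat list, several conversations interleaved.
channel = [
    ("me",    "Is it advisable for someone learning Haskell to use a linter?"),
    ("user2", "why is my cabal build failing?"),
    ("user3", "what's the difference between foldl and foldl'?"),
    ("user4", "@user2 can you paste the error?"),
    ("user5", "@me a linter should be fine"),  # the reply I actually wanted
]

# With threads: a tree, one branch per conversation.
threads = {
    "Is it advisable for someone learning Haskell to use a linter?": [
        "a linter should be fine",
    ],
    "why is my cabal build failing?": [
        "can you paste the error?",
    ],
    "what's the difference between foldl and foldl'?": [],
}
```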
There is a linear structure for something that is more naturally structured as a tree.
In writing software, imagine that you have three sub-problems that you need to solve. And imagine if you approached this by doing something like this:
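Here’s a minimal sketch of what I mean, with made-up sub-problems (parsing order lines, validating them, and aggregating them into a report):

```python
# All three sub-problems tangled together in one function.
def process_orders(raw_lines):
    report = {}
    for line in raw_lines:
        parts = line.split(",")       # sub-problem #1: parsing
        if len(parts) != 2:           # sub-problem #2: validation
            continue
        order_id, amount_str = parts  # parsing again
        try:
            amount = float(amount_str)
        except ValueError:            # validation again
            continue
        if amount <= 0:               # still validation
            continue
        # sub-problem #3: aggregation
        report[order_id] = report.get(order_id, 0.0) + amount
    return report
```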
We generally prefer to avoid writing code this way. Instead, we prefer to take a more modular approach and do something like this:
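A sketch of the modular version, using the same made-up sub-problems:

```python
def parse_order(line):
    # Sub-problem #1: parsing. Returns None if the line is malformed.
    parts = line.split(",")
    if len(parts) != 2:
        return None
    order_id, amount_str = parts
    try:
        return order_id, float(amount_str)
    except ValueError:
        return None

def is_valid(order):
    # Sub-problem #2: validation.
    return order is not None and order[1] > 0

def add_to_report(report, order):
    # Sub-problem #3: aggregation.
    order_id, amount = order
    report[order_id] = report.get(order_id, 0.0) + amount

def process_orders(raw_lines):
    report = {}
    for order in map(parse_order, raw_lines):
        if is_valid(order):
            add_to_report(report, order)
    return report
```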
By writing the code in a modular fashion, you can easily focus on the code related to sub-problem #1 and not have to sift through code that is unrelated to sub-problem #1. On the other hand, the more imperative non-modular version makes it difficult to tell what code is related to what sub-problem.
Similarly, using threads on platforms like Slack and Discord makes it easy to see which messages belong to which conversations.
And like software, the importance of this gets larger as the “codebase” becomes more involved and complex. Imagine a Slack channel with lots and lots of conversations happening simultaneously without threads. That is difficult to manage. But if it’s a small channel with only two or three conversations happening simultaneously, that is more manageable.
Threads are pretty good, most help channels should probably be a forum (or 1 forum + 1 channel). Discord threads do have a significant drawback of lowering visibility by a lot, and people don’t like to write things that nobody ever sees.
Meh. If you start a thread under the message “Parent level message” in #the-channel, the UI will indicate that there are “N Messages” in a thread belonging to “Parent level message”. It’s true that those messages aren’t automatically visible to people scrolling through the main channel; they’d have to click to open the thread. But if your audience isn’t motivated to do that, it seems to me like they aren’t worth interacting with in the first place.
I do prefer how Slack treats threads though. They’re lighter and more convenient to use in Slack.
This is super rough and unrefined, but there’s something that I want to think and write about. It’s an epistemic failure mode that I think is quite important. It’s pretty related to Reversed Stupidity is Not Intelligence. It goes something like this.
You think 1. Alice thinks 2. In your head, you think to yourself:
Then you run into other people being like:
I wish I could easily think of good, concrete, real-world examples of this, but I’m failing to right now.
Anyway, I think this failure mode is very common (amongst the general public, yes, but also amongst rationalists), very tempting, and very harmful.
A big reason why I think it’s harmful is because it functions as a sort of conversation halter. Just an intrapersonal one rather than interpersonal. Like, for traditional conversation halters, you’re talking to another person (interpersonal) and they say something that just kinda halts the discussion. But here, I’m trying to point to something that you do in your own inner monologue.
Instead, what I think you should do would be something like steelmanning:
I’d appreciate any conversation and help on this. In whatever form. Examples would be awesome.
I think this is pretty applicable to highly visible blog posts, such as ones that make the home page in popular communities such as LessWrong and Hacker News.
Like, if something makes the front page as one of the top posts, it attracts lots of eyeballs. With lots of eyeballs, you get more prestige and social status for saying something smart. So if a post has lots of attention, I’d expect lots of the smart-things-to-be-said to have been said in the comments.
It’s weird that people tend so strongly to be friends with people so close to their age. If you’re 30, why are you so much more likely to be friends with another 30 year old than, say, a 45 year old?
Lots of people make friends in age-segregated environments, such as school and college.
That’s true, but I don’t think it explains it because I think that outside of age-segregated environments, an eg. 30 year old is still much, much more likely to befriend a 30 year old than a 45 year old.
Part of it is that age gap friendships are often considered kind of weird, too; people of different ages often are at different stages of their careers, etc., and often don’t really think of each other as of roughly equal status. (What would it be like trying to have a friendship with a manager when you aren’t a manager, even if that manager isn’t someone you personally report to?)
Right, that’s the kind of thing I suspect as well. And I like the thought about careers and statuses.
The first explanation that comes to mind is that people usually go through school, wherein they spend all day with people the same age as them (plus adults, who generally don’t socialize with the kids), and this continues through any education they do. Then, at the very least, this means their starting social group is heavily seeded with people their age, and e.g. if friend A introduces friend B to friend C, the skew will propagate even to those one didn’t meet directly from school.
Post-school, you tend to encounter more of a mix of ages, in workplaces, activity groups, meetups, etc. Then your social group might de-skew over time. But it would probably take a long time to completely de-skew, and age 30 is not especially long after school, especially for those who went to grad school.
There might also be effects where people your age are more likely to be similar in terms of inclination and capability to engage in various activities. Physical condition, monetary resources, having a committed full-time job, whether one has a spouse and children—all can make it easier or harder to do things like world-traveling and sports.
I should have been more clear. Sorry.
I feel like there is a specific phenomenon where, outside of age-segregated environments, it’s still the case that a 30 year old is much more likely to befriend another 30 year old than a 45 year old.
Yeah maybe. I’m skeptical though. I think once you’re in your 20s, most of the time you’re not too different from people in their 40s. A lot of people in their 20s have romantic partners, jobs, ability to do physically demanding things.
Personally I suspect moderately strongly that the explanation is about what is and isn’t socially acceptable.
If that is indeed the (main) explanation, it seems weird to me. Why would that norm arise?
I think it is a combination of many things that point in a similar direction:
School is age-segregated, and if you are university-educated, you stay there until you are ~ 25.
Even after school, many people keep the friends they made during school.
A typical 25 year old is looking for a partner, doesn’t have kids, doesn’t have much job experience, can often rely on some kind of support from their parents, and is generally full of energy. A typical 40 year old already has a partner, has kids, has spent over a decade in a full-time job, sometimes supports their parents, and is generally tired. -- These people are generally in different situations, with different needs. In their free time, the 25 year old wants to socialize with potential partners. The 45 year old is at home, helping their kids with homework.
Also, I think generally, people have few friends. Especially after school.
To use myself as an N=1 example, I am in my 40s, and I am perfectly open to the idea of friendship with people in their 20s. But I spend most of my day at work, then I am at home with my kids, or I call my existing friends and meet them. I spend vacations with my kids, somewhere in nature. I simply do not meet 20 year olds most of the time. And when I do, they are usually in groups, talking to each other; I am an introverted person, happy to talk 1:1, but I avoid groups unless I already know some of the people.
Thanks, I liked this and it updated me. I do still think there is a somewhat strong “socially acceptable” element here, but I also think I was underestimating the importance of these lifestyle differences.
I suppose the “socially acceptable” element is a part of why it would feel weird for me to try joining a group of people in their 20s, on the occasions that I meet such group, in contexts where if it was a group of people in their 40s instead, I could simply sit nearby, listen to their debate for a while, and then maybe join at a convenient moment, or hope to be invited to the debate by one of them. Doing this with a group of people a generation younger than me would feel kinda creepy (which is just a different way of saying socially unacceptable). But such situations are rare—in my case, the general social shyness, and the fact that I don’t have hobbies where I could meet many people and interact with them, have a stronger impact. The most likely place for me to meet and talk to younger people are LW/ACX meetups.
For me, one place I’ve noticed it is in my racquetball league. There is a wide mix of ages, but I’ve noticed that the 30-somethings tend to gravitate toward each other and the 50+ players tend to gravitate toward each other.
I think something sorta similar is true about startups/business.
Say you have an idea for a better version of Craigslist called Bobslist. You have various hypotheses about how Craigslist’s UI is bad and can be improved upon. But without lots of postings, no one is going to care. Users care more about products and price than they do about the UI.
This reminds me of the thing with gene A and gene B. Evolution isn’t going to promote gene B if gene A isn’t already prominent.
I think Bobslist’s nicer UI is like gene B. It relies on there being a comparable number and quality of product listings (“gene A”) and won’t be promoted by the market before “gene A” becomes prominent.
I wonder if the Facebook algorithm is a good example of the counterintuitive difficulty of alignment (as a more general concept).
You’re trying to figure out the best posts and comments to prioritize in the feed. So you look at things like upvotes, page views and comment replies. But it turns out that that captures things like how much of a demon thread it is. Who would have thought metrics like upvotes and page views could be so… demonic?
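To make that concrete, here’s a toy version of the kind of scoring function I’m imagining; the metric names and weights are made up:

```python
# Engagement-style ranking: a demon thread scores great on all of these.
def feed_score(post):
    return (
        1.0 * post["upvotes"]
        + 0.5 * post["page_views"] / 100
        + 2.0 * post["comment_replies"]
    )

calm_quality_post = {"upvotes": 40, "page_views": 2000, "comment_replies": 5}
demon_thread = {"upvotes": 55, "page_views": 9000, "comment_replies": 300}

print(feed_score(calm_quality_post))  # 60.0
print(feed_score(demon_thread))       # 700.0 -- the flame war wins the feed
```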
I don’t think this is an alignment-is-hard-because-it’s-mysterious, I think it’s “FB has different goals than me”. FB wants engagement, not enjoyment. I am not aligned with FB, but FB’s algorithm is pretty aligned with its interests.
Oh yeah, that’s a good point. I was thinking about Facebook actually having the goal to promote quality content. I think I remember hearing something about how that was their goal at first, then they got demon stuff, then they realized demon stuff made them the most money and kept doing it. But still, people don’t associate Facebook with having the goal of promoting quality content, so I don’t think it’s a good example of the counterintuitive difficulty of alignment.
Open mic posts
In stand up comedy, performances are where you present your good jokes and open mics are where you experiment.
Sometimes when you post something on a blog (or Twitter, Facebook, a comment, etc.), you intend for it to be more of a performance. It’s material that you have spent time developing, are confident in, etc.
But other times you intend for it to be more of an open mic. It’s not obviously horrible or anything, but it’s certainly experimental. You think it’s plausibly good, but it very well might end up being garbage.
Going further, in stand up comedy, there is a phase that comes before open mics. I guess we can call that phase “ideation”. Where you come up with your ideas. Maybe that’s going for walks. Maybe it’s having drinks with your comic friends. Maybe it’s talking to your grandma. Who knows? But there’s gotta be some period where you’re simply ideating. And I don’t really see anything analogous to that on LessWrong. It seems like something that should exist though. Maybe it exists right here with the Shortform (and Open Thread)? On Twitter? Slack and Discord groups? Talking with friends? Even if it does, I wonder if we could do more.
Stand-up is all about performance, not interaction or collaboration, and certainly not truth-seeking (looking for evidence and models so that you can become less wrong), so it’s an imperfect analogy. But there’s value in the comparison.
I do see occasional “babble” posts, and fairly open questions on LW, that I think qualify as ideation. I suspect (and dearly hope) that most people do also go on walks and have un-recorded lightweight chats with friends as well.
On Stack Overflow you could offer a bounty for a question you ask. You sacrifice some karma in exchange for having your question be more visible to others. Sometimes I wish I could do that on LessWrong.
I’m not sure how it’d work though. Giving the post +N karma? A bounties section? A reward for the top voted comment?
Alignment research backlogs
I was just reading AI alignment researchers don’t (seem to) stack and had the thought that it’d be good to research whether intellectual progress in other fields is “stackable”. That’s the sort of thing that doesn’t take an Einstein level of talent to pursue.
I’m sure other people have similar thoughts: “X seems like something we should do and doesn’t take a crazy amount of talent”.
What if there was a backlog for this?
I’ve heard that, to mitigate procrastination, it’s good to break tasks down further and further until they become bite-sized chunks. It becomes less daunting to get started. Maybe something similar would apply here with this backlog idea. Especially if it is made clear roughly how long it’d take to complete a given task. And how completing that task fits into the larger picture and improves it. Etc. etc.
And here’s another task: researching whether this backlog idea itself has ever been done before, whether it is actually plausible, etc.
Mustachian Grants
I remember previous discussions that went something like this:
But what if those grants were minimal? What if they were only enough to live out a Mustachian lifestyle?
Well, let’s see. A Mustachian lifestyle costs something like $25k/year, iirc. But it’s not just this year’s living expenses that matter. I think a lot of people would turn down the grant and go work for Google instead if it only lasted a few years, because they want to set themselves up financially for the future. So what if the grant was $25k/year indefinitely? That could work, but it also starts to get large enough that people might try to exploit it.
What if there was some sort of house you could live at, commune style? Meals would be provided, there’d be other people there to socialize with, health care would be taken care of, you’d be given a small stipend for miscellaneous spending, etc. I don’t see how bad actors would be able to take advantage of that. They’d be living at the same house so if they were taking advantage of it it’d be obvious enough.
I think that only addresses a branch concern, not the main problem. It filters out some malicious actors, but certainly not all—you still get those who seek the grants IN ADDITION to other sources of revenue.
More importantly, even if you can filter out the bad actors, you likely spend a lot on incompetent actors, who don’t produce enough value/progress to justify the grants, even if they mean well.
I don’t think those previous discussions are still happening very much—EA doesn’t have spare cash, AFAIK. But when it was, it was nearly-identical to a lot of for-profit corporations—capital was cheap, interest rates were extremely low, and the difficulty was in figuring out what marginal investments brought future returns. EA (18 months ago) had a lot of free/cheap capital and no clear models for how to use it in ways that actually improved the future. Lowering the bar for grants likely didn’t convince people that it would actually have benefits.
Meaning that, now that they’re living in the commune, they’ll be more likely to seek more funding for other stuff? Maybe. But you can just keep the barriers as high as they currently are for the other stuff, which would just mean slightly(?) more applicants to filter out at the initial stages.
My model is that the type of person who would be willing to move to a commune and live amongst a bunch of alignment researchers is pretty likely to be highly motivated and slightly less likely to be competent. The combination of those two things makes me think they’d be pretty productive. But even if they weren’t, the bar of eg. $20k/year/person is pretty low.
Thanks for adding some clarity here. I get that impression too but not confidently. Do you know if it’s because a majority of the spare cash was from FTX and that went away when FTX collapsed?
That’s always seemed really weird to me. I see lots of things that can be done. Finding the optimal action or even a 90th+ percentile action might be difficult but finding an action that meets some sort of minimal threshold seems like it’s not a very high bar. And letting the former get in the way of the latter seems like it’s making the perfect the enemy of the good.
Ah, I see—I didn’t fully understand that you meant “require (and observe) the lifestyle” not just “grants big enough to do so, and no bigger”. That makes it quite a bit safer from fraud and double-dipping, and a LOT less likely (IMO) to get anyone particularly effective that’s not already interested.
Asset ceilings for politicians
A long time ago, when I was a sophomore in college, I remember a certain line of thinking I went through:
It is important for politicians to be incentivized properly. Currently they are too susceptible to bribery (hard, soft, in between) and other things.
It is difficult to actually prevent bribes. For example, they may come in the form of “Pass the laws I want passed and instead of handing you a lump sum of money, I’ll give you a job that pays $5M/year for the next 30 years after your term is up.”
Since preventing bribes is difficult, you could instead just say that if you’re going to be a politician—a public servant—you have to live a low income lifestyle from here on out. You and your family. Say, 25th percentile income level. Or Minimally Viable Standard of Living if you want to get more aggressive. The money will be provided to you by the government and you’re not allowed to earn your own money elsewhere. If you start driving lamborghinis, we’ll know.
The downside I see is that it might push away talent. But would it? Normally if you pay a lower salary you get less talented employees, but for roles like President of the United States of America, or even Senator from Idaho, I think the opportunity for impact would be large enough to attract very talented people, and any losses in talent would be made up for by reduced susceptibility to bribery.
I often cringe at the ideas of my past selves but this one still seems interesting.
I haven’t seen any good reasoning or evidence that allowing businesses and titans to bribe politicians via lobbyists actually results in worse laws. People gasp when I say this, but the default doesn’t seem that much better. If Peter Thiel had been able to encourage Trump to pick the cabinet heads he wanted then our COVID response would have gone much better.
To me the reasoning seems pretty straightforward:
If the politician is trying to appease special interests, that will usually come at the expense of society. I guess that is arguable though.
If (1), that would be “worse” because goodness depends on how the policies affect society as a whole.
Most billionaires at least seem to donate ideologically, not really based on how politicians affect their special interest group. There’s definitely a correlation there, but if billionaires are just more reasonable on average then it’s possible that their influence is net-positive overall.
Is there anyone for whom this is NOT important? Why not an asset ceiling on every human?
The problem is in implementation. Leaving aside all the twiddly details of how to measure and enforce, there’s no “outside authority” which can impose this. You have to convince the populace to impose it. And if they are willing to do that, it’s not necessary to have a rule, it’s already in effect by popular action.
It generally is quite important. However, (powerful) politicians are a special case because 1) they have more influence on society and 2) I presume people would still be motivated to take the position even with the asset ceiling. Contrasting this with my job as a programmer, 1) it’d be good if my incentives were more aligned with the company I work for but it wouldn’t actually impact society very much and 2) almost no one would take my job if it meant a lower standard of living.
Wouldn’t the standard law enforcement people enforce it, just like how if a president committed murder they wouldn’t get away with it? Also, it’s definitely tricky but there is a precedent for those in power to do what’s in the interest of the future of society rather than what would bring them the most power. I’m thinking of George Washington stepping away after two terms and setting that two term precedent.
I don’t believe either of these is true, when comparing against (powerful) non-politician very-rich-people.
I didn’t mean the end-enforcement (though that’s a problem too—standard law enforcement personnel can detect and prove murder, and they have SOME ability to detect and prove income, but they have very little ability to understand asset ownership and valuation in a world where there’s significant motive to be indirect about it). I meant “who will implement it”: if voters today don’t particularly care, why will anyone define and push for the legislation that creates the limit?
Hm, maybe. Let me try thinking of some examples:
CEOs: Yeah, pretty big influence, and I think smart people would do it for free. Although if you made a rule that CEOs of sufficiently large companies had to have asset ceilings, I think there’d be a decent amount fewer entrepreneurs, which feels like it’d be enough to make it a bad idea.
Hedge fund managers: From what I understand they don’t really have much influence on society in their role. I think some smart people would still take the job with an asset ceiling but they very well might not be smart enough; I know how competitive and technical that world is. And similar to CEOs, I don’t think there’d be many if any hedge funds that got started if they knew their traders would have to have asset ceilings.
Movie stars: Not much influence on society, but people would take the role for the fame it’d provide of course.
After trying to think of examples I’m not seeing any that fit. Do you have any in mind?
There might be things that are hard to prevent from slipping through the cracks, but the big things seem easy enough to detect: houses, cars, yachts, hotels, vacations. I guess they’d probably have to give up some rights to privacy too to make enforcement practical. Given how much they’re already giving up with the asset ceiling, the additional sacrifice of some amount of privacy doesn’t seem like it changes anything too much.
I’m not optimistic about it, but to me it seems at least ballpark plausible. I don’t understand this stuff too much, but to me it seems like voters aren’t the problem. Voters right now, across party lines, distrust politicians and “the system”. I would assume the problem is other politicians. You’d have to get their support but it negatively affects them so they don’t support it.
Maybe there are creative ways to sidestep this though.
Make the asset ceilings start in 10 years instead of today? Maybe it’d be blatantly obvious that this is just the current politicians not wanting to eat their own gross dogfood? Would that matter?
Maybe you could start by gathering enough public support for the idea to force the hands of the politicians?
Goodhart’s Law seems like a pretty promising analogy for communicating the difficulties of alignment to the general public, particularly those who are in fields like business or politics. They’re already familiar with the difficulty and pain associated with trying to get their organization to do X.
When better is apples to oranges
I remember talking to a product designer before. I brought up the idea of me looking for ways to do things more quickly that might be worse for the user. Their response was something along the lines of “I mean, as a designer I’m always going to advocate for whatever is best for the user.”
I think that “apples-to-oranges” is a good analogy for what is wrong about that. Here’s what I mean.
Suppose there is a form and the design is to have inline validation (nice error messages next to the input fields). And suppose that “global” validation would be simpler (an error message in one place saying “here’s what you did wrong”). Inline is better for users, but comparing it straight up to global would be an apples-to-oranges comparison.
Why? Because the inline version takes longer. Suppose the inline version takes two days and the global version takes one.
Here’s where the apples-to-oranges analogy comes in: you can’t compare something that takes one day to something that takes two days purely on the grounds of user experience. That is apples-to-oranges. For it to be apples-to-apples, you’d have to compare a) inline validation to b) global validation + whatever else that can be done in the second day. In other words, (b) has to include the right-hand side of the plus sign. Without the right-hand side, it is apples-to-oranges.
I was just watching this YouTube video on portable air conditioners. The person is explaining how air conditioners work, and it’s pretty hard to follow.
I’m confident that a very large majority of the target audience would also find it hard to follow. And I’m also confident that this would be extremely easy to discover with some low-fi usability testing. Before releasing the video, just spend maybe 20 mins and have a random person watch the video, and er, watch them watch it. Ask them to think out loud, narrating their thought process. Stuff like that.
Moreover, I think that this sort of stuff happens all the time, in many different areas. As another example, I was at a train stop the other day and found the signs confusing. It wasn’t clear which side of the tracks were going north and which side were going south. And just like the YouTube video, I think that most/many people would also find it confusing, this would be easy to discover with usability testing, and at least in this case, there’s probably some sort of easy solution.
So, yeah: this is my cry into an empty void for the world to incorporate low-fi usability testing into anything and everything. Who knows, maybe someone will hear me.
To the degree that people do things only to signal, I don’t expect your idea to take off.
I think that people should write with more emotion. A lot more emotion!
Emotion is bayesian evidence. It communicates things.
It frustrates me that people don’t write with more emotion. Why don’t they? Are they uncomfortable being vulnerable? Maybe that’s part of it. I think the bigger part is just that it is uncomfortable to deviate from social norms. That’s different from the discomfort from vulnerability. If everyone else is trying to be professional and collected and stuff and write more dispassionately, and you are out there getting all excited and angry and intrigued and self-conscious, you’ll know that you are going against the grain.
But all of those emotions are useful. Again, they communicate things. Sure, it is something that can be taken overboard. There is such a thing as expressing too much emotion. But I think we have a ways to go before we get there.
(Am I expressing enough emotion here? Am I being a hypocrite? I’m not sure. I’m probably not doing a great job at expressing emotions here. Which makes me realize that it’s probably, to a non-trivial extent, a skill that needs to be practiced. You have to introspect, think about what emotions you are feeling, and think about which of them would be useful to express.)
I wonder whether it would be good to think about blog posts as open journaling.
When you write in a journal, you are writing for yourself and don’t expect anyone else to read it. I guess you can call that “closed journaling”. In which case “open journaling” would mean that you expect others to read it, and you at least loosely are trying to cater to them.
Well, there are pros and cons to look at here. The main con of treating blog posts as open journaling is that the quality will be lower than a more traditional blog post that is more refined. On the other hand, a big pro is that, relatedly, a wider and more diverse range of posts will get published. We’re loosening the filter.
It also may encourage an environment of more collaboration, and people thinking things through together. If someone posts something they spent a lot of time on, and I notice something that seems off, I’d probably lean towards assuming that I just didn’t understand it, and that it is in fact correct. I’d also lean towards assuming that it wouldn’t be the best idea to take up people’s time by posting a comment about my feeling that something is off. On the other hand, if I know that a post is more exploratory, I’d lean less strongly towards those assumptions and be more willing to jump in and discuss things.
It seems that there is agreement here on LessWrong that there is a place for this more exploratory style of posting. Not every post needs to be super refined. For less refined posts, there is the shortform, open thread and personal blog posts. So it’s not that I’m proposing anything new here. It’s just that “open journaling” seems like a cool way to conceptualize this. The idea occurred to me while I was on the train this morning and thinking about it as an “open journal” just inspired me to write up a few ideas that have been swimming around in my head.
Inconsistency as the lesser evil
It bothers me how inconsistent I am. For example, consider covid-risk. I’ve eaten indoors before. Yet I’ll say I only want to get coffee outside, not inside. Is that inconsistent? Probably. Is it the right choice? Let’s say it is, for argument’s sake. Does the fact that it is inconsistent matter? Hell no!
Well, it matters to the extent that it is a red flag. It should prompt some sort of alarm to go off in your head that you are doing something wrong. But the proper response to those alarms is to use them as an opportunity to learn and grow and do better in the future. Not to continue making bad choices moving forward out of fear that inconsistency is unvirtuous. Yet this fear is a strong one, and at least for me, it often wins.
Inconsistency is a pointer to incorrectness, but I don’t think that example is inconsistent. There’s a reference class problem involved—eating a meal and getting a coffee, at different times, with different considerations of convenience, social norms, and personal state of mind, are just not the same decision.
I hear ya. In my situation I think that when you incorporate all of that and look at the resulting payoffs and probabilities, it does end up being inconsistent. I agree that it depends on the situation though.
The other day I was walking to pick up some lunch instead of having it delivered. I also had the opportunity to freelance for $100/hr (not always available to me), but I still chose to walk and save myself the delivery fee.
I make similarly irrational decisions about money all the time. There are situations where I feel like other mundane tasks should be outsourced. Eg. I should trade my money for time, and then use that time to make even more money. But I can’t bring myself to do it.
Perhaps food is a good example. It often takes me 1-2 hours to “do” dinner. Suppose ordering something saves me $10 relative to what I’d otherwise spend at home. I think my time is worth more than $5-10/hr, and yet I don’t order food.
One possible explanation is that I rarely have the opportunity to make extra money with extra free time, eg. by freelancing. But I could work on startups in that free time. That doesn’t guarantee me more money, but in terms of expected value, I think it’s pretty high. Is there a reason why this type of thinking might be wrong? Variance? I could adjust the utilities based off of some temporal discounting and diminishing marginal utility or whatever, but even after that the EV seems wildly higher than the $5-10/hr I’m saving by cooking.
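Putting rough numbers on it (all made up, roughly matching the figures above):

```python
hours_cooking = 1.5   # time it takes me to "do" dinner at home
dollars_saved = 10    # savings vs. ordering in

implied_rate = dollars_saved / hours_cooking
print(round(implied_rate, 2))         # 6.67 -- I'm "paying myself" under $7/hr

freelance_rate = 100  # what an hour can sometimes earn instead
print(freelance_rate / implied_rate)  # 15.0 -- the gap I'm puzzling over
```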
Here’s the other thing: I’m not alone. In fact, I observe that tons and tons of people are in a similar position as me, where they could be trading money for time very profitably but choose not to, especially here on LessWrong.
I wonder whether there is something I am missing. I wonder what is going on here.
I suspect there are multiple things going on. First and foremost, the vast majority of uses of time have non-monetary costs and benefits, in terms of enjoyment, human interaction, skill-building, and even less-legible things than those. After some amount of satisficing, money is no longer a good common measurement for non-comparable things you could do to earn or spend it.
Secondly, most of our habits on the topic are developed in a situation where hourly work is not infinitely available at attractive rates. The marginal hour of work, for most of us, most of the time, is not the same as our average hour of work. In the case where you have freelance work available that you could get $1.67/minute for any amount of time you choose, and you can do equally-good (or at least equally-valuable) work regardless of state of mind, your instincts are probably wrong—you should work rather than any non-personally-valuable chores that you can hire out for less than this.
One thing strikes me: you appear to be supposing that apart from how much money is involved, every possible activity per hour is equally valuable to you in itself. This is not required by rationality unless you have a utility function that depends only upon money and a productivity curve that is absolutely flat.
Maybe money isn’t everything to you? That’s rationally allowed. Maybe you actually needed a break from work to clear your head for the rest of the afternoon or whatever? That’s rationally allowed too. It’s even allowed for you to not want to do that freelancing job instead of going for a walk at that time, though in that case you might consider the future utility of the net $90 in getting other things that you might want.
Regarding food, do you dislike cooking for yourself more than doing more work for somebody else? Do you actually dislike cooking at all? Do you value deciding what goes into your body and how it is prepared? How much of your hourly “worth” is compensation for having to give up control of what you do during that time? How much is based on the mental or physical “effort” you need to put into it, which may be limited? How much is not wanting to sell your time much more cheaply than they’re willing to pay?
Rationality does not forbid that any of these should be factors in your decisions.
On the startup example, my experience and those of everyone else I’ve talked to who have done it successfully is that leading a startup is hell, even if it’s just a small scale local business. You can’t do it part time or even ordinary full time, or it will very likely fail and make you less than nothing. If you’re thinking “I could spend some of my extra hours per week on it”, stop thinking it because that way lies a complete waste of time and money.
No, I am not supposing that. Let me clarify. Consider the example of me walking to pick up food instead of ordering it. Suppose it takes a half hour and I could have spent that half hour making $50 instead. The way I phrased it:
Option #1: Spend $5 to save myself the walk and spend that time freelancing to earn $50, netting me $45.
Option #2: Walk to pick up the food, not spending or earning anything.
The problem with that phrasing is that dollars aren’t what matter, utility is, as you allude to. My point is that it still seems like people often make very bad decisions. In this example, I don’t think the joy of the walk (versus freelancing), plus any productivity gains, is worth the $45.
I do agree that this doesn’t last forever though. At some point you get so exhausted from working where the walk has big productivity benefits, the work would be very unpleasant, and the walk would be a very pleasant change of pace.
Tangential, but Paul Graham wouldn’t call that a startup.
I disagree here. 1) I know of real life counterexamples. I’m thinking of people I met at an Indie Hackers meetup I used to organize. 2) It doesn’t match my model of how things work.
Agreed if we assume this premise is true, but I don’t think it is often true.
The original question is based on the observation that a lot of people, including me, including rationalists, do things like spending an hour of time to save $5-10 when their time is presumably worth a lot more than that, and in contexts where burnout or dips in productivity wouldn’t explain it. So my question is whether or not this is something that makes sense.
I feel moderately strongly that it doesn’t actually make sense, and that what Eliezer alludes to in Money: The Unit of Caring is what explains the phenomenon.
Betting is something that I’d like to do more of. As the LessWrong tag explains, it’s a useful tool to improve your epistemics.
But finding people to bet with is hard. If I’m willing to bet on X with Y odds and I find someone else eager to, it’s probably because they know more than me and I am wrong. So I update my belief and then we can’t bet.
But in some situations it works out with a friend, where there is mutual knowledge that we’re not being unfair to one another, and just genuinely disagree, and we can make a bet. I wonder how I can do this more often. And I wonder if some sort of platform could be built to enable this to happen in a more widespread manner.
Idea: Athletic jerseys, but for intellectual figures. Eg. “Francis Bacon” on the back, “Science” on the front.
I’ve always heard of the veil of ignorance being discussed in a… social(?) context: “How would you act if you didn’t know what person you would be?”. A farmer in China? Stock trader in New York? But I’ve never heard it discussed in a temporal context: “How would you act if you didn’t know what era you would live in?” 2021? 2025? 2125? 3125?
This “temporal veil of ignorance” feels like a useful concept.
I just came across an analogy that seems applicable for AI safety.
AGI is like a super powerful sports car that only has an accelerator, no brake pedal. Such a car is cool, and you’d be impressed by it.
But you wouldn’t just hop in the car and go somewhere. Sure, it’s possible that you make it to your destination, but it’s pretty unlikely, and it certainly isn’t worth the risk.
In this analogy, the solution to the alignment problem is the brake pedal, and we really need to find it.
(I’m not as confident in the following, plus it seems to fit as a standalone comment rather than on the OP.)
Why do we really need to find it? Because we live in a world where people are seduced by the power of the sports car. They are in a competition to get to their destinations as fast as possible and are willing to be reckless in order to get there.
Well, that’s the conflict theory perspective. The mistake theory perspective is that people simply think they’ll be fine driving the car without the brakes.
That sounds crazy. And it is crazy! But think about it this way. (The analogy starts to break down a bit here.) These people are used to driving wayyyy less powerful cars. Sometimes these cars don’t have brakes at all, other times they have mediocre brake systems. Regardless, it’s not that dangerous. These people understand that the sports car is in a different category and is more dangerous, but they don’t have a good handle on just how much more dangerous it is, and how totally insane it is to try to drive a car like that without brakes.
We can also extend the analogy in a different direction (although the analogy breaks down when pushed in this direction as well). Imagine that you develop brakes for this super powerful sports car. Awesome! What do you do next? You test them. In as many ways as you can.
However, with AI, we can’t actually do this. We only have one shot. We just have to install the brakes, hit the road, and hope they work. (Hm, maybe the analogy does work. Iirc, super powerful racing cars are built to be driven only once or a few times. There’s a trade-off between performance and how long the car lasts, and for races they go all the way toward the performance end of the spectrum.)
Alice, Bob, and Steve Jobs
In my writing, I usually use the Alice and Bob naming scheme. Alice, Bob, Carol, Dave, Erin, etc. Why? The same reason Steve Jobs wore the same outfit every day: decision fatigue. I could spend the time thinking of names other than Alice and Bob. It wouldn’t be hard. But it’s just nice to not have to think about it. It seems like it shouldn’t matter, but I find it really convenient.
Epistemic status: Rambly. Perhaps incoherent. That’s why this is a shortform post. I’m not really sure how to explain this well. I also sense that this is a topic that is studied by academics and might be a thing already.
I was just listening to Ben Taylor’s recent podcast on the top 75 NBA players of all time, and a thought I’ve long wanted to develop started to crystallize for me. For people who don’t know him (everyone reading this?), his epistemics are quite good. If you want to see good epistemics applied to basketball, read his series of posts on The 40 Best Careers in NBA History.
Anyway, at the beginning of the podcast, Taylor started to talk about something that was bugging him. Previously, on the 50th anniversary of the league in 1996, a bunch of people voted on a list of the top 50 players in NBA history. Now it is the 75th anniversary of the league, so a different set of people voted on the top 75 players in NBA history. The new list basically took the old list of 50 and added 25 new players. But Taylor was saying it probably shouldn’t be like this. One reason is that our understanding of the game of basketball has evolved since 1996, so the list of who we thought were the top 50 then probably had some flaws. Also, it’s not like the voting body in 1996 was particularly smart. As Taylor nicely puts it, they weren’t a bunch of “basketball PhDs (if that were a thing)”; they were random journalists, players, and coaches, people who aren’t necessarily well qualified to be voting on this. For example, they placed a ton of value on how many points you scored, but not nearly enough value on how efficiently you scored those points.
Later in the podcast they were analyzing various players, and the guest he had on, Cody, mentioned that one player was voted to a lot of all star games. But Taylor said that while this is true, he doesn’t really trust the people who voted on all star games back in the 1960s or whenever it was (not that people are good at voting on all star games now). This got me thinking about something. Does it make sense to look at awards like all star games, MVP voting, and all NBA team voting (basically the top 15 players in the league)? Well, by doing so, you are incorporating the opinion of various other experts. But I see two problems here.
How smart are those experts? Sometimes the expert opinion is actually quite flawed, and Taylor makes a good point that this is the case here.
In looking at the opinion of those experts, I think that you are committing one of those crimes that can send you to rationalist prison. I think that you are double counting the evidence! Here’s what I mean. I think that for these expert opinions, the experts rely a lot on what the other experts think. For example, in the podcast they were talking about Bob Cousy vs Bill Sharman. Cousy is considered a legend, whereas Sharman is a guy who was very good, but never became a household name. But Taylor was saying how he thinks Sharman might have actually been better than Cousy. But he just couldn’t bring himself to actually place Sharman over Cousy in his list. I think part of that is because it is hard to deviate from majority opinion that much. So I think that is an example where you base your opinion on what others think. Not 100%, but some percentage.
But isn’t that double counting? As a simplification, imagine that Alice arrives at her opinion without the influence of others, and then Bob’s opinion is 50% based on what Alice thinks and 50% based on what his gears-level models output. That seems to me like it should count as 1.5 data points, not 2. I think this becomes more apparent as you add more people. Imagine that Carol, Dave and Erin all do the same thing as Bob. Ie. each of them bases 50% of their opinion on what Alice thinks. Should that count as 5 data points or 3? What if all of them based it 99% on what Alice thinks? Should that count as 5 data points or 1.04? You could argue perhaps that 1.04 is too low, but arguing that it is 5 really seems too high. To make the point even more clear, what if there were 99 people who were 99% basing their opinion off of Alice? Would you say, “well, 100 people all believe X, so it’s probably true”? No! There’s only one person who believes X and 99 people who trust her.
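To spell out the arithmetic behind those numbers, here’s a toy sketch. The “effective data points” formula (one full point for Alice plus the independent fraction of each follower’s opinion) is my own simplification for illustration, not something from the podcast.

```python
# Toy model: if each of n followers bases a fraction w of their opinion on
# Alice, only the remaining (1 - w) of each opinion is independent evidence.

def effective_data_points(n_followers, weight_on_alice):
    return 1 + n_followers * (1 - weight_on_alice)

print(effective_data_points(1, 0.50))   # Alice + Bob at 50%          -> 1.5
print(effective_data_points(4, 0.50))   # Alice + 4 followers at 50%  -> 3.0
print(effective_data_points(4, 0.99))   # Alice + 4 followers at 99%  -> ~1.04
print(effective_data_points(99, 0.99))  # Alice + 99 followers at 99% -> ~2, nowhere near 100
```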
This feels to me like it is actually a pretty important point. In looking at what consensus opinion is, or what the crowd thinks, once you filter out the double counting, it becomes a good deal less strong.
On the other hand, there are other things to think about. For example, if the consensus believes X and you can present good evidence that X is false and that Y is actually the case, then there is prestige to be gained. And if no one has come around and said “Hey! I have evidence against X; in fact Y is true!”, well, absence of evidence is evidence of absence. In worlds where Y is true, given the incentive of prestige, we would expect someone to come around and say it. This depends on the community though. Maybe it’s too hard to present evidence. For example, in basketball it’s hard to measure the impact of defense. Or maybe the community just isn’t smart enough or set up properly to provide the prestige. Eg. if I had a brilliant idea about basketball, I’m not really sure where I could go to present it and receive prestige.
Edit:
Well, I guess the fact that so many people trust her means that we should place more weight on her opinion. But saying “I believe X because someone who I have a lot of trust in believes X” is different from saying “I believe X because all 100 people who thought about this also believe X”.
I wonder if it would be a good idea to groom people from an early age to do AI research. I suspect that it would. Ie. identify who the promising children are, and then invest a lot of resources towards grooming them: tutors, therapists, personal trainers, chefs, nutritionists, etc.
Iirc, there was a story from Peak: Secrets from the New Science of Expertise about some parents who wanted to prove that women can succeed in chess and raised three daughters doing something sorta similar, but to a smaller extent. I think the larger point being made was that if you really groom someone like this, they can achieve incredible things. I also recall hearing about how the difference in productivity between researchers is tremendous. It’s not like one person produces 80 points of value, someone else 75, and someone else 90; it’s many orders of magnitude of difference. Even at the top. If so, maybe we should take shots at grooming more of these top tier researchers.
I suspect that the term “cognitive” is often over/misused.
Let me explain what my understanding of the term is. I think of it as “a disagreement with behaviorism”. If you think about how psychology progressed as a field, first there was Freudian stuff that wasn’t very scientific. Then behaviorism emerged as a response to that, saying “Hey, you have to actually measure stuff and do things scientifically!” But behaviorists didn’t think you could measure what goes on inside someone’s head. All you could do is measure what the stimulus is and then how the human responded. Then cognitive people came along and said, “Er, actually, we have some creative ways of measuring what’s going on in there.” So, the term “cognitive”, to me at least, refers very broadly to that stuff that goes on inside someone’s head.
Now think about a phrase like “cognitive bias”. Does “cognitive” seem appropriate? To me it seems way too broad. Something like “epistemic bias” seems more appropriate.
The long-standing meaning of “cognitive”, for hundreds of years before cognitive psychologists, was “having to do with knowledge, thinking, and perception”. A cognitive bias is a bias that affects your knowledge, thinking, and/or perception.
Epistemic bias is a fine term for those cognitive biases that are specifically biases of beliefs. Not all cognitive biases are of that form though, even when they might fairly consistently lead to certain types of biases in beliefs.
Hm, can you think of any examples of cognitive biases that aren’t about beliefs? You mention that the term “cognitive” also has to do with perception. When I hear “perception” I think sight, sound, etc. But biases in things like sight and sound feel to me like they would be called illusions, not biases.
The first one to come to mind was Recency Bias, but maybe I’m just paying that one more attention because it came up recently.
Having noticed that bias in myself, I consulted an external source https://en.wikipedia.org/wiki/List_of_cognitive_biases and checked that rather a lot of them are about preferences, perceptions, reactions, attitudes, attention, and lots of other things that aren’t beliefs.
They do often misinform beliefs, but many of the biases themselves seem to be prior to belief formation or evaluation.
Ah, those examples have made the distinction between biases that misinform beliefs and biases of beliefs clear. Thanks!
As someone who seems to understand the term better than I do, I’m curious whether you share my impression that the term “cognitive” is often misused. As you say, it refers to a pretty broad set of things, and I feel like people use the term “cognitive” when they’re actually trying to point to a much narrower set of things.
Everyone hates spam calls. What if a politician campaigned to address little annoyances like this? Seems like it could be a low hanging fruit.
Depends on what you mean by “low-hanging fruit”. I think there are lots of problems like this that are net-negative, but addressing them doesn’t seem anywhere close to the most important things I would recommend politicians do.
By low-hanging fruit I mean 1) non-trivial boost in electability and 2) good effort-to-reward ratio relative to other things a politician can focus on.
I agree that there are other things that would be more impactful, but perhaps there is room to do those more impactful things along with smaller, less impactful things.
I don’t think there IS much low-hanging fruit. Seemingly-easy things are almost always more complicated, and the credit for deceptively-hard things skews the wrong way: promising and failing hurts a lot (didn’t even do this little thing), promising and succeeding only helps a little (thanks, but what important things have you done?).
Much better, in politics, to fail at important topics and get credit for trying.
Against “change your mind”
I was just thinking about the phrase “change your mind”. It kind of implies that there is some switch that is flipped, which implies that things are binary (I believe X vs I don’t believe X). That is incorrect[1] of course. Probability is in the mind, it is a spectrum, and you update incrementally.
Well, to play devil’s advocate, I guess you could call 50% the “switch”. If you go from 51% to 49%, you go from “I believe X” to “I don’t believe X”. Maybe not though. It depends on what “believe” means. Maybe “believe” means something more like a high probability estimate of it being true, like 80%+.
How does “change” imply “flip”? A thermometer going up a degree undergoes a change. A mind that updates the credence of a belief from X to Y undergoes a change as well.
Yeah, that’s a fair question/point. I was thinking about that as well. I just get the impression that, in common usage, “change your mind” usually implies some sort of “flip”. Not for everyone though; some people might just mean “update”.