just_browsing
Well put and I agree.
Karma is tricky as a measure because subreddits are non-stationary. In particular, I feel like the “vibes” of all the subreddits I listed were different 6+ months ago, and they are becoming more homogenous (in part due to power users such as Kat Woods). I don’t know of a way to view what the “hot” page of any given subreddit would have looked like at some previous point in time, so it’s hard to find data to understand subreddit culture drift. Anyway, the high karma is also consistent with selection effects: users who do not like this content bounce off, and only the users who do stick around those subreddits in the long term.
Typically I agree with the underlying facts behind her memes! For example, I also think AI safety is a pressing issue. If her memes were funny, I would instead be writing a post about how awesome it is that Kat Woods is everywhere. My main objection is that I do not like the packaging of the ideas she is spreading. For example, the memes are not funny. (See the outline of this post: content, vibes, conduct.)
You asked for an example of Kat Woods content that aims to convince rather than educate. Here is one recent example. I feel like the packaging of this meme conveys: “all of the objections you might have to the idea of X-risk via AI can actually be easily debunked, therefore you would be stupid to not believe in X-risk via AI”.
In reality, questions regarding the likelihood of x-risk via AI are really tricky. Many thoughtful people have thought about these problems at great length and declared them to be hard and full of uncertainty. I feel like this meme doesn’t convey this at all. Therefore, I’m not sure whether it is good for people’s brains to consume this content. I will certainly say it’s not good for my brain to consume this content.
Everywhere I Look, I See Kat Woods
Suggestion: could you also transcribe the Q&A? 4 out of the 10 minutes of content is Q&A.
Here I cite reddit posts, not literature, because /r/fasting has a lot of good anecdotal data, and many weight loss studies are limited in scope.
The answers to any of these questions will likely depend on your starting weight.
On Question 2: In theory this is just a function of your BMR (basal metabolic rate) and TDEE (total daily energy expenditure). While fasting, your intake is zero, so your daily deficit equals your TDEE. Using the common heuristic of ~3500 kcal per pound of fat, if you are large enough to have a TDEE of 3000 kcal, then you will lose just under 1 lb of body mass per day (how much of that is muscle vs. fat is unclear).
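The back-of-the-envelope arithmetic above can be sketched in a few lines. This is a minimal sketch, assuming the rough ~3500 kcal per pound heuristic; the function names are mine:

```python
# Rough fasting weight-loss estimate using the common heuristic that a
# ~3500 kcal deficit corresponds to ~1 lb of fat loss. This is an
# idealized model: it ignores water weight, metabolic adaptation, and
# muscle loss, so treat the output as an optimistic estimate of fat loss.

KCAL_PER_LB = 3500  # widely used heuristic, not a precise constant

def fasting_loss_lbs_per_day(tdee_kcal: float) -> float:
    """While fasting, intake is 0, so the daily deficit equals TDEE."""
    return tdee_kcal / KCAL_PER_LB

def days_to_lose(target_lbs: float, tdee_kcal: float) -> float:
    """Days of fasting needed to lose target_lbs under the heuristic."""
    return target_lbs / fasting_loss_lbs_per_day(tdee_kcal)

if __name__ == "__main__":
    # A TDEE of 3000 kcal gives just under 1 lb/day, matching the text.
    print(round(fasting_loss_lbs_per_day(3000), 2))  # → 0.86
    print(round(days_to_lose(30, 3000), 1))          # → 35.0
```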
In practice this is a bit of an overestimate. For anecdotal success stories you could go to /r/fasting. Sorting by Top: All Time, I see:
104lbs lost in “4 months” (14 day fasts followed by 5 days keto at a slight deficit) = 104lbs in 90ish days of fasting (morbidly obese man)
90lbs lost in a 40 day water fast (morbidly obese man)
Many (often smaller) people doing OMAD, rolling 48, etc
Searching for “14 day” I see: (keep in mind, about 10+lbs of this is water weight)
Common wisdom on this subreddit is you get 0.5lbs/day of “real fat loss” during an extended fast.
Retrospective: This comment was helpful
Write in order to organize your thoughts [...] then record yourself giving a short explanation of what you’ve learned about the topic [...] Watch the recording and process the emotions/discomforts with your speaking that come up
Haven’t done the “record yourself” part, but I have since started deliberately practicing explaining particular concepts. Typically I will practice a concept 5 times in a row, and after each time think carefully about what went well/poorly. Multiple comments suggested practice, but I think this one resonated with me most (even though I’m not into focusing stuff).
Retrospective: I found this particularly helpful
Watch podcast interviews. Pay attention to how the host asks questions.
Retrospective: I found this particularly helpful
The best way to sound smart is to spend hours preparing something and present it as if you made it up on the spot. Really smart people will have a ton of prepared phrases, so many that they can talk on a wide variety of topics by saying something they already know how to say and just modifying it a little.
I think you can 80/20 all this stuff by being “moderately active” instead of “an athlete”.
Average BMI in the United States increased from 25.2 in 1975 to 28.9 in 2014, so a 3-point increase. Compare an average 1975 person with an average 2014 person. It’s far more likely that the 3-point increase is due to overeating than to other explanations like packing on muscle (3 whole points of muscle is a lot) or variation in bone mass (this is likely negligible). Overeating is the path of least resistance in wealthy Western countries. So yes, technically BMI is not the same thing as fatness, but the two are highly correlated.
Also, as Rockenots points out, your height claim goes in the wrong direction. BMI is an underestimate of fatness for very tall people. For example, a healthy-weight 6′2″ man’s BMI might be 17 or 18, which according to the standard BMI scale is underweight. That’s why measures like better BMI exist.
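For concreteness, here is the standard BMI formula alongside one well-known height-adjusted variant (Trefethen’s “new BMI”, one candidate for a “better BMI”). This is just an illustration of the two formulas, not a claim about which correction is right; the example weight is mine:

```python
# Standard BMI (kg / m^2) vs. a height-adjusted variant,
# "new BMI" = 1.3 * kg / m^2.5. The constant 1.3 makes the two
# coincide at a height of 1.69 m (where 1.3 / sqrt(h) = 1), so they
# only diverge for people far from average height.

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def new_bmi(weight_kg: float, height_m: float) -> float:
    return 1.3 * weight_kg / height_m ** 2.5

if __name__ == "__main__":
    # A 6'2" (1.88 m) man at 63 kg (~139 lbs):
    print(round(bmi(63, 1.88), 1))      # → 17.8, "underweight" on the standard scale
    print(round(new_bmi(63, 1.88), 1))  # → 16.9
```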
Notice when you stop reading right before you understand
AI capabilities are advancing rapidly. It’s deeply concerning that individual actors can plan and execute experiments like “give an LLM access to a terminal and/or the internet”. However, I need to remember it’s not worth spending my time worrying about this stuff. When I worry about this stuff, I’m not doing anything useful for AI safety; I am just worrying. Instead it is more constructive to avoid these thoughts and focus on completing projects I believe are impactful.
[Question] How to become more articulate?
[Question] Tips for productively using short (<30min) blocks of time
Wow thanks for sharing. I might steal the NFC / walk scheduling ideas—those sound like they could be useful.
Long shot but you haven’t happened to figure out how to get Tasker to interface with “Focus Mode” have you? That’s one thing I haven’t managed to get Tasker to detect yet.
- Sep 19, 2021, 8:56 PM; 2 points; comment on “The Best Software For Every Need”
“Don’t make us look bad” is a powerful coordination problem which can have negative effects on a movement. Examples:
Veganism has a reputation for being holier-than-thou. It’s hard to be a vegan without getting lumped in with “those vegans”. So it’s hard to be open about being vegan, which makes it tricky to make veganism more socially acceptable.
Ideas perceived as crazy are connected to the EA movement. For example, EAs seriously discuss the possibility that we are living in a simulation. So do flat earthers. Similarly, outsiders could dismiss EA as being too crazy for many other superficial reasons. The NYT article on Scott Alexander (https://www.nytimes.com/2021/02/13/technology/slate-star-codex-rationalists.html) sort of acts as an example: juxtaposing “MIRI” and “NRx” implicitly undermines the credibility of AI Safety research. EAs trying to work in public policy, for example, might not want to publicly identify as “EA” because “the other EAs are making them look bad”.
A person who is part of a movement does something controversial. It makes the movement look bad. For example, longevity has been getting negative press due to the Aubrey de Grey scandal.
The coordination problems the US democratic party faces, described by David Shor in this Rationally Speaking podcast episode (http://rationallyspeakingpodcast.org/wp-content/uploads/2020/11/rs248transcript.pdf).
And that’s—coordination’s a very hard thing to do. People have very strong incentives to defect. If you’re an activist going out and saying a very controversial thing, putting it out there in the most controversial, least favorable light so that you get a lot of negative attention. That’s mostly good for you. That’s how you get attention. It helps your career. It’s how you get foundation money. [...]

And we really noticed that all of these campaigns, other than, I guess, Joe Biden, were embracing these really unpopular things. Not just stuff around immigration, but something like half the candidates who ran for president endorsed reparations, which would have been unthinkable, it would have been like a subject of a joke four years ago. And so we were trying to figure out, why did that happen? [...]

But we went and we tested these things. It turns out these unpopular issues were also bad in the primary. The median primary voter is like 58 years old. Probably the modal primary voter is a 58-year-old black woman. And they’re not super interested in a lot of these radical sweeping policies that are out there.

And so the question was, “Why was this happening?” I think the answer was that there was this pipeline of pushing out something that was controversial and getting a ton of attention on Twitter. The people who work at news stations—because old people watch a lot of TV—read Twitter, because the people who run MSNBC are all 28-year-olds. And then that leads to bookings.

And so that was the strategy that was going on. And it just shows that there are these incredible incentives to defect.

One takeaway: a moderate Democrat like Joe Biden suffers because crazier-looking Democrats like AOC are “making him look bad”, even if his and AOC’s goals are largely aligned. I can only assume that the Republican Party faces similar issues (though that isn’t discussed in this podcast episode).
Are there more examples of “don’t make us look bad” coordination problems like these? Any examples of overcoming this pressure and succeeding as a movement?
How much do extreme people harm movements? What affects this?
For example, in politics, there are a few high-stakes all or nothing elections, where having extreme people quiet down could be beneficial to a particular party. On the other hand, no extreme voices could mean no progress.
In veganism/EA, maybe extreme voices have less of a negative effect because there aren’t as many high-stakes all or nothing opportunities. Instead, a bunch of decentralized actors do stuff. Clearly so far EAs seem to be doing fine interfacing with governments (e.g. CSET) so maybe the “don’t make us look bad” factor is less here.
This seems interesting and important.
Tasker actions which save me time and sanity
This is a good point concerning current gait recognition technology. However, I don’t doubt it will improve. On longer timescales, this should happen naturally as compute gets cheaper and more data gets collected. On shorter timescales, this can be accelerated using techniques such as synthetic data generation.
Perhaps there is a natural limit to gait recognition, if it turns out that people can’t be uniquely identified from their gait, even in the limit of perfect data. But if there isn’t, then in 10 years, “94%” will turn into “99.999%”, or whatever is needed for gait recognition to be worth thinking about.
In this situation (and in the situation where I leave my phone at home), this question becomes relevant again.
Thanks for the thoughtful reply. The post is both venting for the fun of it (which, clearly, landed with absolutely nobody here) and earnestly questioning whether the content is net positive (which, clearly, very few interpreted as being earnest):
There is precedent for brands and/or causes making bad memes and suffering backlash. I mention PETA in the post. Another example is this Pepsi commercial. There is also specifically precedent for memes getting backlash because they are dated, e.g. this Wendy’s commercial. You might say that for brands all press is good press, but this seems less true to me when it comes to causes.
I don’t know a lot about PETA or whether their animal activism is considered net positive. On the one hand, a cursory google suggests they got some fast food restaurants to add vegetarian options. On the other hand, it wouldn’t be surprising if they shifted public sentiment against vegetarianism and veganism; their controversy is what most people think of when they think of PETA.
Anyway, you could imagine something similar happening with AI safety, where sufficiently bad memes cause people to not take it seriously.