Yeah maybe—I have a ton of calf problems in general when running, and I should probably see a running coach or something.
This pretty clearly did make the calf problems even worse than usual though :p
I tried the quick gait:
1. running with a backpack
2. running for exercise without a backpack
I think I’m sold on it for 1; it seems better than the long, loping gait I previously used for backpack running.
Not sold for 2; it seems to wear out my calves quickly.
Other things that help you run with a backpack:
1. use both a hip strap and a sternum strap, and pull both (especially the sternum strap) much tighter than you normally would for walking. In my experience this eliminates most of the jostling of the backpack compared to not using the straps
2. instead of carrying your water bottle on the outside, put it inside the pack for better balance and no chance of it falling out
3. use a high-quality backpack with good padding, and probably a rigid back, e.g. this one (https://smile.amazon.com/North-Face-Router-Meld-Black/dp/B092RJ8G86?sa-no-redirect=1&th=1&psc=1). This also helps a ton with jostling and with not getting poked or smacked by things inside the backpack
I’ve tested all of these a bunch, and they help me a ton.
Less robustly useful:
1. hold onto the shoulder straps as you run (reduces jostling a bit)
2. smooth your running gait to reduce jostling
Yep that helps a ton! (having tested it many times)
I’d be interested in joining for a Bay Area kickoff!
Test:
Elicit prediction (https://forecast.elicit.org/binary/questions/el3utYd8Z)
My biggest differences with Rohin’s prior distribution are:
1. I think that it’s much more likely than he does that AGI researchers already agree with safety concerns
2. I think it’s considerably more likely than he does that the majority of AGI researchers will never agree with safety concerns
These differences are explained in more detail in the notes on my distribution and in my other comments.
The next step that I think would most improve my distribution is doing more research.
I thought about how I could most efficiently update my and Rohin’s views on this question.
My best ideas are:
1. Get information directly on this question. What can we learn from surveys of AI researchers or from public statements from AI researchers?
2. Get information on the question’s reference class. What can we learn about how researchers working on other emerging technologies that might have huge risks thought about those risks?
I did a bit of research/thinking on these, which provided a small update towards thinking that AGI researchers will evaluate AGI risks appropriately.
I think that there’s a bunch more research that would be helpful—in particular, does anyone know of surveys of AI researchers on their views on safety?
I answered the following subquestion to help me answer the overall question: “How likely is it that the condition Rohin specifies will not be met by 2100?”
This could happen due to any of the following non-mutually exclusive reasons:
1. Global catastrophe before the condition is met that makes it so that people are no longer thinking about AI safety (e.g. human extinction or end of civilization): 50%
2. Condition is met sometime after the timeframe (mostly, I’m imagining that AI progress is slower than I expect): 5%
3. AGI succeeds despite the condition not being met: 30%
4. There’s some huge paradigm shift that makes AI safety concerns irrelevant—maybe most people are convinced that we’ll never build AGI, or our focus shifts from AGI to some other technology: 10%
5. Some other reason: 20%
I thought about this subquestion before reading the comments or looking at Rohin’s distribution. Based on that thinking, I thought that there was a 60% chance that the condition would not be met by 2100.
I answered the following subquestion to help me answer the overall question: “How likely is it that the condition Rohin specified would already be met (if he went out and talked to the researchers today)?”
Considerations that make it more likely:
1. The considerations identified in ricaz’s and Owain’s comments and their subcomments
2. The bar for understanding safety concerns (question 2 on the “survey”) seems like it may be quite low. It seems to me that researchers entirely unfamiliar with safety could gain the required level of understanding in just 30 minutes of reading (depends on how Rohin would interpret his conversation with the researcher in deciding whether to mark “Yes” or “No”)
Considerations that make it less likely:
1. I’d guess that currently, most AI researchers have no idea what any of the concrete safety concerns are, i.e. they’d be “No”s on question 2
2. The bar for question 3 on the “survey” (“should we wait to build AGI”) might be pretty high. If someone thinks that some safety concerns remain but that we should cautiously move forward on building things that look more and more like AGI, does that count as a “Yes” or a “No”?
3. I have the general impression that many AI researchers really dislike the idea that safety concerns are serious enough that we should in any way slow down AI research
I thought about this subquestion before reading the comments or looking at Rohin’s distribution. Based on that thinking, I thought that there was a 25% chance that the condition Rohin specified would already be met.
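As a rough sanity check on how my two subquestion answers fit together, here is a minimal arithmetic sketch (the variable names are just illustrative, and this isn’t literally how I built my snapshot):

```python
# Rough sanity check (illustrative only): combining my two subquestion answers.
p_already_met = 0.25        # chance Rohin's condition would already be met today
p_never_met_by_2100 = 0.60  # chance the condition is never met by 2100

# Whatever probability remains goes to "condition first met sometime between now and 2100".
p_met_later = 1 - p_already_met - p_never_met_by_2100
print(f"P(first met between now and 2100) = {p_met_later:.2f}")  # 0.15
```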
Note: I work at Ought, so I’m ineligible for the prizes
I’ve been working on making Elicit search work better for reviews. I’d be curious to hear more detail on how Elicit failed here, if you’d like to share!