Try taking one level at a time and pausing between levels. You might just be getting frustrated, and a bit of freshness will help.
What do you mean by “the most”? How likely is it that you have no nutritional deficiencies?
It used to be believed that intensity was basically irreplaceable, but more and better studies have shown extremely similar effects from lower intensity, down to approximately 60-65% of your 1 rep max, whereas a 4 or 5 rep scheme is going to be around 80-85% of your 1 rep max.
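To make those percentages concrete, here’s a minimal Python sketch using the Epley 1RM estimate (1RM ≈ w × (1 + reps/30)). The choice of formula is my own illustration; it’s one of several common estimates and isn’t something the studies above specify:

```python
# Rough sketch of the rep-count / intensity mapping, assuming the
# Epley estimate: 1RM ~= w * (1 + reps / 30). The formula is an
# illustrative assumption, not from the comment above.

def percent_1rm(reps: int) -> float:
    """Approximate fraction of your 1RM you can lift for `reps` reps."""
    return 1 / (1 + reps / 30)

for reps in (5, 10, 15, 20):
    print(f"{reps:>2} reps -> ~{percent_1rm(reps):.0%} of 1RM")

# 5 reps  -> ~86% of 1RM (close to the 80-85% range above)
# 15-20 reps -> ~60-67%  (the lower-intensity end mentioned)
```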
Can you list some of those studies?
I agree with everything you say about how studies of this issue can go wrong, but I can’t entirely agree with your conclusion that it seems probably harmless. It depends on what you mean by that. If you mean that the effect of pornography is more or less neutral on average: I’m not sure, but I’m also not sure about the opposite. If you mean that somebody should just go ahead and start consuming this media: I think it would be good to be a little more careful. There is some evidence suggesting that pornography can negatively impact relationships, and it seems quite clear to me that starting to consume pornography is easier than stopping. If there is a chance of developing an addiction that negatively influences your life and relationships, maybe you should just be careful.
I’m a little surprised by your answer. Do you consider fixing nutritional deficiencies part of a healthy diet? There is some good evidence here that iron deficiency is bad for you.
Looks like a pretty good alternative, thanks! But I just realized that goals have some properties I care about that themes lack: they really narrow your focus.
Looking for fundamental alternatives to the concept of goals in organizing life
I’m obsessed with planning and organizing my life, and I also tend to overthink and analyze things. Goals are a fundamental piece of organization for me. I try to make a substantial part of my life focused on achieving goals: I work out to keep my body healthy, and I work to earn money and feel secure. But I often feel anxious, and I ask myself if there is any other way of organizing life that avoids the concept of goals altogether. I also think it’s useful to imagine living a life without some crucial concept.
It seems that it’s quite hard to avoid thinking about goals in general when you define goals as anything that you plan and decide to pursue. It might be possible with some activities when you just try to follow your curiosity and do not think about the long-term effects of your actions. Does art need goals? But then it seems that following curiosity just becomes your next goal.
There are a bunch of news articles on the internet describing “Elon Musk’s rules for productivity.” I don’t know if Elon Musk really wrote them, but that’s not the point. One of the rules usually goes like this:
6) Use common sense
If a company rule doesn’t:
- Make sense
- Contribute to progress
- Apply to your specific situation
Then avoid following the rule blindly.
I really don’t agree with it. Rules are usually put into place for some very specific reason that might be hard for us to see, but it is there nevertheless. I’m a software developer, and I think that if I had listened to colleagues telling me about rules like “don’t try to optimize it when you are still figuring out what you actually want to do,” I would be a much better developer right now. But I usually didn’t, and I spent a lot of time figuring out how to optimize things that didn’t really need it.
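To show the kind of thing I mean, here’s a hypothetical Python sketch (the `db.query` API and the function names are invented for illustration): the “optimized” version bakes in a cache before anyone has measured a bottleneck, and it’s already the harder one to change.

```python
# Hypothetical example of premature optimization; `db.query` is an
# invented API used only for illustration.

# Premature: a module-level cache added before profiling showed any
# need for one. It returns stale results as soon as the data changes.
_cache: dict[str, list[str]] = {}

def get_active_users_cached(db, team: str) -> list[str]:
    if team not in _cache:
        _cache[team] = [u.name for u in db.query(team) if u.active]
    return _cache[team]

# Straightforward: do the obvious thing until measurement says otherwise.
def get_active_users(db, team: str) -> list[str]:
    return [u.name for u in db.query(team) if u.active]
```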
I found Section 6 particularly interesting! Here’s how I understand it:
- Most of our worries about AI stem from catastrophic scenarios, like AI killing everyone.
- It seems that to prevent these outcomes, we don’t need to do extremely complex things, such as pointing AI towards the extrapolated values of humanity.
- Therefore, we don’t need to focus on instilling a perfect copy of human values into AI systems.
As I understand it, this relates to the “be careful what you wish for” problem with AI, where AI could optimize in dangerous or unexpected ways. There’s a race here: can we control AI well enough to still gain its benefits?
However, I don’t think you’ve provided enough evidence that this level of control is actually possible. Additionally, there’s the issue of deceptive alignment—I’m not convinced we could manage this “race” without receiving some kind of feedback from AI systems.
Finally, the description of the oracle AI in this section seems quite similar to the idea of corrigible AI.