Thanks for the advice. I don’t want to do alternating days, because doing the same thing every day makes it easier to keep as a habit (for me, anyway). More weight with fewer reps per set and doing a circuit both make sense. I’m sort of combining weight maintenance and strength goals, and I should probably meet with someone who advises on these questions for a living instead of winging it.
Normal_Anomaly
Yes, thank you! I’ll add the link.
“Step 1: I decided to find an activity, sport, hobby where fitness can actually be used. In my case climbing.”
My intention was to give strategies that can be used to build any good habit, not necessarily physical fitness. But within the realm of fitness, you make a good point that a sport where you can see the gains provides additional motivation on top of the desire to be healthier.
I can now do at least two consecutive pull ups and sometimes three. Hardly world class, but I feel great about it. I also succeeded last December at the climbing route that, when I couldn’t complete it, inspired me to start working out. With the cardio I started a few months ago, I’ve gone from panting for air and feeling awful after running a mile to being able to run two miles and start to enjoy it.
How would you suggest we find the right utility function without using machine learning?
If I find out, you’ll be one of the first to know.
I never claimed that evolution did a good job, but I would argue that it gave us a primary directive: to further the human species.
No, it didn’t. That’s why I linked “Adaptation Executers, not Fitness Maximizers”. Evolution didn’t even “try to” give us a primary directive; it just increased the frequency of anything that worked on the margin. But I agree that we shouldn’t rely on machine learning to find the right utility function.
How I changed my exercise habits
There’s little I can change about my beliefs that would improve my mood, aside from becoming implausibly optimistic about my future.
How do you think you know that? Maybe some of your beliefs or aliefs are causing wrong actions that are making you sad. From what you say elsewhere in your comment, it sounds like your depression is triggered by romantic failure, so changes to beliefs that help you relate to people better probably could improve your mood. In fact, your particular case of wanting “a relationship . . . in which nobody’s deceiving anybody” sounds like a good one for CBT. (Or rather for fixing with rationality-type changes in general; I don’t know enough about CBT vs. other therapies to really say.)
The reason in the past was probably disease and/or unintended pregnancy, and both of those can be fixed now. Also concerns about making sure women wouldn’t cheat on their husbands and leave them raising someone else’s kid, I think. The third reason, which is still applicable today, is that hiring a sex worker signals “can’t get sex without paying, therefore undesirable” but that’s probably not too big of a deal.
I can confirm this. I stayed in a hostel in London for a week last month, and got way more social interaction than I was expecting and about as much as my introverted self could stand. Including one invitation to dinner that may or may not have been a date.
Going through in order:
1 is a confession of bad epistemology,
2 is an assertion with no bad epistemology but a wrong premise,
3 is a generic wrong assertion with a “and that’s beautiful” tacked on the front,
4 is a true statement largely independent of religious questions,
5 is good epistemology applied to wrong premises.
Does that engage with what you were asking, or have I misparsed you completely?
I think there’s an open thread once or twice a month. Also, IMO this post would go better in an open thread than a stupid questions thread; the stupid questions thread is for sharing advice.
IAWYC, but disagree on the last sentence: it’s not an interesting question because it’s a wrong question. Superintelligent AI can’t have a “custodian”. Geopolitics of non-superintelligent AI that is smarter than a human but won’t FOOM is a completely different question, probably best debated by people who speculate about cyberwarfare since it’s more their field.
My reaction to the first quoted statement was a big “Huh?”. The only reason it would matter where superintelligent AI is first developed is that the researchers in different countries might do friendliness more or less well. A UFAI is equally catastrophic no matter who builds it; an AI that is otherwise friendly but has a preference for one country would . . . what would that even mean? Create eutopia and label it “The United Galaxy of America”? Only take the CEV of Americans instead of everybody? Either way, getting friendliness right means national politics is probably no longer an issue.
Also: I did not vote for this guy in the Transhumanist Party primaries!
I think this is at bottom a restatement of “determining the right goals with sufficient rigor to program it into an AI is hard; ensuring that these goals are stable under recursive self-modification is also hard.” If I’m right, then don’t worry; we already know it’s hard. Worry, if you like, about how to do it anyway.
In a bit more detail:
“the most promising developments have been through imitating the human brain, and we have no reason to believe that the human brain (or any other brain for that matter) can be guaranteed to have a primary directive. One could argue that evolution has given us our prime directives: to ensure our own continued existence, to reproduce and to cooperate with each other; but there are many people who are suicidal, who have no interest in reproducing and who violently rebel against society (for example psychopaths).”
Evolution did a bad job. Humans were never given a single primary drive; we have many. If our desires were simple, AI would be easier, but they are not. So evolution isn’t a good example here. Also, I’m not sure of your assertion that the best advances in AI so far came from mimicking the brain. The brain can tell us useful stuff as an example of various kinds of program (belief-former, decision-maker, etc.) but I don’t think we’ve been mimicking it directly. As for machine learning, yes there are pitfalls in using that to come up with the goal function, at least if you can’t look over the resulting goal function before you make it the goal of an optimizer. And making a potential superintelligence with a goal of finding [the thing you want to use as a goal function] might not be a good idea either.
That was why I was curious: presumably they didn’t get here through any of the usual channels, so LW’s reputation has gone somewhere I wouldn’t expect. Ah well, just as well they’re gone, should’ve asked faster.
The quality of argument in this post is awful, but the closest thing to a main point that I can extract from it is “there is no rational reason for human nudity taboos”, which is amusing because it’s probably true. Not important, but still true. Also, hoofwall, how did you even find this website? It’s not the sort of website that people who haven’t picked up a book since 8th grade usually find, let alone care to post on.
Maybe sometime before I die of old age, if I’m very lucky, or sufficiently shortly afterward that it’s worth getting cryonics and hoping. Probably sometime within the next 100-200 years, if something else doesn’t make it unnecessary by then.
I’m taking a class in Haskell, and I’d really like to know this too. Haskell is annoying. It’s billed as “not verbose”, but it’s so terse that reading other people’s code and learning from it is difficult. (Note: the person I’m on a project with likes one-letter variable names, so that’s a bit of a confounder.)
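To illustrate the readability complaint, here is a minimal sketch (names and the example function are my own invention, not from any particular project): the same fold written in the terse one-letter style versus with descriptive names.

```haskell
-- Terse style, common in Haskell code in the wild: correct, but opaque
-- to a reader who doesn't already know what f is supposed to do.
f :: [Int] -> Int
f = foldr (\x a -> x * x + a) 0

-- The same function with descriptive names: slightly longer, much
-- easier to read and learn from.
sumOfSquares :: [Int] -> Int
sumOfSquares xs = foldr addSquare 0 xs
  where
    addSquare x acc = x * x + acc

main :: IO ()
main = print (sumOfSquares [1, 2, 3])  -- prints 14
```

Both definitions behave identically; the difference is purely in how much context the reader has to reconstruct.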
The Lisps, from the Album “Are We at the Movies”.