Good luck! :)
The way you use "intelligence" is different from what many people here mean by that word.
Check this out (for a partial understanding of what they mean): https://www.lesswrong.com/posts/aiQabnugDhcrFtr9n/the-power-of-intelligence
Interesting! I’ve recently been thinking a bunch about “narratives” (frames) and how strongly they shape how and what we think. That makes it much harder to see “the” truth, since changing the narrative changes things quite a bit.
I’m curious if anyone has an example of how they would go about applying frame-invariance to rationality.
These kinds of explorations (unusual and truth-seeking) are why I love lesswrong :)
I found the post “Reward is not the optimization target” quite confusing. This post cleared the concept up for me, especially the selection framing and example. Thank you!
Finland too (and I expect quite a few other EU countries to do so as well)
AGI Safety core, a Twitter list by JJ (from AI Safety Support): https://mobile.twitter.com/i/lists/1185207859728076800
Lily: If I was a parent I would change the fifteen minutes to ten minutes. Screen time is kind of bad for kids. I also like having an hour and a half for movies, but I think maybe it’s a bit much?
haha that’s so sweet! :D
Tl;dr: Love used to be in short supply (for myself and others). I read Replacing Guilt and tried improv + metta meditation. Now it is in large supply and has led to significant positive changes in my actions.
I had always been in a single-player, critical mindset, optimizing everything for myself. Thinking about what would be a nice thing to do for others (and empathizing with their feelings) hardly ever popped into my awareness.
Over the last year:
- Replacing Guilt made me realize I didn’t need negative thoughts to motivate me. This led me to incrementally decrease my self-criticism and learn to treat myself as I would a close friend.
- Improv acting made me realize on a gut level that everyone has their own subjective experience, one that feels as true to them as mine feels to me. This got me out of my head and helped me be present with the other person.
- Metta meditation made me realize I can learn to love others and myself far more than I do. One consequence is that deeply loving someone makes it really unlikely that you’ll have a bad social experience with them. Problems such as awkwardness, feeling judged, and bad conversations just die off, and you actively start having very enjoyable social experiences. I’m surprised no self-help book talks about this.
Obviously, the process involved a lot more ups and downs than suggested here. But these are the three big factors that I feel capture the fundamental changes.
I’m incredibly thankful to LessWrong and the wider rationality movement for the mental tools they provide. My 2020 self would not have predicted this :)
I assume EA student groups have a decent proportion of rationalists in them (30%?), so the two categories are not as easily separable, and thus it’s not as bad as it sounds for rationalists.
This is lovely! I have a couple of questions (I’ll post them in the AMA as well if this is not a good place to ask):
- What is the reasoning behind non-disclosure by default? It seems to be the opposite of what EleutherAI does.
- Will you be approachable for incubating less experienced people (for example, student interns), or do you not want to take on that overhead right now?
Metta (loving-kindness) meditation would be an example practice that tries to focus attention on actively loving others in order to get better at it over time.
I don’t currently have time to point to concrete research backing it up, but it has often been discussed positively on LessWrong and the EA Forum, and I have had surprisingly good results from it. In my experience it has quite a quick feedback loop, so trying it out might be the most efficient way of testing it. The Waking Up app by Sam Harris is a good starting point.
This is a great idea! I’m gonna try it out. It fixes quite a lot of things with existing systems, as you point out.
I’m curious, though: how long have you been experimenting with it, and how has it been? I’m assuming it went well, but I’m interested in the details of your process (setbacks, changes, etc.) and expect they’ll be helpful for others experimenting with this as well :)
I’ve often thought about this, and this is the conclusion I’ve reached.
There would need to be some criterion that separates morality from immorality. Given that, consciousness (i.e., self-modelling) seems like the best criterion given our current knowledge. Obviously, there are gaps (like the comatose patient you mention), but we currently do not have a better metric to latch on to.
I put my laptop on a box on top of my desk and use an external keyboard and mouse to operate it.
Love this initiative! I do have a question, though. It seems that people with 100+ karma have most likely already figured out how to write publicly at a decent quality, so this service would simply be a bonus for them.
Isn’t it more important to offer this service to lurkers/readers on LessWrong who haven’t yet written many posts, due to the reasons you’ve mentioned?
Disclaimer: I don’t have 100+ karma and haven’t written much publicly either, just privately in my note-taking app.
Thanks for writing this! While reading the post, I was also thinking that this heuristic of building better systems is useful for deciding what to work on in our careers as well.
I’m surprised fatebook.io isn’t an answer here. In the past I tried a bunch of personal prediction tools and felt dissatisfied, whether because of complexity, UI, or something else. Anyway, I’ve been using Fatebook for a couple of weeks now and am loving it.
More info: https://www.lesswrong.com/posts/yS3d46m23wRKDQobt/introducing-fatebook-the-fastest-way-to-make-and-track