Align it
sudo
I think I was deceived by the title.
I’m pretty sure that rapid capability generalization is distinct from the sharp left turn.
dedicated to them making the sharp left turn
I believe that “treacherous turn” was meant here.
Wait I’m pretty confident that this would have the exact opposite effect on me.
You can give ChatGPT the job posting and a brief description of Simon’s experiment, and then just ask it to provide critiques from a given perspective (e.g. “What are some potential moral problems with this plan?”). A scriptable sketch of this is below.
I clicked the link and think, ex post, that it was a bad idea. I think my attempted charitable reading of the Reddit comments yielded significantly less constructive feedback than ChatGPT would have provided.
I suspect that rationalists engaging with this form of content harms the community a non-trivial amount.
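For anyone who would rather script the critique-prompting above than use the chat interface, here is a minimal sketch using the OpenAI Python client. The model name, prompt wording, and the job_posting/experiment_summary variables are illustrative placeholders, not anything specified in the original suggestion.

```python
# Minimal sketch: ask a chat model for critiques from a specific perspective.
# Model name and prompt text are assumptions; fill in the placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

job_posting = "..."         # placeholder: paste the job posting here
experiment_summary = "..."  # placeholder: brief description of the experiment

prompt = (
    f"Here is a job posting:\n{job_posting}\n\n"
    f"Here is a brief description of the experiment:\n{experiment_summary}\n\n"
    "What are some potential moral problems with this plan?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

Swapping in a different perspective is just a matter of changing the final question in the prompt.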
I’m a fan of this post, and I’m very glad you wrote it.
I understand feeling frustrated given the state of affairs, and I accept your apology.
Have a great day.
You don’t have an accurate picture of my beliefs, and I’m currently pessimistic about my ability to convey them to you. I’ll step out of this thread for now.
I find the accusation that I’m not going to do anything to be slightly offensive.
Of course, I cannot share what I have done and plan to do without severely de-anonymizing myself.
I’m simply not going to take humanity’s horrific odds of success as a license to make things worse, which is exactly what you seem to be insisting upon.
Your reply does not even remotely resemble good faith engagement.
You can unilaterally slow down AI progress by not working on it. Each additional day until the singularity is one additional day to work on alignment.
“Becoming the fire” because you’re doomer-pilled is maximally undignified.
Why not create non-AI startups that are way less likely to burn the capabilities commons?
Random thoughts:
Wouldn’t it be best for the rolling-admissions MATS to be part of MATS?
Some ML safety engineering bootcamps scare me. Once you’re taking in large groups of new-to-EA/new-to-safety people and teaching them how to train transformers, I’m worried about downside risks. I have heard that Redwood has been careful about this. Cool if true.
What does building a New York-based hub look like?
Illegal to bet? Illegal to host?
What sort of value do you expect to get out of “crossing the theory-practice gap”?
Do you think that this will result in better insights about which direction to focus on during your research, for example?
I filled out an application. This looks like a very promising program.
I was recently watching some clips of Aaron Gwin, the American professional mountain bike racer, riding. Reflecting on how amazing humans are. How good we can get, with training and discipline.
Did some math today, and remembered what I love about it. Being able to just learn, without the pressure and anxiety of school, is so wonderfully joyful. I’m going back to basics, and making sure that I understand absolutely everything.
I’m feeling very excited about my future. I’m going to learn so much. I’m going to have so much fun. I’m going to get so good.
When I first started college, I set myself the goal of looking, by now, like an absolute wizard to the me of a year ago. To be advanced enough to be indistinguishable from magic.
A year in, I can now do things that I couldn’t have done a year ago. I’m more lucid, I’m more skilled, I’m more capable, and I’m more mature than I was a year ago. I think I did it.
I’m setting myself the same goal again. I’m so excited to hit it out of the park.
How would you identify a second Yudkowsky? I really don’t like this trope.
By writing ability?
Strong upvoted.
The result is surprising and raises interesting questions about the nature of coherence. Even if this turns out to be a fluke, I predict that it’d be an informative one.