Trying to become stronger.
hath
Are you accepting minors for this program?
Thank you for the post, and thank you for all the editing you’ve done!
I’m an idiot; Blue Bottle is closed. Maybe the park next to it?
The park next to it works as well.
I’ve heard good things about Blue Bottle Coffee. It’s also next to Lightcone.
I second this, I sincerely thought these were thoughts you held.
Yeah, you’re right. Oops.
>Do you have any experience in programming or AI?
Programming, yes; I’d say I’m a skilled amateur, though I need to just do more programming. AI experience, not so much, other than reading (a large amount of) LW.
>Let’s suppose you were organising a conference on AI safety. Can you name 5 or 6 ways that the conference could end up being net-negative?
The conference involves someone talking about an extremely taboo topic (eugenics, say) as part of their plan to save the world from AI; the conference is covered in major news outlets as “AI Safety has an X problem” or something along those lines, and leading AI researchers are distracted from their work by the ensuing twitter storm.
One of the main speakers at the event is very good at attracting money through raw charisma and ends up diverting funding for projects/compute away from other, more promising projects; later it turns out that their project actually accelerated the development of an unaligned AI.
The conference on AI safety doesn’t involve the people actually trying to build an AGI, and only involves the people who are already committed to and educated about AI alignment. The organizers and conference attendees are reassured by the consensus of “alignment is the most pressing problem we’re facing, and we need to take any steps necessary that don’t hurt us in the long run to fix it,” while that attitude isn’t representative of the audience the organizers actually want to reach. The organizers make future decisions based on the belief that “leading AI researchers are already concerned about alignment to the degree we want them to be,” which ends up being wrong; they should have been more focused on reaching leading AI researchers.
The conference is just a waste of time, and the attendees could have been doing better things with the time/resources spent attending.
There’s a bus crash on the way to the event, and several key researchers die, setting back progress by years.
Similar to #2, the conference convinces researchers that [any of the wrong ways to approach “death with dignity” mentioned in this post] is the best way to try to solve x-risk from AGI, and resources are put towards plans that, if they fail, will fail catastrophically.
“If we manage to create an AI smarter than us, won’t it be more moral?” or any AGI-related fallacy disproved in the Sequences is spouted as common wisdom, and people are convinced.
As far as I know, the purpose of the nomination is “provide an incentive for you to share the Atlas Fellowship with those you think might be interested,” not “help make our admissions decisions.” I agree that, if the nomination form were weighted heavily in the admissions decisions, we would be incentivized to speak highly of those who don’t deserve it in order to get the $500.
High charisma/extroversion, not much else I can think of that’s relevant there. (Other than generally being a fast learner at that type of thing.)
Not something I’ve done before.
Enjoy it while it lasts. /s
Are we changing from “payment sent every day at midnight” to “payment sent at end of week”?
Also this comment:
Eliezer, do you have any advice for someone wanting to enter this research space at (from your perspective) the eleventh hour?
I don’t have any such advice at the moment. It’s not clear to me what makes a difference at this point.
If you didn’t already try, I bet Lightcone would let you post more if you asked over Intercom.
Thank you so much! Fixed.
(although measuring impact on alignment to that degree might be of similar difficulty to actually solving alignment).
Sure, but it’s dignity in the specific realm of “facing unaligned AGI knowing we did everything we could”, not dignity in general.
Do you have any ideas for how to go about measuring dignity?
I mean this completely seriously: now that MIRI has changed to the Death With Dignity strategy, is there anything that I or anyone on LW can do to help with said strategy, other than pursue independent alignment research? Not that pursuing alignment research is the wrong thing to do, just that you might have better ideas.
I’d like to have the ability to leave Google-Doc-style suggestions about typos on normal posts; that seems like it might be superior to our current system of doing it through the comments. Removing the trivial inconvenience might go a long way.