Trying to become stronger.
Cultivating And Destroying Agency
What do you do to deliberately practice?
Intercom doesn’t change in Dark Mode. Also, the boxes around the comment section are faded, and the logo in the top left looks slightly off. Good job implementing it, though, and I’m extremely happy that LW has this feature.
If you are going to downvote this, at least argue why.
Fair. Should’ve started with that.
>To the extent that rationality has a purpose, I would argue that it is to do what it takes to achieve our goals,
I think there’s a difference between “rationality is systematized winning” and “rationality is doing whatever it takes to achieve our goals”. That difference requires more time to explain than I have right now.
>if that includes creating “propaganda”, so be it.
I think that if this works like they expect, it truly is a net positive.
I think that the whole AI alignment thing requires extraordinary measures, and I’m not sure what specifically that would take; I’m not saying we shouldn’t do the contest. I doubt you and I have a substantial disagreement as to the severity of the problem or the effectiveness of the contest. My above comment was more “argument from ‘everyone does this’ doesn’t work”, not “this contest is bad and you are bad”.
Also, I wouldn’t call this contest propaganda. At the same time, if this contest were “convince EAs and LW users to have shorter timelines and higher chances of doom”, it would be received differently. There is a difference: convincing someone to have a shorter timeline isn’t the same as trying to explain the whole AI alignment problem in the first place. Still, I worry that we could take that too far. I think that (most of) the responses John’s comment got were good, and they reassure me that the OPs are actually aware of and worried about John’s concerns. I see no reason why this particular contest will be harmful, but I can imagine a future where pivoting mainly to strategies like this has some harmful second-order effects (which would need their own post to explain).
You didn’t refute his argument at all; you just said that other movements do the same thing. Isn’t the entire point of rationality that we’re meant to be truth-focused and winning-focused in ways that don’t manipulate others? Are we not meant to hold ourselves to the standard of “Aim to explain, not persuade”? Just because others in the reference class of “movements” do something doesn’t mean it’s something we should immediately replicate! Is that not the obvious, immediate response? Your comment proves too much; it could be used to argue for literally any popular behavior of movements, including canceling/exiling dissidents.
Do I think that this specific contest is non-trivially harmful at the margin? Probably not. I am, however, worried about the general attitude behind some of this type of recruitment, and the justifications used to defend it. I become really fucking worried when someone raises an entirely valid objection, and is met with “It’s only natural; most other movements do this”.
Can confirm that this is all accurate. Some of it is much less weird in context. Some of it is much, much weirder in context.
Yeah, my reaction to this was “you could have done a much better job of explaining the context” but:
“Your writing would be easier to understand if you explained things,” the student said.
That was me, so I guess my opinion hasn’t changed.
I’d like the ability to leave Google-Docs-style suggestions for typos on normal posts; that seems like it might be superior to our current system of pointing them out in the comments. Removing the trivial inconvenience might go a long way.
Are you accepting minors for this program?
Thank you for the post, and thank you for all the editing you’ve done!
I’m an idiot; Blue Bottle is closed. Maybe the park next to it?
The park next to there works as well.
I’ve heard good things about Blue Bottle Coffee. It’s also next to Lightcone.
I second this; I sincerely thought these were thoughts you held.
Yeah, you’re right. Oops.
>Do you have any experience in programming or AI?
Programming, yes; I’d say I’m a skilled amateur, though I need to just do more programming. AI experience, not so much, other than reading (a large amount of) LW.
>Let’s suppose you were organising a conference on AI safety. Can you name 5 or 6 ways that the conference could end up being net-negative?
1. The conference involves someone talking about an extremely taboo topic (eugenics, say) as part of their plan to save the world from AI; the conference is covered in major news outlets as “AI Safety has an X problem” or something along those lines, and leading AI researchers are distracted from their work by the ensuing Twitter storm.
2. One of the main speakers at the event is very good at diverting money toward themselves through raw charisma, and ends up pulling money for projects/compute away from other, more promising projects; later it turns out that their project actually accelerated the development of an unaligned AI.
3. The conference doesn’t involve the people actually trying to build an AGI, only the people who are already committed to and educated about AI alignment. The organizers and attendees are reassured by the consensus of “alignment is the most pressing problem we’re facing, and we need to take any steps necessary that don’t hurt us in the long run to fix it,” even though that attitude isn’t representative of the audience the organizers actually want to reach. The organizers then make future decisions based on the belief that “leading AI researchers are already as concerned about alignment as we want them to be,” which turns out to be wrong; they should have focused more on reaching leading AI researchers.
4. The conference is just a waste of time, and the attendees could have been doing better things with the time/resources spent attending.
5. There’s a bus crash on the way to the event, and several key researchers die, setting back progress by years.
6. Similar to #2, the conference convinces researchers that [any of the wrong ways to approach “death with dignity” mentioned in this post] is the best way to try to solve x-risk from AGI, and resources are put toward plans that, if they fail, will fail catastrophically.
7. “If we manage to create an AI smarter than us, won’t it be more moral?” or any other AGI-related fallacy disproved in the Sequences is spouted as common wisdom, and people are convinced.
As far as I know, the purpose of the nomination is “provide an incentive for you to share the Atlas Fellowship with those you think might be interested”, not “help make our admissions decisions”. I agree that, if the nomination form were weighted heavily in the admissions decisions, we would be incentivized to speak highly of people who don’t deserve it in order to get the $500.
High charisma/extroversion; not much else I can think of that’s relevant there. (Other than generally being a fast learner at that type of thing.)
Not something I’ve done before.
Enjoy it while it lasts. /s
Not only is this post great, but it led me to read more James Mickens. Thank you for that! (His writings can be found here).