Fixed, thanks!
Ideopunk
Woooo!
Halifax Monthly Meetup: AI Safety Discussion
Halifax Monthly Meetup: Introduction to Effective Altruism
Halifax Monthly Meetup: Moloch in the HRM
Halifax, NS – Monthly Rationalist, EA, and ACX Meetup
Halifax, NS – Monthly Rationalist, EA, and ACX Meetup Kick-Off
This event is no longer cancelled!
This event is cancelled. I will not be there in time due to Hurricane Fiona, and cannot guarantee another host. Big apologies!
Halifax Rationality / EA Coworking Day
Come hang out, shy pals!
The Agent
From my reading, he’s much more scout than postmodern soldier in his lectures https://foucault.info/parrhesia/foucault.DT1.wordParrhesia.en/ -- and as a bonus, they’re a much easier read.
Aha! Thank you.
This is an excellent post. I expect (and hope!) it will shape how I handle disagreements.
“The person counters every objection raised, but the counters aren’t logically consistent with each other.”
Is there a particular term for this? This is something I’ve encountered before, and having a handle for it might help with addressing it.
Come hang out!
This was a rich read, thank you!
This is interesting. Am I wrong in summarizing it as “deontology helps with coordination”?
(Cross-posted from the EA forum)
Hi, I run the 80,000 Hours job board. Thanks for writing this out!
I agree that OpenAI has demonstrated a significant level of manipulativeness, and I have lost confidence that they will prioritize existential safety work. However, we don’t conceptualize the board as endorsing organisations. The point of the board is to give job-seekers access to opportunities where they can contribute to solving our top problems or build career capital to do so (as we write in our FAQ). Sometimes these roles are at organisations whose mission I disagree with, because the role nonetheless seems like an opportunity to do good work on a key problem.
For OpenAI in particular, we’ve tightened up our listings since the news stories a month ago, and are now only posting infosec roles and direct safety work – a small percentage of the jobs they advertise. See here for the OAI roles we currently list. We used to list roles that seemed more tangentially safety-related, but because of our reduced confidence in OpenAI, we limited the listings further to only roles that are very directly focused on safety or security work. I still expect these roles to be good opportunities to do important work. Two live examples:
Infosec
Even if we were very sure that OpenAI was reckless and did not care about existential safety, I would still expect them not to want their models to leak out to competitors, and importantly, we think it’s still good for the world if their models don’t leak! So I would still expect people working on their infosec to be doing good work.
Non-infosec safety work
These still seem like potentially very strong roles with the opportunity to do very important work. We think it’s still good for the world if talented people work in roles like this!
This is true even if we expect these teams to lack political power and to play second fiddle to capabilities work, and even if that makes the roles weaker opportunities than comparable ones at other companies.
We also include a note on OpenAI’s ‘job cards’ on the job board (and on DeepMind’s and Anthropic’s) linking to the Working at an AI company article you mentioned, to give context. We’re not opposed to giving more or different context on OpenAI’s cards and are happy to take suggestions!