[Linkpost] The AGI Show podcast
This is a linkpost for www.theagishow.com. Also on YouTube at https://www.youtube.com/@theagishow.
Hey everyone, my name is Soroush and I work full-time on AGI alignment.
As part of my work in alignment, I’ve started a podcast, The AGI Show, to help discuss and promote the best and most important ideas likely to get us to a safe & positive future for humanity with AGI.
In our first episode sequence, we sit down with AI experts to talk about timelines to AGI. For example, in Episode 4 we sat down with Ryan Kupyn, the superforecaster who won Astral Codex Ten's 2022 prediction contest, to discuss his AGI timelines and how he arrived at them.
In our upcoming episode sequence, we’ll be talking through the implications of AGI & how we should collectively respond to help ensure a positive future for humanity. We’ll be sitting down with technical researchers, governance & policy folks, and others working in the field.
The target audience is people who are fairly technical (e.g. have education or experience equivalent to a technical undergrad degree) but who aren't alignment experts or researchers. As such, we'll always try to explain jargon and simplify concepts to make them accessible to a general technical audience.
The podcast could be interesting to you (or a friend) if:
You’re a technical person just starting to learn about AGI safety
You’re a technical person who wants more ideas on how to apply your time and skills towards AGI safety (coming soon)
You're not especially technical, but you're keen to move one notch up the technical ladder to an AGI podcast that discusses technical topics in a fairly accessible fashion
Note: this podcast could be especially good for technical folks outside the LessWrong and broader rationality / AI alignment community, since it doesn't assume any prior interest in or exposure to this community's terminology or norms.
Please check it out and share your honest feedback! I'm keen to make the show as positive as it can be about our future with AGI and as valuable as possible for listeners.
Audio: www.theagishow.com/
Video: https://www.youtube.com/@theagishow