I don’t think, for example, there’s a good intro resource you can send somebody that makes a common-sense case for “basic research into agency could be useful for avoiding risks from powerful AI”.
My talk for the alignment workshop at the ALIFE conference this past summer was roughly what I think you want. Unfortunately I don’t think it was recorded. Slides are here, but they don’t really do it on their own.
FWIW I also think the “Key Phenomena of AI risk” reading curriculum (h/t TJ) does some of this at least indirectly (it doesn’t set out to directly answer this question, but I think a lot of the answers to the question are covered in the curriculum).
(Edit: fixed link)
How confident are you that it wasn’t recorded? If not very, it’s probably worth checking again.
The workshop talks from the previous year’s ALIFE conference (2022) seem to be published on YouTube, so I’m following up with whether John’s talk from this year’s conference can be released as well.
The video of John’s talk has now been uploaded on YouTube here.
I mean, I could always re-present it and record if there’s demand for that.
… or we could do this the fun way: powerpoint karaoke. I.e. you make up the talk and record it, using those slides. I bet Alexander could give a really great one.
I have no doubt Alexander would shine!
Happy to run a PIBBSS speaker event for this, record it and make it publicly available. Let me know if you’re keen and we’ll reach out to find a time.
To follow up on this, we’ll be hosting John’s talk on Dec 12th, 9:30AM Pacific / 6:30PM CET.
Join through this Zoom Link.
Title: AI would be a lot less alarming if we understood agents
Description: In this talk, John will discuss why and how fundamental questions about agency, as asked by scholars in biology, artificial life, systems theory, and related fields, are important to making progress in AI alignment. John gave a similar talk at the annual ALIFE conference in 2023, as an attempt to nerd-snipe researchers studying agency in a biological context.
--
To be informed about future Speaker Series events, subscribe to our SS Mailing List here. You can also add the PIBBSS Speaker Events to your calendar through this link.
FYI this link redirects to a UC Berkeley login page.