This is a linkpost for a talk I gave this past summer for the ALIFE conference. If you haven’t heard of it before, ALIFE (short for “artificial life”) is a subfield of biology which… well, here are some of the session titles from day 1 of the conference to give the gist:
Cellular Automata, Self-Reproduction and Complexity
Evolving Robot Bodies and Brains in Unity
Self-Organizing Systems with Machine Learning
Untangling Cognition: How Information Theory can Demystify Brains
… so you can see how this sort of crowd might be interested in AI alignment.
Rory Greig and Simon McGregor definitely saw how such a crowd might be interested in AI alignment, so they organized an alignment workshop at the conference.
I gave this talk as part of that workshop. The stated goal of the talk was to “nerd-snipe ALIFE researchers into working on alignment-relevant questions of agency”. It’s pretty short (~20 minutes), and aims for a general energy of “hey here’s some cool research hooks”.
If you want to nerd-snipe technical researchers into thinking about alignment-relevant questions of agency, this talk is a short and relatively fun one to share.
Thank you to Rory and Simon for organizing, and thank you to Rory for getting the video posted publicly.
Talk: “AI Would Be A Lot Less Alarming If We Understood Agents”