… If you’re wondering why you just read all that, here’s the juice: GSAI position papers often gesture at expectations that capture “harm” or “safety”. Preexpectations and postexpectations with respect to particular pairs of programs could be a great way to cash this out, because we could treat programs as interventions and simulate RCTs in our world-modeling stack (labeling one program the control and the other the treatment). When it comes to harm and safety, Prop and bool are definitely not rich enough: a boolean verdict can only say “harm” or “no harm”, while a real-valued expectation can grade how much harm, and with what probability.
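To make that concrete, here’s a minimal sketch (mine, not from any paper) of the preexpectation view: a program maps a state to a finite distribution over states, a postexpectation is a real-valued score on output states, and the preexpectation pulls that score back to the input. Every name here (`harm`, `control`, `treatment`, the state fields) is a made-up stand-in.

```python
# A minimal sketch, assuming programs are modeled as maps from an input
# state to a finite distribution over output states. All names here
# (`harm`, `control`, `treatment`, the state fields) are hypothetical.
from typing import Callable, Dict, List, Tuple

State = Dict[str, float]
Dist = List[Tuple[float, State]]        # (probability, state) pairs summing to 1
Program = Callable[[State], Dist]
Expectation = Callable[[State], float]  # real-valued, richer than Prop/bool

def preexpectation(prog: Program, post: Expectation, state: State) -> float:
    """Pull the postexpectation back through the program: the expected
    value of `post` over the program's output distribution."""
    return sum(p * post(s) for p, s in prog(state))

# Postexpectation: a graded "harm" score instead of a yes/no verdict.
def harm(s: State) -> float:
    return max(0.0, s["exposure"] - s["mitigation"])

# Two candidate interventions acting on the same initial state.
def control(s: State) -> Dist:
    return [(1.0, {**s, "exposure": s["exposure"] + 1.0})]

def treatment(s: State) -> Dist:
    # Mitigation succeeds with probability 0.8, does nothing otherwise.
    return [(0.8, {**s, "mitigation": s["mitigation"] + 2.0}),
            (0.2, s)]

init = {"exposure": 3.0, "mitigation": 1.0}

# Simulated RCT in expectation: the gap between the two preexpectations
# of the same harm score reads as an average treatment effect.
ate = preexpectation(treatment, harm, init) - preexpectation(control, harm, init)
print(ate)  # -2.6: treatment lowers expected harm
```

The real-valued codomain is exactly what makes the RCT comparison work: the difference of the two preexpectations is an average treatment effect, which a bool-valued postcondition couldn’t express.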
Guaranteed Safe AI paper club meets again this Thursday.
Event for the paper club: https://calendar.app.google/2a11YNXUFwzHbT3TA
Blurb about the paper from last month’s newsletter: