I’m giving you a reminder about 12 hours early, just to signal how impatient I am to hear what lessons you learned. :) Also, can you please address my questions to cousin_it in your post? (I’m a bit confused about the relative lack of engagement on the part of all three people involved in this prize with people’s questions and suggestions. If AI alignment is important, surely figuring out / iterating on better ways of funding AI alignment research should be a top priority?)
Gosh, I’m so irritated that you beat me to the reminder; I was looking forward to showing off my basic calendar-use skills ;-)
Anyway, am also looking forward to Zvi’s lessons and updates!