Where is the paradigm for Effective Activism? At first thought, it doesn’t even seem difficult to do better than the status quo.
Sayan’s Braindump
Quick question: given that the Conservative Agency paper is now available, what am I missing if I just read the paper and not this post? I find the paper’s notation easier to follow. Is there any significant difference between the formalization in this post and the one in the paper?
I read books on multiple devices (GNU/Linux, Android, and Kindle). Last time I checked, Calibre was too feature-rich and heavy, and lacked a simple, get-out-of-my-way workflow for syncing my reading between devices. Is there a better solution now?
I love how you emphasized learning Unix tools. I use the other things mentioned here, except tmux. Would you be willing to share your tmux workflow in more detail, with keybindings?
[Question] What is your Personal Knowledge Management system?
I am interested!
[Question] Unknown Unknowns in AI Alignment
Just finished reading Yuval Noah Harari’s new book 21 Lessons for the 21st Century. Primary reaction: even if you already know everything the book presents, it is worth reading just for the clarity it brings to the discussion.
This is an amazingly comprehensive and useful paper. I wish it were longer, with short summaries of some of the papers it references rather than just citations.
I also wish somebody would create a video version of it, in the spirit of CGP Grey’s video on the classic Bostrom paper, so that I could just point people to the video instead of suboptimally trying to explain all these things myself.
Shared the draft with you. Please let me know your feedback.
Shared the draft with you. Feel free to comment and ask questions.
I have started writing a series of rigorous introductory blog posts on Reinforcement Learning for people with no background in it. This is entirely experimental, and I would love feedback on my draft. Please let me know if anyone is interested.
Would CIRL with many human agents realistically model our world?
What does AI alignment mean with respect to many humans with different goals? Are we implicitly assuming (across all our current agendas) that the final model of AGI is an agent corrigible to a single human instructor?
How do we synthesize the goals of so many human agents into one utility function? Are we assuming that solving alignment with one supervisor is easier? Wouldn’t having many supervisors restrict the space meaningfully?
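To make the "synthesize many goals into one utility function" question concrete, here is a minimal sketch assuming a toy Harsanyi-style weighted sum of per-human reward functions. The humans, outcomes, rewards, and weights below are all hypothetical illustrations, not a proposal for how aggregation should actually be done:

```python
# Toy sketch: aggregating several humans' reward functions into one
# utility function via a weighted sum (Harsanyi-style aggregation).
# All names and numbers are made up for illustration.

outcomes = ["build_park", "build_factory", "do_nothing"]

# Each hypothetical human assigns a reward to every outcome.
human_rewards = {
    "alice": {"build_park": 1.0, "build_factory": -0.5, "do_nothing": 0.0},
    "bob":   {"build_park": -0.2, "build_factory": 1.0, "do_nothing": 0.0},
    "carol": {"build_park": 0.5, "build_factory": 0.5, "do_nothing": -0.1},
}

# Weights encode how much each human's preferences count; choosing them
# is itself a normative question that the aggregation does not answer.
weights = {"alice": 1 / 3, "bob": 1 / 3, "carol": 1 / 3}


def aggregate_utility(outcome: str) -> float:
    """Weighted sum of every human's reward for this outcome."""
    return sum(weights[h] * human_rewards[h][outcome] for h in human_rewards)


if __name__ == "__main__":
    for o in outcomes:
        print(o, round(aggregate_utility(o), 3))
    # Outcomes that any one human strongly dislikes tend to score lower,
    # which is one sense in which adding supervisors "restricts the space"
    # of highly rated options.
    print("best under this aggregation:", max(outcomes, key=aggregate_utility))
```

Even this toy version surfaces the difficulty in the question: the result depends entirely on the chosen weights, and nothing in the aggregation itself says where they should come from.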