The Sequences. Surprised nobody mentioned this one yet.
While I am pretty sure you can’t shorten the sequences much without losing valuable information, the fact is that for most people they are just way too long to ever read through, so having some easily digestible video material would still be quite valuable. (Hopefully it would also get some people interested in reading the real thing.)
Turning the sequences into a set of videos would be a massive distillation job. At a high level it would ideally go something like this:
Extract the set of important ideas the sequences convey. Identify the necessary dependencies between them.
Start turning the ideas into videos in topological order, so each video only builds on ideas already covered (see the sketch after this list). (Each video should link to the relevant posts for further reading.)
… Profit?
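To make the “topological order” step concrete, here is a minimal sketch using Python’s standard-library graphlib. The idea names are real post titles used purely as placeholders, and the dependencies between them are made up for illustration; working out the actual dependency graph is the distillation job itself.

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map: each idea maps to the ideas it builds on.
# Both the selection of ideas and the edges here are illustrative only.
dependencies = {
    "Belief in Belief": {"Making Beliefs Pay Rent"},
    "Conservation of Expected Evidence": {"What Is Evidence?"},
    "What Is Evidence?": {"Making Beliefs Pay Rent"},
    "Making Beliefs Pay Rent": set(),
}

# static_order() yields ideas so that every idea comes after the ideas
# it depends on, i.e. a valid order in which to release the videos.
for idea in TopologicalSorter(dependencies).static_order():
    print(idea)
```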
Would making these videos be optimal in some sense? I don’t know. Is trying to create more rationalists a good idea? Eliezer wrote the sequences with the express intent of creating more rationalists to help reduce AI risk. Is this still relevant? Maybe. AFAIK many people think that alignment is currently bottlenecked on good researchers. (Of course, in this framing many other alignment-relevant technical topics would also make sense as video ideas.)