Copied from my answer in the feedback form:

I’m a layman who is attempting to help with infrastructure for technical people, and who reads the newsletter sporadically to keep up with the overall trends in AI and AI Safety.
Right now I read the newsletter fairly sporadically. I think it might benefit me, once a year or maybe once a quarter, to read a higher-level summary that goes over which papers seemed most important that year and which overall research trends seemed most significant. I’m not sure if this is worth the opportunity cost for you, but it’d be helpful to me and probably others.
(I’d be interested in that both for the sake of my own personal knowledge and as a way of tracking how stable your opinions are over time – when you list something as particularly interesting or important, do you tend to still think so a year later?)
I also think it’d make more sense for LessWrong to curate a “highlights of the highlights” post once every 3-12 months than to do what we currently do, which is to randomly decide every so often that a recent newsletter was particularly good and curate that.
I think it might benefit me, once a year or maybe once a quarter, to read a higher-level summary that goes over which papers seemed most important that year and which overall research trends seemed most significant. I’m not sure if this is worth the opportunity cost for you, but it’d be helpful to me and probably others.
A slightly different option would be to read the yearly AI alignment literature review, use that to find the top N most interesting papers, and read their summaries in the spreadsheet. This also has the benefit of showing you a perspective other than mine on what’s important—there could be an Agent Foundations paper in the list that I haven’t summarized.
(I’d be interested in that both for the sake of my own personal knowledge and as a way of tracking how stable your opinions are over time – when you list something as particularly interesting or important, do you tend to still think so a year later?)
I think that the stability of my opinions is going up over time, mainly because I started the newsletter while still new to the field.
I also think it’d make more sense for LessWrong to curate a “highlights of the highlights” post once every 3-12 months than to do what we currently do, which is to randomly decide every so often that a recent newsletter was particularly good and curate that.
This seems good; I’m currently thinking I could write something like that once every 25 newsletters (which is about half a year), which should also help me evaluate the stability of my opinions.