Comment thread for the question: What is the value of the newsletter for you?
Copied from my answer in the feedback form:
I’m a layman who is trying to help with infrastructure for technical people, and I read the newsletter sporadically to keep up with the overall trends in AI and AI Safety.
Right now I read the newsletter fairly sporadically. I think it might benefit me to read, once a year or maybe once a quarter, a higher-level summary that goes over which papers seemed most important that year and which overall research trends seemed most significant. I’m not sure if this is worth the opportunity cost for you, but it’d be helpful to me and probably others.
(I’d be interested in that both for my own personal knowledge and for tracking how stable your opinions are over time – when you list something as particularly interesting or important, do you tend to still think so a year later?)
I also think it’d make more sense for LessWrong to curate a “highlights of the highlights” post once every 3–12 months than what we currently do, which is to every so often randomly decide that a recent newsletter was particularly good and curate that.
On the suggestion of a yearly or quarterly higher-level summary: a slightly different option would be to read the yearly AI alignment literature review, use that to find the top N most interesting papers, and read their summaries in the spreadsheet. This also has the benefit of showing you a perspective other than mine on what’s important—there could be an Agent Foundations paper in the list that I haven’t summarized.
I think that the stability of my opinions is going up over time, mainly because I started the newsletter while still new to the field.
The “highlights of the highlights” idea seems good; I’m currently thinking I could write something like that once every 25 newsletters (which is about half a year), which should also help me evaluate the stability of my opinions.
I browse this newsletter occasionally via LW; I am not subscribed by email. So far I am not seriously involved in AI research, and I don’t wind up understanding most of it in detail, but I have a longer-term interest in these issues, and I want to keep a fraction of a bird’s-eye view on the state of the field if possible, so that if I start on deeper work in the area a few years from now, I can re-skim the archives and try to catch up.
Speculative follow-up: seeing a few other people say similar things here, and contrasting that with what seems to have been implied in the retrospective itself, makes me guess there’s a seriousness split between LW and email “subscribers”. Is the former dominated by passersby (especially since the newsletter is presented to people who are on LW for some other reason), while anyone who cares more deeply and specifically consumes it primarily by email?
Oh, I think there are a lot of email subscribers who skim/passively consume the newsletter. I didn’t focus very much on them in the retrospective because I don’t think I’m adding that much value to them.
It might be true that all of the people who read it thoroughly are subscribed by email; I’m not sure. It’s hard to tell because I expect skimmers far outnumber thorough readers, so seeing a few skimmers via the comments is not strong evidence that there aren’t thorough readers.
It is hits-based for me, where a hit usually comes from analogies or models that I understand better than I understand alignment. Because I am a relative layperson I don’t get a deep understanding of the papers, but the questions of the field are intrinsically interesting to me, and I find the differences in viewpoint between the papers and the opinions/summaries I do hit on very useful for keeping a ‘shape of the field’ in mind, should I ever need to engage with it more deeply.
The newsletter is extremely helpful for keeping me up to date with AI alignment research. I also find the “Other progress in AI” section very helpful.
Both the summaries and the opinion segments are extremely helpful for me!
Overall, I think that reading (or listening to) all the ANs that I’ve read so far was an extremely high EV-per-hour time investment.
Thanks!
I’m a second-year college student. I hope to pursue a career in computing ethics, but I’m not sure I’ll end up specifically in AI safety. I’ve attended some AI safety research meetings at my school, but I don’t expect to actually begin doing my own research until next year.
I laughed at your idea that some people subscribe to the newsletter to feel like part of an elite group… yeah, that might be me at this point! However, I think it will be very useful for me when I have more time this summer to spend on deciphering the content. If I don’t understand something in your summary, I look it up, so I’ve already begun to organically build a useful knowledge base.
Also, the newsletter provides me with a regular dose of reassurance and inspiration. Even when I don’t have time to thoroughly read the summaries, skimming them reminds me how interesting this field is.
Thanks for your work, and I enjoyed reading the retrospective!
Thanks!
Looking things up when you don’t understand something in a summary seems like a great way to use the newsletter :)
:)