Thanks for all the references! I don’t have much time to read all of it right now, so I can’t really engage with the specific arguments for rejecting the use of utility functions or the study of recursive self-improvement.
I essentially agree with most of what you wrote. There is maybe a slight disagreement in how you framed (not what you meant) the way research focus has shifted since 2014.
I see Superintelligence as essentially saying “hey, there is problem A. And even if we solve A, then we might also have B. And given C and D, there might be E.” Now that the field is more mature and we have many more researchers getting paid to work on these problems, the arguments have become much more goal-focused. Now people are saying “I’m going to make progress on sub-problem X by publishing a paper on Y. And working on Z is not cost-effective given humanity’s current time constraints, so I’m not going to work on it.”
These approaches are often grouped as “focused on long-term problems” and “focused on making tractable progress now”. In the first group you have Yudkowsky 2010, Bostrom 2014, MIRI’s current research and maybe CAIS. In the second one you have current CHAI/FHI/OpenAI/DeepMind/Ought papers.
Your original framing can be interpreted as “after proving some mathematical theorems, people rejected the main arguments of Superintelligence, and now most of the community agrees that working on X, Y and Z is tractable but A, B and C are more controversial”.
I think a more nuanced and precise framing would be: “In Superintelligence, Bostrom exhaustively lays out the risks associated with advanced AI. A short portion of the book is dedicated to the problems we are working on right now. People stopped working on the other problems (the largest portion of the book) because 1) work on them hasn’t been very productive, 2) some rebuttals have been written online giving convincing arguments that those problems are not tractable anyway, and 3) there are now well-funded research organizations with incentives to make tangible progress.”
In your last framing, you presented specific papers/rebuttals (thanks again!) for 2), and I think rebuttals are a great reason to stop working on a problem, but I think they’re not the only reason, and not the real reason, people stopped working on those problems. To be fair, I think 1) can be explained by many more factors than “it’s theoretically impossible to make progress on those problems”. It could be that the research mindset required to work on these problems is less socially/intellectually validating, or requires much more theoretical approaches, and so will be off-putting or tiresome to most recent grads entering the field. I also think that AI Safety is now much more intertwined with evidence-based approaches such as Effective Altruism than it was in 2014, which explains 3), so people have started presenting their research as “partial solutions to the problem of AI Safety” or as a “research agenda”.
To be clear, I’m not criticizing the current shift in research. I think it’s productive for the field, both in the short term and the long term. To give a bit more personal context, I started getting interested in AI Safety after reading Bostrom, and I have always been more interested in the “finding problems” approach. I went to FHI to work on AI Safety because I was super interested in finding new problems related to the treacherous turn. It’s now almost taboo to say that we’re working on problems that are sub-optimal for minimizing AI risk, but the real reason that pushed me to think about those problems was that they were both important and interesting. The problem with the current “shift in framing” is that it’s making it socially unacceptable for people to think about and work on more long-term problems where there is more variance in research productivity.
I don’t quite understand the question?
Sorry about that. I thought there was some link to our discussion about utility functions but I misunderstood.
EDIT: I also wanted to mention that the number of pages in a book doesn’t reflect how important the author thinks a problem is (Bostrom even comments on this in the afterword of his book). Again, the book is mostly about saying “here are all the problems”, not “these are the tractable problems we should start working on, and we should dedicate research resources in proportion to the number of pages I spend on each in the book”.