When you say “the last few years has seen many people here” for your 2nd/3rd paragraph, do you have any posts / authors in mind to illustrate?
For the utility of talking about utility functions, see this rebuttal of an argument justifying the use of utility functions by appealing to the VNM utility theorem, and a few more posts expanding the discussion. The CAIS paper argues that we shouldn’t model future AI as having a monolithic long-term utility function. But it’s by no means a settled debate.
For the rejection of stable self-improvement as a research priority, Paul Christiano wrote a post in 2014 where he argued that stable recursive self-improvement will be solved as a special case of reasoning under uncertainty. And again, the CAIS model proposes that technological progress will feed into itself (not unlike what already happens), rather than a monolithic agent improving itself.
I get the impression that very few people outside of MIRI work on stable recursive self-improvement, though this might be because they think it’s not their comparative advantage.
I agree that there has been a shift in what people write about because the field grew (as Daniel Filan pointed out). However, I don’t remember reading anyone dismiss convergent instrumental goals (such as increasing your own intelligence), or utility functions as a useful abstraction for thinking about agency.
There’s a difference between accepting something as a theoretical problem and accepting it as a tractable research priority. I was arguing that the type of work we do right now might not be useful for future researchers, not that these problems don’t exist. Rather, it’s not clear that productive work can be done on them right now. My evidence was that the way we think about these problems has changed over the years. Of course, you could say that the research focus shifted because we made progress, but I’d be skeptical of that hypothesis.
In your thread with ofer, he asked what the difference was between using loss functions in neural nets and objective functions / utility functions, and I haven’t fully caught your opinion on that.
I don’t quite understand the question? It’s my understanding that I was disputing the notion that inner alignment should count as a “shift in arguments” for AI risk. I claimed that it was a refinement of the traditional arguments; more specifically, we decomposed the value alignment problem into two levels. I’m quite confused about what I’m missing here.
Thanks for all the references! I don’t have much time to read all of it right now, so I can’t really engage with the specific arguments for rejecting the use of utility functions or the study of recursive self-improvement.
I essentially agree with most of what you wrote. There is maybe a slight disagreement with how you framed (not what you meant about) the shift in research focus since 2014.
I see Superintelligence as essentially saying “hey, there is problem A. And even if we solve A, then we might also have B. And given C and D, there might be E.” Now that the field is more mature and we have many more researchers getting paid to work on these problems, the arguments have become much more goal-focused. Now people are saying “I’m going to make progress on sub-problem X by publishing a paper on Y. And working on Z is not cost-effective, so I’m not going to work on it given humanity’s current time constraints.”
These approaches are often grouped into “focused on long-term problems” and “focused on making tractable progress now”. In the first group you have Yudkowsky 2010, Bostrom 2014, MIRI’s current research, and maybe CAIS. In the second you have current CHAI/FHI/OpenAI/DeepMind/Ought papers.
Your original framing can be interpreted as “after proving some mathematical theorems, people rejected the main arguments of Superintelligence, and now most of the community agrees that working on X, Y and Z is tractable, while A, B and C are more controversial”.
I think a more nuanced and precise framing would be: “In Superintelligence, Bostrom exhaustively lays out the risks associated with advanced AI. A short portion of the book is dedicated to the problems we are working on right now. People stopped working on the other problems (the largest portion of the book) because 1) there hasn’t been much productive work on them, 2) some rebuttals have been written online giving convincing arguments that those problems are not tractable anyway, and 3) there are now well-funded research organizations with incentives to make tangible progress on those problems.”
In your last framing, you presented specific papers/rebuttals (thanks again!) for 2), and I think rebuttals are a great reason to stop working on a problem, but I think they’re neither the only reason nor the real reason people stopped working on those problems. To be fair, I think 1) can be explained by many more factors than “it’s theoretically impossible to make progress on those problems”. It could be that the research mindset required to work on these problems is less socially/intellectually validating, or requires much more theoretical approaches, and so will be off-putting/tiresome to most recent grads entering the field. I also think that AI Safety is now much more intertwined with evidence-based approaches such as Effective Altruism than it was in 2014, which explains 3), so people have started presenting their research as “partial solutions to the problem of AI Safety” or as a “research agenda”.
To be clear, I’m not criticizing the current shift in research. I think it’s productive for the field, both in the short term and the long term. To give a bit more personal context, I started getting interested in AI Safety after reading Bostrom, and I have always been more interested in the “finding problems” approach. I went to FHI to work on AI Safety because I was super interested in finding new problems related to the treacherous turn. It’s now almost taboo to say that we’re working on problems that are sub-optimal for minimizing AI risk, but the real reason that pushed me to think about those problems was that they were both important and interesting. The problem with the current “shift in framing” is that it makes it socially unacceptable for people to think about/work on longer-term problems where there is more variance in research productivity.
I don’t quite understand the question?
Sorry about that. I thought there was some link to our discussion about utility functions but I misunderstood.
EDIT: I also wanted to mention that the number of pages devoted to a topic doesn’t reflect how important the author thinks the problem is (Bostrom even comments on this in the postface of his book). Again, the book is mostly about saying “here are all the problems”, not “these are the tractable problems we should start working on, and we should allocate research resources in proportion to the number of pages I spend on each in the book”.