With apologies for the belated response: I think greghb makes a lot of good points here, and I agree with him on most of the specific disagreements with Daniel. In particular:
I agree that “Bio Anchors doesn’t presume we have a brain, it presumes we have transformers. And transformers don’t know what to do with a lifetime of experience, at least nowhere near as well as an infant brain does.” My guess is that we should not expect human-like sample efficiency from a simple randomly initialized network; instead, we should expect to extensively train a network to the point where it can do this human-like learning. (That said, this is far from obvious, and some AI scientists take the opposite point of view.)
I’m not super sympathetic to Daniel’s implied position that there are lots of possible transformative tasks and we “only need one” of them. I think there’s something to this (in particular, we don’t need to replicate everything humans can do), but I think once we start claiming that there are 5+ independent tasks such that automating them would be transformative, we have to ask ourselves why transformative events are as historically rare as they are. (More at my discussion of persuasion on another thread.)
Overall, I think that datasets/environments are plausible as a major blocker to transformative AI, and I think Bio Anchors would be a lot stronger if it had more to say about this.
I am sympathetic to Bio Anchors’s bottom-line quantitative estimates despite this, though (and to be clear, I held all of these positions at the time Bio Anchors was drafted). It’s not easy for me to explain all of where I’m coming from, but a few intuitions:
We’re still in a regime where compute is an important bottleneck to AI development, and funding and interest are going up. If we get into a regime where compute is plentiful and data/environments are the big blocker, I expect efforts to become heavily focused there.
Several decades is just a very long time. (This relates to the overall burden of proof on arguments like these, particularly the fact that this century is likely to see far more effort go into transformative AI development than has gone into it to date.)
Combining the first two points leaves me guessing that “if there’s a not-prohibitively-difficult way to do this, people are going to find it on the time frames indicated.” And I think there probably is:
The Internet contains a truly massive amount of information at this point about many different dimensions of the human world. I expect this information source to keep growing, especially as AI advances and interacts more productively and richly with humans, and as AI can potentially be used as an increasingly large part of the process of finding data, cleaning it, etc.
AI developers will also—especially as funding and interest grow—have the ability to collect data by (a) monitoring researchers, contractors, volunteers, etc.; (b) designing products with data collection in mind (e.g., Assistant and Siri).
The above two points seem especially strong to me when considering that automating science and engineering might be sufficient for transformative AI—these seem particularly conducive to learning from digitally captured information.
On a totally separate note, it seems to me that fairly simple ingredients have made the historical human “environment” sufficiently sophisticated to train transformative capabilities. It seems to me that most of what’s “interesting and challenging” about our environment comes from competing with each other, and I’d guess it’s possible to set up some sort of natural-selection-driven environment in which AIs compete with each other; I wouldn’t expect such a thing to be highly sensitive to whether we’re able to capture all the details of our evolutionary past, e.g., how our ancestors obtained food. (I would expect it to be compute-intensive, though.)
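To make that idea a bit more concrete, here is a minimal sketch (in Python) of what a natural-selection-driven competitive environment could look like, under entirely made-up assumptions: agents are just propensity vectors over the moves of a cyclic game, and an agent’s fitness is defined solely by play against the rest of the population, so the “challenge” comes from the other agents rather than from any hand-crafted task. Every name and parameter here is illustrative, not anything from Bio Anchors.

```python
import random

# Hypothetical toy sketch of the idea above: a population whose
# "environment" is nothing but the other agents in it. The game, the
# agent representation, and the selection scheme are all illustrative.

MOVES = 3          # a rock-paper-scissors-style cyclic game
POP_SIZE = 60      # number of competing agents
GENERATIONS = 200  # rounds of selection
MATCHES = 20       # opponents sampled per agent per generation
MUTATION_STD = 0.05

def random_agent():
    # An agent is just a positive propensity for each move.
    return [random.random() + 1e-6 for _ in range(MOVES)]

def pick_move(agent):
    # Sample a move in proportion to the agent's propensities.
    r = random.uniform(0, sum(agent))
    for move, weight in enumerate(agent):
        r -= weight
        if r <= 0:
            return move
    return MOVES - 1

def score(a, b):
    # Cyclic dominance: move i beats move (i + 1) % MOVES, so no move
    # is best in general, only against the current population.
    if a == b:
        return 0
    return 1 if (a - b) % MOVES == MOVES - 1 else -1

def fitness(population):
    # Fitness comes only from head-to-head play within the population.
    scores = [0] * len(population)
    for i, agent in enumerate(population):
        for _ in range(MATCHES):
            j = random.randrange(len(population))
            if j != i:
                scores[i] += score(pick_move(agent), pick_move(population[j]))
    return scores

def mutate(agent):
    # Offspring are noisy copies of a surviving parent.
    return [max(1e-6, w + random.gauss(0, MUTATION_STD)) for w in agent]

def evolve():
    population = [random_agent() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        scores = fitness(population)
        ranked = sorted(range(POP_SIZE), key=lambda i: scores[i], reverse=True)
        survivors = [population[i] for i in ranked[: POP_SIZE // 2]]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]
    return population

if __name__ == "__main__":
    champion = evolve()[0]
    print("move propensities of one evolved agent:",
          [round(w, 2) for w in champion])
```

The cyclic game is chosen so that no fixed strategy dominates: whatever the population currently favors, some other move exploits it, so the selection pressure keeps shifting as the population does. That is the (toy) analogue of the claim that the interesting difficulty comes from competition itself rather than from details of the ancestral environment; a real version would of course need far richer games and vastly more compute.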
Hopefully that gives a sense of where I’m coming from. Overall, I think this is one of the most compelling objections to Bio Anchors; I find it stronger than the points Eliezer focuses on above (unless you are pretty determined to steel-man any argument along the lines of “Brains and AIs are different” into a specific argument about the most important difference).