Hi Chin. Thanks for writing this review; it seems like a much-needed and timely article, at least from my perspective, as I was looking for something like this. In particular, I'm trying to frame my research interests relative to the AI safety field, but as you point out, it may still be too early for that.
I am wondering whether you have any more insight into how you came up with your diagram above. In particular, are there any other peer-reviewed articles, or arXiv papers like Amodei et al. (https://arxiv.org/abs/1606.06565), that you relied on? For example, I don't understand why seed AI is such a critical concept in the AI literature (is it even published?), as it seems related to the concept of computer viruses, which are an entire field in CS. Also, why is brain-inspired AI a category in your diagram? As far as I know, that line of work isn't published or peer-reviewed and doesn't have significant traction.
I imagine I'm in the same place you were before you wrote this article, and I'd love more insight into how you ended up with this layout.
Thank you so much,
catubc
With regard to the Seed AI paradigm, most of the publications seem to have come from MIRI (especially the earlier ones, from when it was still called the Singularity Institute), with many of the discussions happening both here on LessWrong and at events like the Singularity Summit. I'd say most of the thinking around this paradigm happened before the era of deep learning. Nate Soares' post might provide more context.
You're right that brain-like AI has not had much traction yet, but it seems to me that interest in this research area has been growing lately (albeit much more slowly than in the Prosaic AI paradigm), and I don't think it falls squarely under either the Seed AI paradigm or the Prosaic AI paradigm. Of course there may be considerable overlap between these 'paradigms', but I felt brain-like AI was sufficiently distinct to warrant a category of its own, even though I wouldn't call it a critical concept in the AI literature.