AI Safety Papers: An App for the TAI Safety Database
AI Safety Papers is a website for quickly exploring papers related to AI safety. The code is hosted on GitHub here.
In December 2020, Jess Riedel and Angelica Deibel announced the TAI Safety Bibliographic Database. At the time, they wrote:
In this post we present the first public version of our bibliographic database of research on the safety of transformative artificial intelligence (TAI). The primary motivations for assembling this database were to:
Aid potential donors in assessing organizations focusing on TAI safety by collecting and analyzing their research output.
Assemble a comprehensive bibliographic database that can be used as a base for future projects, such as a living review of the field.
One significant limitation of this system was that it had no great frontend. Tabular data and RDF can be useful for analysis, but they are difficult to browse casually.
We’ve been experimenting with a web frontend for this data. You can see it at http://ai-safety-papers.quantifieduncertainty.org.
This system acts a bit like Google Scholar or other academic search engines. However, its emphasis on AI-safety-related papers affords a few advantages:
Only papers valuable to AI safety are shown.
There’s easy filtering of papers by particular AI-safety-related organizations or researchers.
There’s simple integration with blurbs from the Alignment Newsletter and Gyrodiot.
We can include blog posts as well as formal academic works. This is important because a lot of valuable writing is posted directly to blogs like LessWrong and the Alignment Forum.
Later on, we could emphasize custom paper metrics; for example, one could combine citation counts with blog post karma (a rough sketch of one such metric follows below).
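To make that idea concrete, here is a minimal, purely hypothetical sketch of what a combined metric could look like. The weights, the log-scaling, and the example numbers are our own assumptions for illustration; the app does not compute anything like this today.

```python
import math

def combined_score(citations: int, karma: int,
                   citation_weight: float = 0.7,
                   karma_weight: float = 0.3) -> float:
    """Hypothetical blended score mixing citation count and blog post karma.

    Both inputs are log-scaled so that a handful of heavily cited papers
    does not completely dominate the ranking. The weights are illustrative.
    """
    return (citation_weight * math.log1p(citations)
            + karma_weight * math.log1p(karma))

# Example: an academic paper with 120 citations and no forum karma,
# versus a forum post with 85 karma and no citations.
print(round(combined_score(120, 0), 2))  # ~3.36
print(round(combined_score(0, 85), 2))   # ~1.34
```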
Tips
Most of the fields are clickable. Click on an author to see other papers by the same author, or on a tag to see other papers that share it.
To move quickly through query results, use the up and down arrow keys after entering a search.
Besides the search function, there is also an Airtable table view, which can be browsed directly or downloaded as a CSV (a short sketch of exploring that CSV offline follows below).
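For anyone who downloads the CSV, a few lines of pandas are enough to slice it offline. This is only a sketch; the file name and column names ("Title", "Year", "Authors") are assumptions, so check the headers in the actual export.

```python
import pandas as pd

# Load the CSV exported from the Airtable view (file name is arbitrary).
papers = pd.read_csv("tai_safety_database.csv")

# Filter and display a few fields; column names are assumed, adjust as needed.
recent = papers[papers["Year"] == 2020]
print(recent[["Title", "Authors"]].head(10))
```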
Questions
Who is responsible for AI Safety Papers?
Ozzie Gooen has written most of the application on behalf of the Quantified Uncertainty Research Institute. Jess Riedel, Angelica Deibel, and Nuño Sempere have all provided a lot of feedback and assistance.
How can I give feedback?
Please either leave comments, submit feedback through this website, or contact us directly at hello@quantifieduncertainty.org.
How often is the database updated?
Jess Riedel and Angelica Deibel are maintaining the database. They will probably update it every few months, depending on interest, and we’ll try to update the AI Safety Papers app accordingly. The date of the most recent data update is shown in the header of the app.
Note that the most recent data in the current database is from December 2020.
Future Steps
This app was made in a few weeks, and as such it has a lot of limitations:
The data is updated in large batches, and the process is fairly messy.
There’s a lot more data we could potentially pull in. For example, blog posts could show comment count and karma.
We could allow commenting on papers. (This would require a log-in system, which we are reluctant to add until necessary.)
We could use such a database for a more formal paper review system, the results of which could be featured in the UI.
You can see several other potential features here. Please feel free to add suggestions or upvotes.
We’re not sure if or when we’ll make improvements to AI Safety Papers. If there is substantial use, or many requests for improvements, that will carry a lot of weight in our prioritization. Of course, people are welcome to submit pull requests to the GitHub repo directly, or simply fork the project there.