AGI x Animal Welfare: A High-EV Outreach Opportunity?
Epistemic status: Very quickly written, on a thought I've been holding for a year and haven't seen discussed elsewhere.
I believe that within this decade, there could be AGIs (Artificial General Intelligences) powerful enough that the values they pursue have an at least partial lock-in effect. This means they could have a long-lasting impact on the future values and trajectory of our civilization (assuming we survive).
This brief post aims to share the idea that if your primary focus and concern is animal welfare (or digital sentience), you may want to consider engaging in targeted outreach on those topics toward the people most likely to shape the values of the first AGIs. This group likely includes executives and employees at top AGI labs (e.g. OpenAI, DeepMind, Anthropic), the broader US tech community, and policymakers in major countries.
Due to the risk of lock-in effects, I believe that the values of relatively small groups of individuals like the ones I mentioned (fewer than 3,000 people at top AGI labs) might have a disproportionately large impact on AGI, and consequently, on the future values and trajectory of our civilization. My impression is that, generally speaking, these people currently:
a) don't significantly prioritize animal welfare, and
b) don't show substantial concern for the sentience of digital minds.
Hence, if you believe these things are very important (as I do), and you think that AGI might come in the next few decades[1] (as a majority of people in the field believe), you might want to consider this intervention.
Feel free to reach out if you want to chat more about this, either here or via the contact information you can find here.
[1] Even more so if you believe, as I do along with many software engineers at top AGI labs, that it could happen this decade.