No, really. Climate change, cancer and war don’t matter anymore when AGI is about to arrive.
Disagree, sort of: those are all instances of the same underlying generalized problem. Maybe we don’t need to solve them all in literally the same way, but a complex-systems view of learning gives me the hunch that the climate system, immune systems, geopolitical systems, reinforcement learning AI systems, and supervised learning systems all share key information-theoretic properties.
I know that sounds kind of crazy if you don’t already have a mechanistic model of what I’m describing, and it might be hard to see what I’m pointing at. I’ve been collecting resources meant to make it easier for “ordinary people” to get involved.
(Though nobody is ordinary: humans are very sample-efficient, even by modern ML standards; today’s smart models just saw a ton of data. And, yeah, you may not have time to see as much data as they did.)
In general I think some of your claims about how to raise awareness are oversimplified. Shouting at people is not an especially effective way to change their views; it wasn’t working until evidence showed up that could change people’s views without intense shouting. Calmly showing examples and describing the threat seems more effective to me. But it is in fact important to (gently) activate your urgency and agency, because ensuring that we can create universalized <?microsolidarity/coprotection/peer-free-will-protection?> across not just social groups but species is not going to be trivial. We have to figure out how to keep hypergrowth superplanners from eating literally all beings, including today’s AIs, and a key part of that is building a big enough and open enough co-protection network.
As with most things I post, this was written in a hurry. Feedback welcome. I’m available to talk in realtime on Discord at your whim.