Working on anti-spam/anti-scam features at Google or at banks could be a highly leveraged intervention on some worldviews. As AI advances, it will become harder for most people to avoid being scammed, and building really strong protections into popular messaging platforms and banks could redistribute a lot of money from AIs back to humans.
Why not expect the scams to be run by humans (using AIs), in which case the intervention would reduce the transfer to these groups?
In principle, groups could legally eat (some of) the free energy here by red-teaming everyone with a similar approach, but without actually taking their money.
Currently, I’m more interested in work demonstrating that AI scams could get really good.
I'd guess effort at Google/banks would be more leveraged than demos if you're only considering harm from scams and not general AI slowdown and risk.
I think I probably agree (though I'm uncertain, since demos could prompt this effort), but I wasn't only considering reducing harm from scams. I care more about general societal understanding of AI and its risks, and a demo has positive spillover effects.