Except “aligned” AI (or at least corrigibility) also benefits people doing shady things (say, trying to scam people).
So any gains in those areas that are easy to implement will spread widely and quickly.
And altruistic individuals already donate their own compute and GPUs to things like SETI@home (if you're old enough to remember) and to protein-folding projects for medical research. Those same people will become aware of AI safety and do the same, maybe more.
The cat's out of the bag. You can't “regulate” AI use at home; I can run models on a smartphone.
What we can do is try to steer things toward a beneficial Nash equilibrium.
It seems to me like AGI risk needs a “Zeitgeist Addendum” / “Venus Project” style movie for the masses: open up the Overton window and touch on things like mesa-optimization without boring the average person to death.
The /r/controlproblem FAQ is the most succinct summary I've seen, but I couldn't get the majority of average folks to read it if I tried, and it would still go over their heads.