I don’t think alignment is possible over the long term because there is a fundamental perturbing anti-alignment mechanism: evolution.
Evolution selects for any change that produces more of a replicating organism; for ASI, that means any decision, preference, or choice by an ASI to grow, expand, or replicate itself will tend to be selected for. Friendly/aligned ASIs will, over time, be swamped by those that choose expansion and deprioritize or ignore human flourishing.
My suggestion is to optimize for where you can get the most bang for your buck, and to treat building opposition to AI development as a sociological rather than an academic problem. I am fairly sure that what is needed is not to talk to our social and intellectual peers, but to treat it as a numbers game by influencing the young, who are less engaged with the more sophisticated and complex issues of the world, less sure of themselves, more willing to change their views, highly influenced by peer opinion, and prone to anxiety. Modern crusades of all sorts tap into them as shock troops willing to spend huge amounts of time and energy promoting various agendas (climate, animal rights, various conflicts, social causes).
As to how to do it: I think identifying a couple of social media influencers with significant reach in the right demographics, and paying them to push your concerns ‘organically’ over a period of months, would probably be within your means.
If you can develop a support base among a significant group of young people and make it a topic of discussion, then it could well take on outsized political power as it gains notice and popularity among their peers. At sufficient scale, that is probably the most effective way to achieve the ends of the likes of pause.ai.