I'd suggest reading DeepMind's recent inter-org paper on model evaluation for extreme risks. I agree that what you describe as the success case is necessary for success, but without sufficient alignment of each person's personal ASI to actually guarantee it will defend against malicious and aggressive misuse of AI by others, you're just describing filling the world with loose gunpowder.