I think this is a great point here:
>"None of us have ever managed an infinite army of untrained interns before"
It's probable that AIs will force us to totally reformat our workflows to stay competitive. Even as the tech progresses, it's likely there will remain things that humans are good at and AIs lag behind on. If intelligence can be represented as some n-dimensional object, AIs are already super-human along some subset of those n dimensions, but beating humans on all n seems unlikely in the near-to-mid term.
In that case, we need to segment work: build a good pipeline that tasks humans with the work they excel at and automates the rest with AI. Zoomers and younger kids will likely be intuitively good at this, since they are growing up with the tech.
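To make the capability-vector idea concrete, here's a minimal toy sketch in Python. Everything in it is invented for illustration (the skill dimensions, the scores, the weighted-sum routing rule); the only point is that once you model capabilities as vectors over n dimensions, "who should do this task" falls out of a simple comparison.

```python
# Toy model of the "n-dimensional intelligence" point above.
# Skill dimensions and scores are made up purely for illustration.
AI = {"recall": 0.95, "throughput": 0.99, "judgment": 0.40, "physical": 0.05}
HUMAN = {"recall": 0.50, "throughput": 0.30, "judgment": 0.85, "physical": 0.90}

def route(task: dict[str, float]) -> str:
    """Route a task (weights over skill dimensions) to whoever scores higher."""
    ai_score = sum(w * AI[dim] for dim, w in task.items())
    human_score = sum(w * HUMAN[dim] for dim, w in task.items())
    return "AI" if ai_score >= human_score else "human"

tasks = {
    "summarize 500 support tickets": {"recall": 0.4, "throughput": 0.6},
    "sign off on a risky design":    {"judgment": 0.8, "recall": 0.2},
    "re-rack the server room":       {"physical": 1.0},
}
for name, weights in tasks.items():
    print(f"{name} -> {route(weights)}")
```

Any real pipeline would be far messier, but this comparative-advantage framing is the whole "segment the work" argument in miniature.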
This is also a comfort in a p(doom) scenario: as long as there are a few pesky things that only humans can do, there's a good reason to keep us around to do them!
Coming from a very technical field, but without an AI or AI-safety background, I'll say it: so much of this AI safety work and research looks like self-serving nonsense. It just so happens that all the leading AI companies, and many employees with huge equity stakes, agree that open-source AI == doom and death on a massive scale?
The internet also helps bio-terrorists communicate and learn how to do bad acts. Imagine if, 50 years ago, the largest internet companies of the day had pushed to make internet protocols closed source and walled gardens because of terrorism. (Well, they did push for walled gardens, for different reasons, and in the present day it reads as anticompetitive nonsense.)
Encryption and encrypted messaging apps also help bad actors massively: you can communicate over long distances with no risk of spying or comms interception. And governments, the US govt in particular, tried really hard to ban encryption algos as "export of arms and munitions". Luckily that failed; the war on encryption mostly continues, but we plebes do have access to Signal and PGP.
Now it just so happens that AI needs to be closed source, walled off, and controlled by a small cartel, for our safety. Haven't we heard this before, with like every single technological breakthrough? I haven't fallen for it… yet, at least.
Anthropic CEO:
>”AI will lead to the unemployment of 20% of workers and civil unrest/war level poverty for a major portion of our economy”
>”Oh and also, have you seen our new funding round? It’s the biggest yet! Let’s speed this up!”
OpenAI:
>”We can’t release open versions of our most powerful models, as it would lead to bioterrorism” (even though the latest, uh, bio event (COVID) was created by government labs, which do/will have access to uncensored AI anyway)
>Doesn’t even release their GPT-3 model from years past, which barely makes coherent sentences (I wonder why; surely it ain’t terrorism)