Really enjoyed reading this. The section on "AI pollution" leading to a loss of control over the development of prepotent AI particularly interested me.
Avoiding [the risk of uncoordinated development of Misaligned Prepotent AI] calls for well-deliberated and respected assessments of the capabilities of publicly available algorithms and hardware, accounting for whether those capabilities have the potential to be combined to yield MPAI technology. Otherwise, the world could essentially accrue “AI-pollution” that might eventually precipitate or constitute MPAI.
I wonder how realistic it is to predict this. For example, would you basically need the knowledge to build such a system in order to have a good sense of that potential?
I also thought the idea of AI orgs dropping all their work once the potential for MPAI concentrates in another org is relevant here. Are there concrete plans for what to do when this happens?
Are there discussions about when AI orgs might want to stop publishing their work? I only know of MIRI doing this, but would they advise others like OpenAI or DeepMind to follow their example?