I may refine this into a formal bounty at some point.
I’m curious whether censorship would actually work as a way of blocking deployment of superpowerful AI systems. People sometimes mention censoring “matrix multiplication” as a sort of goofy edge case, which isn’t very plausible, but that doesn’t mean there couldn’t be real political pressure to censor AI-relevant ideas. A more plausible target would be attention. Say the government threatens soft power against arXiv if they don’t pull Attention Is All You Need, or threatens soft power against Harvard if their NLP group doesn’t pull the PyTorch-annotated version of it (The Annotated Transformer). In such a regime, it would go without saying that black hat hackers caught writing down the equations would face serious consequences. Now, instead of attention, imagine some more galaxy-brained paper or insight published in 2028 that turns out to be an actual missing ingredient for advanced AI (assuming you’re not one of the people who think Attention Is All You Need already is that paper).
Weighing the pros and cons of this approach to AI safety is certainly a research project in its own right, but before that I think we need someone to profile the efficacy of technological censorship through history and arrive at an estimate of how well this would work: how much it would actually slow or stop the propagation of the information, and how much it would slow or stop the deployment of systems based on that information.
My guess is that the ideal person to execute on this bounty would be some patent law nerd, tho I’m sure a variety of types of nerd could do a great job.