“Infohazards”: The ML Field’s Greatest Excuse
1. Intro
Picture this: you’re the parent of an extremely smart 12-year-old son. Before his 13th birthday, he plans to build a working nuclear reactor, which would make him the youngest child ever to do so. Given that you have supported his efforts this far, you are most likely an extremely proud parent: your kid just built something that most people could only dream of making as a hobby. Would you consider this a problem? Possibly.
The conjecture is as follows: if everyone has access to world-destroying information, what will they do with it? World-destroying information, a.k.a. infohazards, poses the threat of destroying entire societies and economies. We already have the DNA sequences of the worst modern viruses in existence. So why hasn’t some ordinary disgruntled scientist caused the next pandemic? Because, at this moment, the common household does not have the equipment needed to replicate DNA. However, the faith that such technology will never reach the common household is faulty at best. We neglect the prospect of both existing and new tech making its way into our homes, and those advances will allow an infohazard’s effects to be executed.
Imagine being able to create any mRNA sequence in your own house. You could now easily cure the illnesses that you have or will have, and you wouldn’t need to go to the pharmacy for medicine if your own equipment could replicate the same service. (In the same way, the necessity of going to a computer lab has become largely irrelevant now that computers are in every home.) This would be a net good for society.
Well, at least supposedly… Hackers have been around since the beginning of toolmaking, and with this tool will come more furious and harmful exploits. Such a hacker could go back to that same research-lab documentation described earlier and create the worst virus known at that time. There goes the whole of humanity because of some unsuspecting lab work. But what does this have to do with AI?
2. The Problem with Gatekeeping Infohazards
The most important infohazards have already been leaked. If a child can create a nuclear reactor, why couldn’t a state organization? Capital for these ventures is becoming easier and easier to obtain in the modern business landscape. As more organizations learn how to make world-ending technologies, the number of collective points of failure grows much higher, and hacktivists are always eager to obtain these infohazards.
Another important premise is that people don’t even agree on how to classify something as a deadly infohazard. Can you consider a company secret an infohazard? Given that a company secret does not harm an entire economy, exposing such an “infohazard” would do net good because of the competition that would grow as a result.
A utopia and a dystopia are two sides of the same coin; the difference between them is intent. Dystopias are more likely to center around some form of oppression or secrecy. The mindset of hiding information for fear of infohazards is what produces dystopias, not utopias.
3. ML, Alignment, and More Infohazards
ML (machine learning), a.k.a. artificial intelligence, is a very lucrative field, raking in billions of dollars. With this much money to be gained by keeping secrets, it is no surprise that the ML field guards its secret sauce very well. Companies that originally strove to promote openness have stopped doing so.
However, companies like OpenAI had an earlier image that they had to keep up. Instead of simply rebranding as for-profit companies, they started using terms like “infohazards” and “safety concerns” to explain why they do not release their AI models. Although these concerns are valid, I believe the spread of this secretive mindset is becoming harmful to the AI field in general. Safety concerns are present in any radical technology.
When transparency starts to leave, riots, anger, and corruption come back. In the 1700s, during colonization, this can be seen in the Continental Congress’s anger toward the British. Why? Because the British stopped being transparent and thus stopped properly cooperating with the colonies. The takeaway from a 300-year-old experience is still applicable today.
Large-scale AI today can run only on supercomputers costing more than houses, which presents a unique challenge for hobbyists without access to such hardware. This is changing with the open-science initiatives currently taking place. It follows the same premise as the bio-weapons example; however, it is better that we be able to make a cure before someone makes a worse virus.
4. Conclusion
AI needs to remain open to everyone to enable the spread of new ideas, even with the drawback of allowing bad actors to gain access to the technology. In the end, people will find a way that is out of big tech’s control. Ultimately, the solution is simply to keep collaborating; the escape of infohazards is inevitable. I’ll leave you with this: fires burn buildings, but does that mean we should abolish fire?
I agree with the conclusion that, given enough time, all secrets will leak. One day there will be a freely available PDF titled “How to make an Ebola virus at home for less than $100”. When that day comes, I hope we will have some good defense against it; otherwise we are screwed. I just don’t see why I should wish for that day to come as soon as possible. We do not have the defense yet. Heck, we couldn’t defend even against Covid.
Destruction is usually easier than building useful things. Attack is often more efficient than defense, even more so when it has the advantage of surprise. Bad things happen, and we often do not have reliable protection against them. For example, murder exists in all societies. We try to deter some bad actions by punishing them afterwards, which is quite useless if the bad actor is willing to die in the process. (You cannot use the threat of prison against a suicide bomber.)
The thing that protects civilization now is mere statistics. Most people are completely vulnerable to murder: unless you are hiding, or surrounded by bodyguards, killing you would most likely be trivial. So why don’t most people get murdered? Because people willing to murder are quite rare, and even when they succeed the first time, they usually get caught afterwards, so the average number of victims per murderer is quite small. Mass murderers have more victims, but they are even rarer in the population.
This will change if we get more efficient murder tools. If a tool is capable of killing N victims, and 1 in M people is willing and capable of obtaining and using it, then each tool class wipes out roughly a fraction N/M of the population, and civilization falls apart when N ≈ M. Knives and guns have low N. Nukes have very high M. “How to make an Ebola virus at home for less than $100” has high N and not too high M.
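To make the arithmetic concrete, here is a minimal sketch of the toy model above. The N and M values per tool class are illustrative guesses of mine, not figures from the post:

```python
# Toy model: 1 in M people is willing and able to use a tool that
# kills N victims, so the expected fraction of the population lost
# is roughly N / M. A ratio near (or above) 1 means N ≈ M: collapse.

def expected_fraction_killed(n: float, m: float) -> float:
    """Expected fraction of the population killed by one tool class."""
    return n / m

# Illustrative (made-up) numbers for each tool class:
tools = {
    "knife or gun":   (3,   1e4),  # low N
    "nuke":           (1e6, 1e9),  # high N, but very high M
    "$100 Ebola PDF": (1e6, 1e5),  # high N, not too high M
}

for name, (n, m) in tools.items():
    frac = expected_fraction_killed(n, m)
    print(f"{name}: N={n:g}, M={m:g}, N/M = {frac:g}")
```

On these made-up numbers, only the last row reaches or exceeds 1, which is exactly the “civilization falls apart” regime: high N combined with not-too-high M.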