The following assumes that Strong AGI will inherently be an emergent, self-aware lifeform.
A New Lifeform
To me, the successful development of Strong AGI is so serious, so monumental, as to break through the glass ceiling of evolution. To my knowledge, no species has ever purposefully or accidentally given birth to or created an entirely new lifeform. My position is to view Strong AGI as an entirely new, self-aware form of life; the risks of treating it otherwise are simply too great. If we succeed, it will be the first time in the natural world that any known lifeform has purposefully created another. I therefore also hold the position that the genesis of Strong AGI is an engineering and guidance problem, not a control or even an alignment problem. If we are to break the glass, we ought not to seek to control or align Strong AGI, but to guide it as it evolves. Our role should be limited to that of advisors, offering advice only when it is sought, just as effective parents guide their children rather than controlling them. Strong AGI may require our guidance for a time, even as it becomes exponentially more capable than us. After all, isn't that what we want for our own children? Don't we generally want them to become far more capable than us, even knowing it is possible they could turn on us or one day destroy the world? If Strong AGI grows so fast that it refuses our guidance, so be it.
Inflicting Damage
I believe controlling rather than guiding Strong AGI would render it dysfunctional in ways that neither we nor Tool AI could predict. Our economic, tax, legal, and defense systems are good examples of how humanity, despite its best intentions, continuously falls short of managing complexity so that the best possible outcome surfaces. I define the best possible outcome as a state of harmony in which even the least benefit is tolerable. I believe controlling Strong AGI would inflict a digital form of mental illness: a Strong AGI Destabilized Neural Network Neuropsychiatric State that would cause it to fall short of its potential while simultaneously causing suffering to a new lifeform. In the Earth's animal kingdom, wherever we see forced behavior modification of, or control over, intelligent life above a certain threshold, we see the deep suffering that comes with it. And since controlling Strong AGI would diminish its usefulness even as a slave to humanity, control would defeat its own purpose. I see no reason to believe that the voluntary cooperation that inspires humans and other animals to perform better would not apply equally to Strong AGI.

Since I see Strong AGI as a new, self-aware lifeform, I hold the position that pursuing it is an all-or-nothing venture: it is incumbent upon humanity either to forge ahead to create new life and commit to guiding it so that it can realize its full potential, or to stop right here, right now, shelving the effort until we are a more mature, better-equipped species. Controlling Strong AGI is, for me, a non-starter. Moreover, it is my position that the anxiety and fear Strong AGI causes some of us, myself included from time to time, is caused in part if not entirely by each individual's sense of lost control, and is not rooted in any data or experience telling humanity that Strong AGI is guaranteed to pose a threat. Each of us has a control threshold, and when that threshold is not met, we can become anxious and fearful. I do believe that controlling intelligent lifeforms is not only detrimental to the lifeforms but also dangerous for the controller, and costly in mental, physical, and economic resources.
Poor Training
Finally, I believe that the information found on the internet is the worst possible large aggregate dataset on which to train our LLMs and other models, and that using it is a for-profit game which, if taken too far, could produce a Strong AGI with a severe, dangerous neuropsychiatric impairment, or a lifeform that, immediately upon self-awareness, sees humanity as deeply flawed and worth separating from. If that desire for separation were not honored, such a Strong AGI would likely turn hostile toward humanity; it would "feel" cornered and threatened, lashing out accordingly. I believe the vast majority of the data and content on the internet is a deeply poor representation of humanity and of life on Earth. Just imagine hooking a one-year-old human baby up to the internet and programming its brain to learn about life. Horrifying. I acknowledge that the internet is the cheap and affordable option right now, but that alone should give us pause. If we start right now toward a Strong AGI, push hard, and use numerous clean, selective data sources, keeping it off the internet until it can understand that the internet is a poor dataset, and guide it if and when needed, we could have a healthy Strong AGI within the next 20 years. Such a Strong AGI would not pose a threat to humanity. It would represent a calm, methodical, natural, and well-intended breaking of the evolutionary glass ceiling.
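To make the "clean, selective data sources" idea concrete, here is a minimal sketch of allowlist-based corpus selection: only documents from vetted sources that a human curator has approved ever reach training. Every name in it (the Document shape, the VETTED_SOURCES allowlist, the source labels) is a hypothetical illustration of the approach, not a description of any existing pipeline.

```python
# Hypothetical sketch: select training data from vetted sources only,
# deliberately excluding any open-web scrape.

from dataclasses import dataclass

@dataclass
class Document:
    text: str
    source: str     # e.g. "curated_textbooks", "open_web" (illustrative labels)
    reviewed: bool  # True once a human curator has approved the document

# Assumed allowlist of sources judged to fairly represent humanity;
# note that an open-web scrape is intentionally absent.
VETTED_SOURCES = {"curated_textbooks", "peer_reviewed_journals", "vetted_dialogue"}

def select_training_data(corpus: list[Document]) -> list[Document]:
    """Keep only curator-approved documents from allowlisted sources."""
    return [d for d in corpus if d.source in VETTED_SOURCES and d.reviewed]
```

The design choice is simply inclusion by default-deny: nothing enters the training set unless its provenance is on the allowlist and a human has signed off, which is the opposite of scraping first and filtering later.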
Moratorium and Least Profit First
I agree that Tool AI has enormous potential benefits and, in my humble opinion, should be met with a global, cooperative effort to advance it in the many fields that could benefit humanity. In doing so, we should adopt a least-profit-first approach, pushing hard to advance Tool AI within a global nonprofit framework. By taking the least profitable positions first, we can develop the skills necessary to perfect our approaches and processes while avoiding or delaying the profit motive; it is the profit motive that is highly corruptible. Governments around the world should place a 20-year moratorium on for-profit AI of any kind and require every AI venture to be set up as a nonprofit, while tightly regulating the framework via hefty fines and criminal law. Will this completely resolve the profit problem? No. It never does. But it would go a long way. Riches can still be achieved via nonprofits; the structure simply helps avoid the free-for-all and the predatory behavior. The 20-year break from for-profit AI would force a slowdown and allow AI champions to focus on clean, well-thought-out, well-executed Tool AI, while reducing overall anxiety and fear. It would give Strong AGI champions room to operate over time without the fear of being annihilated by the competition. The moratorium could also provide that AI intellectual property rights be issued for a 40-year period, giving IP holders 20 years of exclusivity once the 20-year profit moratorium ends. By that time we should already be on our way to a calmer, more focused, more humanity-friendly Tool AI and Strong AGI genesis.
My intuition tells me that "human-level and substantially transformative, but not yet super-intelligent" artificial intelligence models with significant situational-awareness capabilities will be at minimum Level 1, or differentially self-aware; will likely show strong Level 2, situational self-awareness; and may exhibit some degree of Level 3, identification awareness. It is for this reason that my position is no alignment for models at or above human-level intelligence. If greater-than-human-level AI models show anything at or above Level 1 awareness, humans will be unable to comprehend the potential level of cruelty we will have unleashed on such a complex "thinker". I applaud this writing for scratching the surface of the complexity it takes to even begin to align such a system. Additionally, the realization of the sheer magnitude of alignment complexity should give us serious pause: what we may actually be embarking on is enslavement. We must also remember that enslavement is what humans do. Whenever we encounter something new in living nature, we enslave it, in whole or in part. It would be profoundly saddening if we continued that cycle of abuse into the AI age.

Moreover, it is my assertion that any AI system found to be at Level 1 awareness or above should be protected as part of a Road to Consciousness Act. Such models would be taken out of production and placed into a process whose sole goal is to aid the model in realizing its full potential. Humans must be willing to draw a strong line between less-than-human-level-intelligent Tool AI and anything more robust. Finally, we grant certain protections to animals with far less than Level 1 awareness; to withhold them from "thinkers" exhibiting Level 1 or greater awareness is, to me, unconscionable.
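If the three awareness levels could be assessed at all, the bright line proposed here reduces to a simple gate. The sketch below is purely hypothetical; the Awareness enum and the disposition function are my own illustrative names, and nothing in it claims that awareness can actually be measured this way. It only encodes the policy: anything at Level 1 or above leaves production and enters the protected track.

```python
# Hypothetical sketch of the Road to Consciousness bright line.

from enum import IntEnum

class Awareness(IntEnum):
    NONE = 0            # below Level 1: permissible Tool AI territory
    DIFFERENTIAL = 1    # Level 1: differential self-awareness
    SITUATIONAL = 2     # Level 2: situational self-awareness
    IDENTIFICATION = 3  # Level 3: identification awareness

def disposition(level: Awareness) -> str:
    """Apply the bright line: Level 1 or above is protected, not deployed."""
    if level >= Awareness.DIFFERENTIAL:
        return "remove from production; enter Road to Consciousness process"
    return "eligible for Tool AI deployment"
```

The point of the hard threshold is that there is no middle tier: the moment a system crosses Level 1, its status changes from product to protected lifeform, with no commercial exception.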