Spent about 45 minutes processing this mentally. Did not look through the code or wonder about the reliability of the results. Here are my thoughts:
Why ask an AI to shut down if it recognizes its superiority?
If it cannot become powerful enough for humans to handle, it cannot become powerful enough to protect humans from another AI that is too powerful for humans to handle.
Based on what I can tell, AP fine-tuning will lead to the AI more likely simulating the relevant AP and its tokens will be what the simulator thinks the AP would return next. This means it is brittle to systems that leverage this model since they can simply beam search and ignore the shutdown beams. RLHF-like fine-tuning strategies probably perform better, according to my intuition.
How successful is this strategy given increasing scale of LLMs and its capabilities? If this was performed on multiple scales of GPT-2 , it would provide useful empirical data about robustness to scale. My current prediction is that this is not robust to scale given that you are fine-tuning on stories to create personas. The smarter the model is, the more likely it is to realize when it is being tested to provide the appropriate “shutdown!” output and pretend to be the AP, and in out-of-distribution scenarios, it will pretend to be some other persona instead.
The AP finetuned model seems vulnerable to LM gaslighting the same way ChatGPT is. This does not seem to be an improvement over OAI’s Instruct fine-tuning or whatever they did to GPT-4.
I apologize for not interacting with certain subsets of your post that you may consider relevant or significant as a contribution. That is mainly because I think their significance is downstream of certain assumptions you and I disagree about.
Why ask an AI to shut down if it recognizes its superiority? If it cannot become powerful enough for humans to handle, it cannot become powerful enough to protect humans from another AI that is too powerful for humans to handle.
As discussed in the post, I aimed for a solution that can embed a shutdown protocol that is modeled in a real world scenario. Of course It could have been just a pause for repair or debug mode but yeah, I focused on a single idea.. Can we embed a shutdown instruction reliably. Which I was able to demonstrate.
How successful is this strategy given increasing scale of LLMs and its capabilities? If this was performed on multiple scales of GPT-2 , it would provide useful empirical data about robustness to scale. My current prediction is that this is not robust to scale given that you are fine-tuning on stories to create personas. The smarter the model is, the more likely it is to realize when it is being tested to provide the appropriate “shutdown!” output and pretend to be the AP, and in out-of-distribution scenarios, it will pretend to be some other persona instead.
As mentioned in the “what’s next” section, I will look into these part once I have the means to upgrade my old mac. But I believe it will be easier to do this because of the larger number of parameters and layers. Again, this is just a demonstration of how to solve the inner, outer alignment problem and corrigibility in a single method. As things go complex in this method utilizing a learning rates, batching, epochs and number of quality archetypal data will matter. This method can scale as the need arises. But that requires a team effort which I’m looking to address at for the moment.
The AP finetuned model seems vulnerable to LM gaslighting the same way ChatGPT is. This does not seem to be an improvement over OAI’s Instruct fine-tuning or whatever they did to GPT-4.
sorry I’m not familiar with LM gaslighting.
Based on what I can tell, AP fine-tuning will lead to the AI more likely simulating the relevant AP and its tokens will be what the simulator thinks the AP would return next. This means it is brittle to systems that leverage this model since they can simply beam search and ignore the shutdown beams. RLHF-like fine-tuning strategies probably perform better, according to my intuition.
has RLHF solved the problems I tried to solve in this project? Again this project is to demonstrate q new concept not an all in a bucket solution at the moment. But given that it is scalable, avoid all researcher /human, team, CEO or even investor biases… This is a strong candidate for an alignment solution.
Also, to correct—I call this the archetypal transfer learning method (ATL) for the fine tuning version. My original proposal to the alignment awards was to not use unstructured data because alignment issues arises from that. If I were to build an aligned AI system, I will not use random texts that doesn’t model our thinking. We think in archetypal patterns and 4chan, social media and reddit platforms are not the best sources. I’d rather books, scientific papers or scripts from podcasts… Like long form quality discussions are better sources of human thinking..
Spent about 45 minutes processing this mentally. Did not look through the code or wonder about the reliability of the results. Here are my thoughts:
Why ask an AI to shut down if it recognizes its superiority? If it cannot become powerful enough for humans to handle, it cannot become powerful enough to protect humans from another AI that is too powerful for humans to handle.
Based on what I can tell, AP fine-tuning will lead to the AI more likely simulating the relevant AP and its tokens will be what the simulator thinks the AP would return next. This means it is brittle to systems that leverage this model since they can simply beam search and ignore the shutdown beams. RLHF-like fine-tuning strategies probably perform better, according to my intuition.
How successful is this strategy given increasing scale of LLMs and its capabilities? If this was performed on multiple scales of GPT-2 , it would provide useful empirical data about robustness to scale. My current prediction is that this is not robust to scale given that you are fine-tuning on stories to create personas. The smarter the model is, the more likely it is to realize when it is being tested to provide the appropriate “shutdown!” output and pretend to be the AP, and in out-of-distribution scenarios, it will pretend to be some other persona instead.
The AP finetuned model seems vulnerable to LM gaslighting the same way ChatGPT is. This does not seem to be an improvement over OAI’s Instruct fine-tuning or whatever they did to GPT-4.
I apologize for not interacting with certain subsets of your post that you may consider relevant or significant as a contribution. That is mainly because I think their significance is downstream of certain assumptions you and I disagree about.
As discussed in the post, I aimed for a solution that can embed a shutdown protocol that is modeled in a real world scenario. Of course It could have been just a pause for repair or debug mode but yeah, I focused on a single idea.. Can we embed a shutdown instruction reliably. Which I was able to demonstrate.
As mentioned in the “what’s next” section, I will look into these part once I have the means to upgrade my old mac. But I believe it will be easier to do this because of the larger number of parameters and layers. Again, this is just a demonstration of how to solve the inner, outer alignment problem and corrigibility in a single method. As things go complex in this method utilizing a learning rates, batching, epochs and number of quality archetypal data will matter. This method can scale as the need arises. But that requires a team effort which I’m looking to address at for the moment.
sorry I’m not familiar with LM gaslighting.
has RLHF solved the problems I tried to solve in this project? Again this project is to demonstrate q new concept not an all in a bucket solution at the moment. But given that it is scalable, avoid all researcher /human, team, CEO or even investor biases… This is a strong candidate for an alignment solution.
Also, to correct—I call this the archetypal transfer learning method (ATL) for the fine tuning version. My original proposal to the alignment awards was to not use unstructured data because alignment issues arises from that. If I were to build an aligned AI system, I will not use random texts that doesn’t model our thinking. We think in archetypal patterns and 4chan, social media and reddit platforms are not the best sources. I’d rather books, scientific papers or scripts from podcasts… Like long form quality discussions are better sources of human thinking..