Hi Gianluca, it’s great that you liked the post and the idea! I think our approaches have a lot in common, and that we hold similar views on how activation steering might be useful!
I would definitely like to chat and see whether any synergies emerge :)
Very happy to support you :)
It took me some time to work through your paper; please find a few comments below:
(1) You are using SVD to find the control vectors (as other authors do), but your process is more sophisticated in several ways: how the matrices are generated, how they are reduced, and how the magnitude of each steering vector is chosen. You are also using the non-steered response as an active part of your calculations, something other authors do only marginally. The final result works, but the process looks arbitrary to me (tbh, all steering techniques are a bit arbitrary at the moment). What is the added value of these operations? Maybe you have some intuition about why your calculation finds the “correct” amount of steering; I am curious to know more.
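For context, here is the generic SVD recipe I have in mind when I call other approaches simpler. This is a minimal sketch, not your method: the contrastive-pair setup, the variable names, and the magnitude parameter `alpha` are all illustrative assumptions on my part.

```python
import numpy as np

def steering_vector(acts_pos: np.ndarray, acts_neg: np.ndarray) -> np.ndarray:
    """Generic SVD-based steering direction (illustrative, not the paper's method).

    acts_pos / acts_neg: residual-stream activations at one layer for
    contrastive prompt pairs, each of shape (n_pairs, d_model).
    """
    diffs = acts_pos - acts_neg          # one activation difference per pair
    diffs -= diffs.mean(axis=0)          # center before factoring
    # Top right-singular vector = dominant direction of the differences.
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    return vt[0]                         # unit-norm steering direction

# At inference time the vector is added to the layer's activations with a
# hand-chosen magnitude alpha, which is exactly the step that usually
# looks arbitrary:  h_steered = h + alpha * steering_vector(...)
```

My question is essentially where your extra machinery improves on this baseline, and in particular what replaces the hand-tuned `alpha`.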
(2) Ethics plays a fundamental role in finding a collective solution to AI safety, but I tend to think we should solve alignment first. It would be interesting to see your future research move in that direction. I can help brainstorm some topics that have not yet been studied exhaustively. Let me know!