Note that task vectors require finetuning. From the newly updated related work section:
> Lastly, *Editing Models with Task Arithmetic* explored a "dual" version of our activation additions. That work took the difference between a model's weights before and after finetuning on a new task, and then added or subtracted these task-specific weight-difference vectors. While interesting, task arithmetic requires finetuning. In *Activation additions have advantages over (RL/supervised) finetuning*, we explain the advantages our approach has over finetuning.
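For concreteness, here is a minimal sketch of the weight-space operation that task arithmetic performs, assuming you already have pretrained and finetuned checkpoints as PyTorch state dicts (the file paths and function names below are hypothetical, not the paper's implementation):

```python
import torch

def task_vector(
    pretrained: dict[str, torch.Tensor],
    finetuned: dict[str, torch.Tensor],
) -> dict[str, torch.Tensor]:
    """Task vector: finetuned weights minus pretrained weights."""
    return {k: finetuned[k] - pretrained[k] for k in pretrained}

def apply_task_vector(
    pretrained: dict[str, torch.Tensor],
    tau: dict[str, torch.Tensor],
    alpha: float = 1.0,
) -> dict[str, torch.Tensor]:
    """Add (alpha > 0) or subtract (alpha < 0) a scaled task vector."""
    return {k: pretrained[k] + alpha * tau[k] for k in pretrained}

# Hypothetical usage:
# base = torch.load("pretrained.pt")          # weights before finetuning
# ft = torch.load("finetuned_on_task.pt")     # weights after finetuning
# tau = task_vector(base, ft)
# edited = apply_task_vector(base, tau, alpha=-1.0)  # subtract: "unlearn" the task
```

Note that producing `tau` presupposes a finetuning run to obtain the second checkpoint, which is the point above: activation additions instead modify activations during the forward pass, with no gradient updates at all.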