Re 6: Interesting. It was my impression that "chain of thought" and other techniques notably improved LLM performance. Regardless, I don't see compositional improvements as a good thing: they are hard to understand as they are being created, and the resulting improvements seem harder to predict. I am worried about RSI in a misaligned system created or improved via composition.
Re race dynamics: It seems to me there are multiple approaches to coordinating a pause. It doesn't seem likely that we could get governments or companies to head a pause. Movements from the general population might help, but a movement led by AI scientists seems much more plausible to me. People working on these systems ought to be more aware of the issues and more sympathetic to avoiding the risks, and since they are the ones doing the development work, they are better positioned to refuse to do work that hasn't been shown to be safe.
Based on your comment and other thoughts, my current plan is to publish research as normal in order to move forward with my mechanistic interpretability career goals, but also to seek out and/or create a guild or network of AI scientists and workers, with the goal of joining other such organizations into a global network that promotes alignment work and rejects unsafe capabilities work.
Sounds good to me!