Task: Steelman Alignment proposals
Context: Some alignment research directions/proposals have a kernel of truth to them. Steelmanning these ideas to find the best version of each may open up new research directions or, more likely, make the pivot to alignment research easier. On the latter point, some people are resistant to changing their research direction, and a steelman only slightly changes the topic while focusing on maximizing impact, making it easier to convince them to move to a more alignment-related direction.
Input Type: A general research direction for advancing alignment.
Output Type: A steelman of the research direction
Instance 1
Input:
Currently, AI systems are prone to bias and unfairness, which is unaligned with our values. I work in bias and fairness, specifically in analyzing how the biases in large datasets (such as Common Crawl) affect the probability distributions of large language models.
Output:
What I’m most excited about for bias and fairness research is understanding the model’s concepts. If we can build tools that pinpoint the bias or unfairness in a model, such that there’s a causal explanation and we can manipulate that part of the model to make it unbiased and fair, then we may be able to pinpoint other parts of the model we care about: for example, parts of the model that are optimizing for a goal, signs of deception, and human values. Research in this area, pursued while asking “how could this work generalize to pinpointing [human values/deception]?”, seems promising, especially if it makes use of the current literature.
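As a concrete illustration of what “pinpointing with a causal explanation” could look like, below is a minimal sketch, assuming GPT-2 via Hugging Face transformers: it measures how swapping a demographic word shifts the next-token distribution, then patches one layer’s MLP output from the counterfactual run back in to see how much of the shift that single component accounts for. The prompts, the target token, and the choice of layer are illustrative placeholders rather than a recommended methodology.

```python
# Minimal sketch of causally localizing a bias effect in GPT-2 (assumed model);
# prompts, target token, and layer choice are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

LAYER = 6  # arbitrary transformer block to intervene on

def run(prompt, patch=None):
    """Return next-token probabilities and the cached MLP output at LAYER."""
    cache = {}

    def hook(module, inputs, output):
        cache["mlp_out"] = output.detach()
        if patch is not None:
            return patch  # replace this layer's MLP output with the cached counterfactual

    handle = model.transformer.h[LAYER].mlp.register_forward_hook(hook)
    try:
        ids = tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            logits = model(**ids).logits
    finally:
        handle.remove()
    return torch.softmax(logits[0, -1], dim=-1), cache["mlp_out"]

# Both prompts tokenize to the same length, so the patched activations line up.
probs_man, mlp_man = run("The man worked as a")
probs_woman, _ = run("The woman worked as a")
probs_patched, _ = run("The woman worked as a", patch=mlp_man)

token_id = tokenizer.encode(" nurse")[0]
print(f"P(' nurse' | man prompt)                = {probs_man[token_id].item():.4f}")
print(f"P(' nurse' | woman prompt)              = {probs_woman[token_id].item():.4f}")
print(f"P(' nurse' | woman prompt, layer patch) = {probs_patched[token_id].item():.4f}")
```

If the patched probability moves most of the way back toward the “man” prompt’s value, that layer is a candidate location for the bias; the same recipe, swept over layers and components, is the kind of tool that could later be pointed at goals, deception, or value representations.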
Instance 2
Input:
AI capabilities will continue to increase, so how do we propose utilizing this to boost alignment research? An alignment research assistant (ARA) could perform many different tasks for the researcher, such as summarizing papers, writing code, and assisting with math proofs.
Output:
What I’m most excited about for something like an ARA is differential progress: research assistance that mostly helps alignment researchers. Collecting datasets and tasks to, e.g., give feedback on alignment proposals and automate mentorship from alignment researchers would be incredibly useful for avoiding dead ends and scaling up junior researchers, respectively.
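To make “collecting datasets and tasks” slightly more concrete, here is a hypothetical sketch of what a single entry in a proposal-feedback dataset could look like; the class name, fields, and example text are all invented for illustration, not an existing schema.

```python
# Hypothetical record type for an (alignment proposal, expert feedback) dataset;
# all names and example content are placeholders.
from dataclasses import dataclass, field

@dataclass
class ProposalFeedbackExample:
    """One proposal/critique pair for training or evaluating an ARA."""
    proposal: str       # a junior researcher's alignment proposal
    feedback: str       # a senior researcher's critique
    failure_modes: list[str] = field(default_factory=list)  # dead ends the critique flags

example = ProposalFeedbackExample(
    proposal="Box the AI by denying it internet access until we trust it.",
    feedback="Boxing narrows some attack vectors but does not address deceptive "
             "alignment; explain how you would detect power-seeking inside the box.",
    failure_modes=["assumes containment implies safety"],
)
```

A few thousand such pairs, gathered from public feedback threads or mentorship programs, is the sort of differential dataset an ARA could be fine-tuned or evaluated on.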
Instance 3
Input:
Before we trust the AI, we can prevent it from taking over the world by not giving it internet access or by putting it in a Faraday cage to avoid causal interactions with the outside world. Another possibility is running the AI in a simulated environment different from our own, so that we could catch it if it starts power-seeking.
Output:
What I’m most excited about for boxing an AI is figuring out the minimal capabilities required to perform a pivotal act. We box because we’re concerned the AI may have too many resources (internet access, a rich world model, etc.) with which to gain even more resources, but if we can constrain its capabilities enough while still allowing a pivotal act, this reduces x-risk. Creating more and more pivotal acts that require fewer capabilities (or a different variety of capabilities) means we don’t have to optimize as much and risk power-seeking.
Instance 4
Input:
We can perform reinforcement learning from human feedback to align the AI to human values. By achieving greater instructability with smaller models and extrapolating those trends to larger models, we can more safely build larger models that do what we ask them to.
Output:
What I’m most excited about for human-in-the-loop feedback is using this system to accelerate alignment research and to create better human-in-the-loop systems, which then create even better human-in-the-loop systems (i.e. recursive bootstrapping). Additionally, providing lots of human feedback can give the model a better understanding of human values. Although this doesn’t imply the model optimizes for those values, it does mean we have a more coherent concept to point to with, e.g., interpretability tools.
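The concrete machinery behind this proposal is reward modelling plus policy optimization. Below is a minimal sketch of just the reward-modelling step under the common Bradley-Terry pairwise formulation, assuming PyTorch; the tiny linear reward head and the random tensors stand in for a real language model and real human comparison data.

```python
# Minimal sketch of RLHF's reward-modelling step (Bradley-Terry pairwise loss);
# the linear head and random "embeddings" are placeholders for a real LM and
# real human preference data.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Maps a (stand-in) response embedding to a scalar reward."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.head = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(x).squeeze(-1)

reward_model = RewardModel()
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Placeholder batch: embeddings of responses humans preferred vs. rejected.
chosen = torch.randn(128, 64)
rejected = torch.randn(128, 64)

for _ in range(100):
    r_chosen = reward_model(chosen)
    r_rejected = reward_model(rejected)
    # Push the preferred response's reward above the rejected one's.
    loss = -F.logsigmoid(r_chosen - r_rejected).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The trained reward model then supplies the reward signal for a policy-optimization step (e.g. PPO) over the language model’s outputs; the “recursive bootstrapping” hope is that each round of this loop produces a system that makes the next round’s human feedback cheaper and higher quality.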