Questions about AI that bother me
Crossposted from the EA Forum: https://forum.effectivealtruism.org/posts/4TcaBNu7EmEukjGoc/questions-about-ai-that-have-bothered-me
As 2022 comes to an end, I thought it’d be good to maintain a list of “questions that bother me” in thinking about AI safety and alignment. I don’t claim I’m the first or only one to have thought about them. I’ll keep updating this list.
(The title of this post alludes to the book “Things That Bother Me” by Galen Strawson)
First posted: 12/6/22
Last updated: 1/30/23
General Cognition
What signs (e.g., situational awareness) should I look for to tell whether general cognition has started to emerge in a model?
Will a capacity for “doing science” be a sufficient condition for general intelligence?
How easy was it for humans to develop science (e.g., compared to evolving to take over the world)?
Deception
What kind of interpretability tools do we need to avoid deception?
How do we get these interpretability tools? And even if we do get them, what if they turn out to be like neuroscience is for understanding brains (i.e., not enough)?
How can I tell whether a model has found another goal to optimize for during its training?
What is it that makes a model switch to a goal different from the one set by the designer? How do you prevent it from doing so?
Agent Foundations
Is the description/modeling of an agent ultimately a mathematical task?
From where do human agents derive their goals?
Is value fragile?
Theory of Machine Learning
What explains the success of deep neural networks?
Why was connectionism unlikely to succeed?
Epistemology of Alignment (I’ve written about this here)
How can we accelerate research?
Has philosophy ever really helped scientific research, e.g., with concept clarification?
What are some concrete takeaways from the history of science and technology that could be used as advice for alignment researchers and field-builders?
The emergence of the AI Safety paradigm
Philosophy of Existential Risk
What is the best way to explain the difference between forecasting extinction scenarios and narratives from chiliasm or eschatology?
What is the best way to think about serious risks in the future without reinforcing a sense of doom?
Teaching and Communication
Younger people (e.g., my undergraduate students) seem more willing to entertain scenarios of catastrophes and extinction compared to older people (e.g., academics). I find that strange and I don’t have a good explanation as to why that is the case.
The idea of a technological singularity was not difficult to explain and discuss with my students. I think that’s surprising given how powerful the weirdness heuristic is.
In philosophical discussions, the idea of “agency” or “being an agent” was easily conflated with “consciousness.” It’s not clear to me why that happened, since I gave a very specific definition of agency.
Most of my students thought that AI models will never be conscious; it was difficult for them to articulate specific arguments about this, but their intuition seemed to be that there’s something uniquely human about consciousness/sentience.
The worry that “AIs will take our jobs in the future” seems to be very common among both students and academics.
About 80% of a ~25-person class thought that philosophy is the right thing to major in if you’re interested in how minds work. The question I asked them was: “Should you major in philosophy or cognitive science if you want to study how minds work?”
Governance/Strategy
Should we try to slow down AI progress? What does this mean in concrete steps?
How should we deal with capabilities externalities?
How should concrete AI risk stories inform/affect AI governance and short-term/long-term future planning?