What do you mean by “formalizing all of philosophy”? I don’t see ‘From Philosophy to Math to Engineering’ as arguing that we should turn all of philosophy into math (and I don’t even see the relevance of this to Friendly AI). It’s just claiming that FAI research begins with fuzzy informal ideas/puzzles/goals (like the sort you might see philosophers debate), then tries to move in a more formal direction.
I was abusing hyperbole there. What I was pointing out is the impression that old-school MIRI (a lot of the HRAD work) thinks that solving the alignment problem requires deconfusing every related philosophical problem in terms of maths, and then implementing the result. Such a view doesn’t seem shared by many in the community, for a couple of reasons:
Some doubt that the level of mathematical formalization required is even possible
If timelines are quite short, we probably don’t have the time to do all that.
If AGI turns out to be prosaic AGI (which sounds like one of the best bets to make now), then what matters is aligning neural nets, not finding a way to write down a perfectly aligned AGI from scratch (related to the previous point, because it seems improbable that the formalization will be finished before neural nets reach AGI in such a prosaic setting).
I imagine part of Luke’s point in writing the post was to push back against the temptation to see formal and informal approaches as opposed (‘MIRI does informal stuff, so it must not like formalisms’), and to push back against the idea that analytic philosophers ‘own’ whatever topics they happen to have historically discussed.
Thanks for that clarification; it makes sense to me. That being said, multiple people (both me a couple of years ago and people I mentor/talk to) seem to have been pushed by MIRI’s work in general to think that they need an extremely high level of maths and formalism to even contribute to alignment, which I disagree with, and apparently Luke and you do too.
Reading the linked post, what jumps out at me is the framing of Friendly AI as being about turning philosophy into maths, and I think that’s the culprit. That is part of the process, an important one, and it’s great if we manage it. But expressing and thinking through problems of alignment at a less formal level is still very useful and important; that’s how we got most of the big insights and arguments in the field.
Pearl’s causality (the main example of “turning philosophy into mathematics” Luke uses) was an example of achieving deconfusion about causality, not an example of ‘merely formalizing’ something. I agree that calling this deconfusion is a clearer way of pointing at the thing, though!
Funnily enough, it sounds like MIRI itself (specifically Scott) has called that into doubt with Finite Factored Sets. This work doesn’t throw away all of Pearl’s work, but it argues that some parts were missing and some assumptions unwarranted. Even a case of deconfusion as grounded as Pearl’s isn’t necessarily the right abstraction/deconfusion.
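(For concreteness, here’s a rough sketch of the central definition as I understand it, in my own notation rather than Scott’s. A factorization of a finite set $S$ is a set $B = \{b_1, \dots, b_n\}$ of nontrivial partitions of $S$ such that choosing one part from each partition pins down exactly one element of $S$:
\[
\forall (X_1, \dots, X_n) \in b_1 \times \dots \times b_n, \qquad \left| X_1 \cap \dots \cap X_n \right| = 1 .
\]
A finite factored set is then a pair $(S, B)$ with $B$ a factorization of $S$; time and orthogonality are defined combinatorially from $B$, rather than from the edges of a causal graph as in Pearl’s framework.)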
The subtlety I’m trying to point out: actually deconfusing something formally is really hard, in part because the formalizations we come up with seem so much more serious and research-like than the fuzzy intuitions underlying them. And so I’ve found it really useful to always emphasize that what we actually care about is the intuition/weird philosophical thinking, and the mathematical models are just tools to get clearer about the former. Which I expect is obvious to you and Luke, but isn’t to so many others (me from a couple of years ago included).
I was abusing hyperbole there. What I was pointing out is the impression that old-school MIRI (a lot of the HRAD work) thinks that solving the alignment problem requires deconfusing every related philosophical problem in terms of maths, and then implementing the result. Such a view doesn’t seem shared by many in the community, for a couple of reasons:
I’m still not totally clear here about which parts were “hyperbole” vs. endorsed. You say that people’s “impression” was that MIRI wanted to deconfuse “every related philosophical problem”, which suggests to me that you think there’s some gap between the impression and reality. But then you say “such a view doesn’t seem shared by many in the community” (as though the “impression” is an actual past-MIRI-view others rejected, rather than a misunderstanding).
HRAD has always been about deconfusion (though I agree we did a terrible job of articulating this), not about trying to solve all of philosophy or “write down a perfectly aligned AGI from scratch”. The spirit wasn’t ‘we should dutifully work on these problems because they’re Important-sounding and Philosophical’; from my perspective, it was more like ‘we tried to write down a sketch of how to align an AGI, and immediately these dumb issues with self-reference and counterfactuals and stuff cropped up, so we tried to get those out of the way fast so we could go back to sketching how to aim an AGI at intended targets’. As Eliezer put it,
It was a dumb kind of obstacle to run into—or at least it seemed that way at that time. It seemed like if you could get a textbook from 200 years later, there would be one line of the textbook telling you how to get past that.
From my perspective, the biggest reason MIRI started diversifying approaches away from our traditional focus was shortening timelines, where we still felt that “conceptual” progress was crucial, and still felt that marginal progress on the Agent Foundations directions would be useful; but we now assigned more probability to ‘there may not be enough time to finish the core AF stuff’, enough to want to put a lot of time into other problems too.
Actually, I’m not sure how to categorize MIRI’s work using your conceptual vs. applied division. I’d normally assume “conceptual”, because our work is so far away from prosaic alignment; but you also characterize applied alignment research as being about “experimentally testing these ideas [from conceptual alignment]”, which sounds like the 2017-initiated lines of research we described in our 2018 update. If someone is running software experiments to test ideas about “Seeking entirely new low-level foundations for optimization” outside the current ML paradigm, where does that fall?
If AGI turns out to be prosaic AGI (which sounds like one of the best bets to make now), then what matters is aligning neural nets, not finding a way to write down a perfectly aligned AGI from scratch
Prosaic AGI alignment and “write down a perfectly aligned AGI from scratch” both seem super doomed to me, compared to approaches that are neither prosaic nor perfectly-neat-and-tidy. Where does research like that fall?
HRAD has always been about deconfusion (though I agree we did a terrible job of articulating this), not about trying to solve all of philosophy or “write down a perfectly aligned AGI from scratch”. The spirit wasn’t ‘we should dutifully work on these problems because they’re Important-sounding and Philosophical’; from my perspective, it was more like ‘we tried to write down a sketch of how to align an AGI, and immediately these dumb issues with self-reference and counterfactuals and stuff cropped up, so we tried to get those out of the way fast so we could go back to sketching how to aim an AGI at intended targets’.
I think that the issue is that I have a mental model of this process you describe that summarizes it as “you need to solve a lot of philosophical issues for it to work”, and so that’s what I get by default when I query for that agenda. Still, I always had the impression that this line of work focused more on how to build a perfectly rational AGI than on building an aligned one. Can you explain why that’s inaccurate?
From my perspective, the biggest reason MIRI started diversifying approaches away from our traditional focus was shortening timelines, where we still felt that “conceptual” progress was crucial, and still felt that marginal progress on the Agent Foundations directions would be useful; but we now assigned more probability to ‘there may not be enough time to finish the core AF stuff’, enough to want to put a lot of time into other problems too.
Yeah, I think this is a pretty common perspective on that work from outside MIRI. That’s my take (that there isn’t enough time to solve all of the necessary components), and it’s the one I’ve seen people use multiple times when discussing MIRI.
Actually, I’m not sure how to categorize MIRI’s work using your conceptual vs. applied division. I’d normally assume “conceptual”, because our work is so far away from prosaic alignment; but you also characterize applied alignment research as being about “experimentally testing these ideas [from conceptual alignment]”, which sounds like the 2017-initiated lines of research we described in our 2018 update. If someone is running software experiments to test ideas about “Seeking entirely new low-level foundations for optimization” outside the current ML paradigm, where does that fall?
A really important point is that the division isn’t meant to split researchers themselves but the research. So the experimental part would be applied alignment research and the rest conceptual alignment research. What’s interesting is that this is a good example of applied alignment research that doesn’t have the benefits I mention for more prosaic applied alignment research: being publishable at big ML/AI conferences, being within an accepted paradigm of modern AI...
Prosaic AGI alignment and “write down a perfectly aligned AGI from scratch” both seem super doomed to me, compared to approaches that are neither prosaic nor perfectly-neat-and-tidy. Where does research like that fall?
I would say that the non-prosaic approaches require at least some conceptual alignment research (because the research can’t be done fully inside the current paradigms of ML and AI), but they probably encompass some applied research too. Maybe Steve’s work is a good example, with a proposed split of two of his posts in this comment.
Still, I always had the impression that this line of work focused more on how to build a perfectly rational AGI than on building an aligned one. Can you explain why that’s inaccurate?
I don’t know what you mean by “perfectly rational AGI”. (Perfect rationality isn’t achievable, rationality-in-general is convergently instrumental, and rationality is insufficient for getting good outcomes. So why would that be the goal?)
I think of the basic case for HRAD this way:
We seem to be pretty confused about a lot of aspects of optimization, reasoning, decision-making, etc. (Embedded Agency is talking about more or less the same set of questions as HRAD, just with subsystem alignment added to the mix.)
If we were less confused, it might be easier to steer toward approaches to AGI that make it easier to do alignment work like ‘understand what cognitive work the system is doing internally’, ‘ensure that none of the system’s compute is being used to solve problems we don’t understand / didn’t intend’, ‘ensure that the amount of quality-adjusted thinking the system is putting into the task at hand is staying within some bound’, etc.
These approaches won’t look like decision theory, but being confused about basic ground-floor things like decision theory is a sign that you’re likely not in an epistemic position to efficiently find such approaches, much like being confused about how/whether chess is computable is a sign that you’re not in a position to efficiently steer toward good chess AI designs.
Maybe what I want is a two-dimensional “prosaic AI vs. novel AI” and “whiteboards vs. code”. Then I can more clearly say that I’m pretty far toward ‘novel AI’ on one dimension (though not as far as I was in 2015), separate from whether I currently think the bigger bottlenecks (now or in the future) are more whiteboard-ish problems vs. more code-ish problems.
What you propose seems valuable, although not an alternative to my distinction IMO. This 2-D grid is more about what people consider the most promising way of getting aligned AGI and how to get there, whereas my distinction focuses on separating two different types of research which have very different methods, epistemic standards, and needs in terms of field-building.