We start with some ML model which has lots of knowledge from many different fields, like GPT-n. We also have a human who has a domain-specific problem to solve (e.g. a coding problem, or a translation into another language) but lacks the relevant domain knowledge (e.g. coding skills, or language fluency). The problem, roughly speaking, is to get the ML model and the human to work as a team, and produce an outcome at-least-as-good as a human expert in the domain. In other words, we want to factorize the “expert knowledge” and the “having a use-case” parts of the problem. ... This sort of problem comes up all the time in real-world businesses. We could just as easily consider a product designer at a tech startup (who knows what they want but little about coding), an engineer (who knows lots about coding but doesn’t understand what the designer wants)...
These examples conflate “what the human who provided the task to the AI+human combined system wants” with “what the human who is working together with the AI wants” in a way that I think is confusing and sort of misses the point of sandwiching. In sandwiching, “what the human wants” is implicit in the choice of task, but the “what the human wants” part isn’t really what is being delegated or factored off to the human who is working together with the AI; what THAT human wants doesn’t enter into it at all. Using Cotra’s initial example to belabor the point: if someone figured out a way to get some non-medically-trained humans to work together with a mediocre medical-advice-giving AI in such a way that the output of the combined human+AI team is actually good medical advice, it doesn’t matter whether those non-medically-trained humans actually care that the result is good medical advice; they might not even individually know what the purpose of the system is, and just be focused on whatever their piece of the task is—say, verifying the correctness of individual steps of a chain of reasoning generated by the system, or checking that each step logically follows from the previous, or whatever. Of course this might be really time intensive, but if you can improve even slightly on the performance of the original mediocre system, then hopefully you can train a new AI system to match the performance of the original AI+human system by imitation learning, and bootstrap from there.
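To make that last step concrete, here is a minimal Python sketch of the bootstrapping loop as I understand it. It is only an illustration under my own assumptions: every name in it (the model’s generate_reasoning / answer_from_steps / imitate methods, the human_verify_step judge) is a hypothetical placeholder, not an API from the discussion or any real library.

```python
# Conceptual sketch only: a toy version of the sandwiching bootstrap loop.
# All model methods and the human-judgment callback are hypothetical placeholders.

from typing import Callable, List, Tuple


def amplified_answer(model, question: str,
                     human_verify_step: Callable[[str], bool]) -> str:
    """The AI+human 'team': the mediocre model proposes a chain of reasoning,
    and non-expert humans only check individual steps (e.g. 'does this step
    follow from the previous one?'), without needing domain expertise."""
    steps: List[str] = model.generate_reasoning(question)
    vetted = [s for s in steps if human_verify_step(s)]  # cheap, non-expert checks
    return model.answer_from_steps(question, vetted)


def bootstrap(model, questions: List[str],
              human_verify_step: Callable[[str], bool], n_rounds: int = 3):
    """Distill the (hopefully slightly better) AI+human system back into the
    model via imitation learning, then repeat with the improved model."""
    for _ in range(n_rounds):
        demos: List[Tuple[str, str]] = [
            (q, amplified_answer(model, q, human_verify_step))
            for q in questions
        ]
        model = model.imitate(demos)  # hypothetical fine-tuning / distillation step
    return model
```

The point of the sketch is just that human_verify_step only requires step-level judgment, not domain expertise, yet each round of distillation is hoped to leave the model a bit better than the one it started from.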
The point, as I understand it, is that if we can get human+AI systems to progress from “mediocre” to “excellent” (in other words, to remain aligned with the designer’s goal), despite the fact that the only feedback involved comes from humans who wouldn’t even be mediocre at achieving the designer’s goal if they were asked to do it themselves, and if we can do it in a way that generalizes across all kinds of tasks, then that would be really promising. To me, it seems hard enough that we definitely shouldn’t take a few failed attempts as evidence that it can’t be done, but not so hard as to seem obviously impossible.
I at least partially buy this, but it seems pretty easy to update the human analogies to match what you’re saying. Rather than analogizing to e.g. a product designer + software engineer, we’d analogize to the tech company CEO trying to build some kind of product assembly line which can reliably produce good apps without any of the employees knowing what the product is supposed to be. Which still seems like something for which there’s already immense economic pressure, and we still generally can’t do it well for most cognitive problems (although we can do it well for most manufacturing problems).
Thanks, I agree that’s a better analogy. Though of course, it isn’t necessary that none of the employees (participants in a sandwiching project) be aware of the CEO’s (sandwiching project overseer’s) goal; I was only highlighting that they need not necessarily be aware of it, in order to make it clear that the goals of the human helpers/judges aren’t especially relevant to what sandwiching, debate, etc. is really about. But of course, if it turns out that having the human helpers know what the ultimate goal is helps, then they’re absolutely allowed to be in on it...
Perhaps this is a bit glib, but arguably some of the most profitable companies in the mobile game space have essentially built product assembly lines to churn out fairly derivative games that are nevertheless unique enough to do well on the charts, and they absolutely do it by factoring the project of “making a game” into different bits that are done by different people (programmers, artists, voice actors, etc.), some of whom might not have any particular need to know what the product will look like as a whole to play their part.
However, I don’t want to press too hard on this game example, since you may or may not consider it ‘cognitive work’, and since it has other disanalogies with what we are actually talking about here. And to a certain degree I share your intuition that factoring certain kinds of tasks is probably very hard: if it weren’t, we might expect to see a lot more non-manufacturing companies whose main employee base consists of assembly lines (or hierarchies of assembly lines, or whatever) requiring workers with general intelligence but few specialized, rare skills, which I think is the broader point you’re making in this comment. I think that’s right, although I also think there are reasons for this that go beyond just the difficulty of task factorization, and which don’t all apply in the HCH etc. case, as some other commenters have pointed out.