Do you have interesting tasks in mind where expert systems are stronger and more robust than a 1B model trained from scratch with GPT-4 demos and where it’s actually hard (>1 day of human work) to build an expert system?
I would guess that it isn’t the case: interesting hard tasks have many edge cases which would make expert systems break. Transparency would enable you to understand the failures when they happen, but I don’t think that a pile of ad-hoc rules stacked on top of each other would be more robust than a model trained from scratch to solve the task. (The tasks I have in mind are sentiment classification and paraphrasing. I don’t have enough medical knowledge to imagine what the expert system for medical diagnosis would look like.) Or maybe you have in mind a particular way of writing expert systems which ensures that the ad-hoc rules don’t interact in weird ways that produce unexpected results?
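For concreteness, the kind of rule stack I have in mind for sentiment classification looks roughly like the sketch below. The keyword lists and patch rules are made up for illustration; the point is that each patch fixes one failure while new interactions keep slipping through.

```python
# Minimal sketch of an expert-system-style sentiment classifier built as a
# stack of ad-hoc rules. All keyword lists and rules are made up for
# illustration -- they are not from any real system.
NEGATIVE = {"bad", "terrible", "awful", "disappointing"}
POSITIVE = {"good", "great", "excellent", "love"}
NEGATIONS = {"not", "never", "hardly"}

def classify_sentiment(text: str) -> str:
    tokens = [t.strip(".,!?") for t in text.lower().split()]
    score = 0
    for i, tok in enumerate(tokens):
        weight = 1
        # Rule 2 (patch): flip polarity if the previous token is a negation.
        if i > 0 and tokens[i - 1] in NEGATIONS:
            weight = -1
        # Rule 1: count polarity keywords.
        if tok in POSITIVE:
            score += weight
        elif tok in NEGATIVE:
            score -= weight
    # Rule 3 (patch): repeated "!" intensifies whatever polarity we have.
    if text.count("!") >= 2:
        score *= 2
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# Each patch handles one failure, but edge cases and rule interactions remain:
print(classify_sentiment("not bad at all"))        # "positive" -- negation patch works
print(classify_sentiment("I don't love it"))       # "positive" -- "don't" isn't in NEGATIONS
print(classify_sentiment("great, just great..."))  # "positive" -- sarcasm defeats the rules
```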
No, I don’t have any explicit examples of that. However, I don’t think the main issue with GOFAI systems is necessarily that they have bad performance. Rather, I think the main problem is that they are very difficult and laborious to create. Consider, for example, IBM Watson. I consider this system to be very impressive, but it took a large team of experts four years of intense engineering to create, whereas you could probably get similar performance in an afternoon by simply fine-tuning GPT-2. That is less of a problem, though, if you can use a fleet of LLM software engineers and have them spend 1,000 subjective years on the problem over the course of a weekend.
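To make the effort comparison concrete, the “afternoon” alternative is roughly the sketch below: a minimal fine-tuning run, assuming the Hugging Face transformers and datasets libraries and a small file of task demonstrations (the file name demos.txt is made up for illustration).

```python
# Minimal sketch: fine-tune GPT-2 on a small file of demonstrations
# (e.g. GPT-4-generated input/output pairs, one example per line).
# Assumes the Hugging Face `transformers` and `datasets` libraries;
# the path "demos.txt" is hypothetical.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Each line of demos.txt is one demonstration, e.g. "Q: ... A: ...".
dataset = load_dataset("text", data_files={"train": "demos.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-demos", num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # a few hours on one GPU, versus years of expert engineering
```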
I also want to note that:
1. Some trade-off between performance and transparency is acceptable, as long as it is not too large.
2. The system doesn’t have to be an expert system: the important thing is just that it’s transparent.
3. If it is impossible to create interpretable software for solving a particular task, then strong interpretability must also fail.