I wonder if the discussion of the scientific capabilities of e.g. GPT-3 would be more productive if it were anchored to some model of the wider scientific feedback loop in which the model is situated?
Consider three scenarios:
A: A model trained to predict the shapes of proteins from their DNA sequences.
B: A model with access to a custom molecule synthesizer and high-volume, high-quality feedback on the scientific value of its output, trained to write scientifically valuable papers about the nature of the molecules it synthesizes.
C: A model given the resources of a psychology department and trained to advance the state of psychological understanding (without simply hiring other humans).
As we go from A to C, the quality of the feedback loop decreases, and with it comes an increasing need for general rather than narrow intelligence. I would argue that even A should count as doing science, since it advances state-of-the-art knowledge about an important phenomenon, and current models are clearly capable of doing so. C is well beyond the capabilities of GPT-3, and indeed of many well-qualified, intelligent human scientists, precisely because the feedback loop is so poor. B is intermediate: I expect it is beyond GPT-3, but I'm not confident that current techniques couldn't provide value there.
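To make the difference in feedback-loop quality concrete, here is a minimal sketch of what scenario A's training signal amounts to, written as a toy supervised setup in PyTorch (the architecture and data here are hypothetical stand-ins, not any real system's design). The key property is that the feedback is a dense, immediate, ground-truth loss computed on every example, which is exactly what B and C lack:

```python
# Toy sketch of scenario A's feedback loop: a generic supervised
# sequence-to-structure setup. All names and shapes are illustrative
# assumptions, not a real protein-structure model.
import torch
import torch.nn as nn

class StructurePredictor(nn.Module):
    """Hypothetical stand-in for a sequence-to-structure model."""
    def __init__(self, vocab_size=21, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hidden, nhead=4,
                                       batch_first=True),
            num_layers=2)
        self.to_coords = nn.Linear(hidden, 3)  # one 3D coordinate per residue

    def forward(self, seq):
        return self.to_coords(self.encoder(self.embed(seq)))

model = StructurePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Hypothetical batch: integer-encoded sequences and known structures.
seq = torch.randint(0, 21, (8, 128))   # (batch, sequence length)
true_coords = torch.randn(8, 128, 3)   # experimentally solved shapes

opt.zero_grad()
pred = model(seq)
# The feedback loop in full: an exact, per-example error signal
# against ground truth, available at every training step.
loss = nn.functional.mse_loss(pred, true_coords)
loss.backward()
opt.step()
```

Nothing analogous to that loss line exists in C: there is no per-step, ground-truth measure of "advancing psychological understanding", which is what I mean by the feedback loop being poor.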
If you’re interested in taking the point further, perhaps one of you could specify the loosest scientific feedback loop that you think the current paradigm of AI is capable of meaningfully participating in?