I use ChatGPT daily as a starting point for investigating hypotheses to test at my biomedical engineering job. On certain problems, I can independently approach the level of understanding of an experienced chemist, although his education and his familiarity with our chemical systems let him arrive at the same result faster. This is a lived example of the phenomenon in which AI improves the performance of lower-tier performers more than higher-tier performers (I am a recent MS grad; he is a post-postdoc).
So far, I haven’t been able to get ChatGPT to independently troubleshoot effectively or propose improvements. This seems to be partly because it struggles profoundly to grasp and hang onto the specific details I have provided. It’s as if our specific issue gets mixed in with the more general problems it has encountered in its training. Or as if, whereas in the real world strong evidence is common, to ChatGPT what I tell it is only weak evidence. And if you can’t update strongly on evidence, you just can’t make progress in my research world.
The way I use it instead is to validate and build confidence in my conjectures, and as an incredibly sophisticated form of search. I can ask it how the very specific systems we use in our research, not covered in any one resource, likely work. I can ask it to explain how complex chemical interactions are likely behaving under specific buffer and heat conditions. Then I can ask how adjusting those parameters might affect the behavior of the system. An iterated process like this combines ChatGPT’s unlimited generalist knowledge with my detailed understanding of our specific system to produce a concrete, testable hypothesis that I can bring to work after a couple of hours. It feels like a natural, stimulating process. But you do have to be smart enough to steer it yourself.
One way you know you’re on the wrong track with ChatGPT is when it starts “advertising” to you, talking up the “valuable insights” you might gain from this or that method. And indeed, if you ask an expert whether the method in question really offers specific forms of insight relevant to your research question, you’ll often find that ChatGPT’s promise of “valuable insights” meant there weren’t actually any. “Valuable insights” means “I couldn’t think of anything specific,” which is not a promising sign.