“But of course, at the time, researchers didn’t have the crucial information ‘your field will soon become a dead-end where genuine AI is concerned.’”
But you found this project so informative that you wrote an article to make us aware of it. So it wasn’t exactly a “dead-end”: it didn’t achieve the lofty goal of producing AI, but it did produce results that you believe are useful in AI research. Negative results are still results.
Still “a dead-end where genuine AI is concerned”. And “you will become a useful example of cognitive bias for a future generation” is not exactly what the AI makers were aiming for, methinks...
It might not be what they were aiming for, but maybe scientists should be more willing to embrace this sort of result. It’s more glamorous to find a line of research that does work, but research projects that don’t work are still useful, and should still be valued. I don’t think it’s good for science for scientists to be denigrated for choosing lines of research that end up not working.
“I don’t think it’s good for science for scientists to be denigrated for choosing lines of research that end up not working.”
No, they shouldn’t. But they shouldn’t be blasé about their projects working or not working either. They should never regard “I will be used as an example of how not to do science” as a positive thing to aim for...
“…their projects working or not working either. They should never regard ‘I will be used as an example of how not to do science’”
These are completely different things. Most theories/projects/attempts in science fail. That is how science works and is NOT “an example of how not to do science”.