Philosophers love to make overly simplistic statements about what computers can’t do, even when they’re pro-tech. “Someday, we will have computers that can program themselves!” Meanwhile, a C program I wrote the other day wrote some JS, and I did not feel like it was worth a Nobel Prize.
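For context, that claim amounts to something like the sketch below — a made-up illustration (the output filename and the emitted snippet are invented, not the actual program), just to show that "code writing code" can be completely mundane:

    #include <stdio.h>

    /* Minimal sketch: a C program that "writes some JS".
     * The filename and the JavaScript it emits are purely illustrative. */
    int main(void) {
        FILE *out = fopen("generated.js", "w");  /* hypothetical output path */
        if (out == NULL) {
            perror("fopen");
            return 1;
        }

        /* Emit a trivial piece of JavaScript, one boring line at a time. */
        fprintf(out, "// generated by a C program\n");
        fprintf(out, "function greet(name) {\n");
        fprintf(out, "  console.log(`hello, ${name}`);\n");
        fprintf(out, "}\n");
        fprintf(out, "greet(\"world\");\n");

        fclose(out);
        return 0;
    }

Nothing in there is "programming itself" in any interesting sense; it's string output that happens to be valid JavaScript.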
I think they mean a computer that can translate imprecise requirements into precise programs the way a human can, not just code that outputs code. I do agree, though, that philosophers tend to underestimate what a computer can do in principle and to overestimate how wonderful and unique humans are.