I think the best definition of consciousness I’ve come across is Hofstadter’s, which is something like “when you are thinking, you can think about the fact that you’re thinking, and incorporate that into your conclusions. You can dive down the rabbit hole of meta-thinking as many times as you like.”
Philosophers love to make overly simplistic statements about what computers can’t do, even when they’re pro-tech. “Someday, we will have computers that can program themselves!” Meanwhile, a C program I wrote the other day wrote some JavaScript, and I did not feel it was worth a Nobel Prize.
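The trivial sense in which “computers can program themselves” is easy to demonstrate. A minimal sketch, in Python rather than the C-emitting-JS setup described above (`add` and `result` are made-up names for illustration):

```python
# A program that writes another program, then runs it.
# This is "code that outputs code" -- the weak sense of self-programming.
generated = "\n".join([
    "def add(a, b):",          # hypothetical generated function
    "    return a + b",
    "result = add(2, 3)",
])

namespace = {}
exec(generated, namespace)     # compile and execute the generated source
print(namespace["result"])     # prints 5
```

Nothing here translates imprecise requirements into a precise program; the generator already contains every decision the generated code embodies, which is exactly the distinction drawn in the reply below.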
I think they mean a computer that can translate imprecise requirements into precise programs in the same way that a human can, not just code that outputs code. I do agree, though, that philosophers tend to underestimate what a computer can theoretically do and overestimate how wonderful and unique humans are.
On the other hand, I don’t know any humans who know themselves quite as precisely as does an optimizing compiler which has just compiled its own source to machine code.
I can write programs that can do that.
I don’t think anyone can yet write a program that can reflect on itself in quite the same way a human can.