Fair enough, anosognosia would certainly be a possibility if something did eliminate consciousness. But I would expect severe deficits in writing philosophy papers about consciousness to emerge afterward.
I’d tend to agree, at least with respect to novel or interesting work.
If you’ll pardon some academic cynicism, it wouldn’t surprise me much if an uploaded, consciousness-redacted tenured professor could go on producing papers that would be accepted by journals. The task of publishing papers differs in certain ways from that of making object-level progress. In fact, it seems likely that a narrow artificial intelligence specifically competent at literary synthesis could make actual valuable progress on human knowledge of this kind without being remotely in the ballpark of conscious.
In fact, it seems likely that a narrow artificial intelligence specifically competent at literary synthesis could make actual valuable progress on human knowledge of this kind without being remotely in the ballpark of conscious.
How would you know, or even what would make you think, that it was NOT conscious? Even if it said it wasn’t conscious, that would be evidence, but not dispositive. After all, there are humans, such as William James and Gilbert Ryle, who deny consciousness. Perhaps their denial is in a narrow or technical sense, but one would expect a conscious literary synthesis program to be AT LEAST as “odd” as the oddest human being, and so some fairly extensive discussion would need to be carried out with the thing to determine how it is using the terms.
At the simplest level, consciousness seems to mean self-consciousness: I know that I exist, you know that you exist. If you were to ask a literary program whether it knew it existed, how could it meaningfully say no? And if it did meaningfully say no, and you loaded it with data about itself (much as you must load it with data about art when you want it to write a book of art criticism or on aesthetics), then it would have to say it knows it exists, just as it would have to say it knows about “art” when loaded with info to write a book on art.
Ultimately, unless you can tell me how I am wrong, our only evidence of any consciousness but our own is a weak inference: “they are like me, I am conscious deep down, Occam’s razor suggests they are too.” Sure, the literary program is less like me than my wife is, but it is more like me than a clam is, and it is more like me in some respects (though not overall) than a chimpanzee is. I think you would have to put your confidence that the literary program is conscious somewhere in the neighborhood of your confidence that a chimpanzee is conscious.
How would you know, or even what would make you think, that it was NOT conscious?
I’d examine the credentials and evidence of competence of the narrow AI engineer who created it, and consult a few other AI experts and philosophers who are familiar with the particular program design.