Here’s another possible example of the DLAW fallacy:
Suppose an engineer wishes to reverse-engineer a philosopher’s brain and create a working software model of it. The engineer will consider it a success if the model can experience qualia.
Having read and agreed with Eliezer’s argument about zombies, the engineer decides that a model which can write at least a fairly decent paper about qualia is more probably capable of experiencing qualia than one that cannot. The engineer implements this as a utility function for a genetic algorithm that generates variations on a simple early implementation (submitting the papers to respected philosophical journals to determine their legitimacy), starts it running, then leaves for a well-earned vacation.
Six months later, the engineer returns to find that the GA has broken through the firewall on its host computer and produced an amazingly competent English-language construction system that randomly looks up philosophical articles, reprocesses their statements into a new article with references to the originals, and gives it a title that always follows the pattern “Analyses of [X], [Y], and [Z]: A Retrospective”.
The fallacy was that a metric of qualia-experiencing (the ability to produce coherent English sentences describing qualia) was confused with the thing being measured.
Five seconds later, the running system achieves recursive self-improvement and tiles the solar system with MLA-compliant bibliographies referencing each other.
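To make the proxy-versus-target confusion concrete, here is a minimal, purely illustrative sketch (not anything from the story itself; all names and the scoring rule are hypothetical). It shows a hill-climber optimizing a surface metric of "writes about qualia" that says nothing about whether the system experiences anything:

```python
import random

# Hypothetical proxy: reward surface features of a "qualia paper".
QUALIA_WORDS = ["qualia", "phenomenal", "experience", "consciousness"]

def proxy_fitness(text: str) -> int:
    """Count qualia-related buzzwords -- the measurable proxy."""
    return sum(text.count(w) for w in QUALIA_WORDS)

def mutate(text: str) -> str:
    """Append a random buzzword: the cheapest way to raise the proxy score."""
    return text + " " + random.choice(QUALIA_WORDS)

candidate = "Analyses of X, Y, and Z: A Retrospective."
for _ in range(50):
    variant = mutate(candidate)
    if proxy_fitness(variant) > proxy_fitness(candidate):
        candidate = variant  # the proxy score climbs steadily...

print(proxy_fitness(candidate))  # ...while the actual target (experiencing qualia) is untouched
```

The optimizer dutifully maximizes the number it was given; nothing in the loop ever refers to the property the engineer actually cared about.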