You seem to fundamentally misunderstand computation, in ways similar to Searle. I can’t engage deeply, but recommend Scott Aaronson’s primer on computational complexity: https://www.scottaaronson.com/papers/philos.pdf
Yeah, maybe I misunderstood the part about the molecules of gas “representing” the states of a Turing machine, but my first reaction is that it’s not enough to declare that X represents Y; it must also be true that the functionality of X corresponds to the functionality of Y.
I can say that my sock is a calculator, that this thread represents the number 2, and that this thread represents the number 3, but unless it somehow actually calculates 2+3, such an analogy is useless.
Similarly, it is not enough to say that the position of a molecule represents a state of a Turing machine; there must also be a way in which the rules of the TM actually constrain the movement of the molecule.
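To make that concrete, here is a toy sketch (hypothetical names throughout, not anyone’s actual proposal): a declared mapping from physical states to numbers does nothing by itself; only a transition rule whose dynamics mirror the arithmetic does.

```python
# A "sock calculator": we *declare* that thread states represent numbers.
# The mapping alone computes nothing; only an actual transition rule does.

def declared_representation(thread_a, thread_b):
    """Merely labels physical states with numbers (the sock's 'semantics')."""
    return {"a": thread_a, "b": thread_b}

def sock_dynamics(state):
    """The sock's actual 'dynamics': nothing happens. No state encodes a+b."""
    return state  # socks do not add

def adder_dynamics(state):
    """A system whose dynamics DO mirror addition: its transition rule
    enforces the functional correspondence the mapping merely claims."""
    return {"a": state["a"], "b": state["b"], "sum": state["a"] + state["b"]}

state = declared_representation(2, 3)
print(sock_dynamics(state))   # {'a': 2, 'b': 3} -- still no 5 anywhere
print(adder_dynamics(state))  # {'a': 2, 'b': 3, 'sum': 5}
```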
Is this the passage you’re referring to that means I’m “fundamentally misunderstanding computation”?
suppose we actually wanted to use a waterfall to help us calculate chess moves. [...] I conjecture that, given any chess-playing algorithm A that accesses a “waterfall oracle” W, there is an equally-good chess-playing algorithm A′, with similar time and space requirements, that does not access W. If this conjecture holds, then it gives us a perfectly observer-independent way to formalize our intuition that the “semantics” of waterfalls have nothing to do with chess.
This boils down to the Chalmers response. He isn’t arguing that the waterfall couldn’t implement a single pass-through of a chess game, but that it couldn’t robustly play many different chess games. I discuss the Chalmers response in the appendix and explain why I think it doesn’t fix the issue.
Yes and no: it does not boil down to Chalmers’s argument (as Aaronson makes clear in the paragraph before the one you quote, where he cites the Chalmers argument!). The argument from complexity is about the nature and complexity of systems capable of playing chess, which is why I think you need to read the entire piece carefully and think about what it says.
But as a small rejoinder: if we’re talking about playing a single game, the entire argument is ridiculous; I can write the entire “algorithm” in a kilobyte of specific instructions. So it’s not that an algorithm must be capable of playing multiple counterfactual games to qualify, or that counterfactuals are required for moral weight; it’s that the argument hinges on a misunderstanding of how complex different classes of system need to be to do the things they do.
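To spell out the kilobyte point, here is a hypothetical sketch of such a single-game “algorithm”: a hard-coded move list that replays exactly one game (Scholar’s Mate, for concreteness) and can do nothing else.

```python
# A complete "chess-playing algorithm" for exactly one game: replay a
# hard-coded score. It fits in far less than a kilobyte, yet it "plays"
# that single game perfectly -- and can play nothing else.

SCHOLARS_MATE = ["e4", "e5", "Qh5", "Nc6", "Bc4", "Nf6", "Qxf7#"]

def single_game_player(half_move):
    """Return the scripted move; no search, no evaluation, no board."""
    return SCHOLARS_MATE[half_move]

game = [single_game_player(i) for i in range(len(SCHOLARS_MATE))]
print(game)  # the one and only game it can ever play
```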
PS. Apologies that the original response comes off as combative—I really think this discussion is important, and wanted to engage to correct an important point, but have very little time to do so at the moment!
As far as I can tell, Scott’s argument does not rule out the possibility that a waterfall could execute a single forward pass of a chess-playing algorithm, if you defined a sufficiently gerrymandered map between the waterfall’s states and logical states.
When he defines the waterfall as a potential oracle, implicit in that is that the oracle will respond correctly to different inputs—counterfactuals.
To view the waterfall’s potential oracleness as an intrinsic property of that system is to view counterfactual waterfalls as intrinsic as well.
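A toy sketch of this (hypothetical, just to illustrate the gerrymandering): record one run’s logical states, pair them post hoc with arbitrary “waterfall” states, and the resulting map replays that one trace while leaving every counterfactual input undefined.

```python
# Putnam-style gerrymandered "implementation": pair an arbitrary physical
# trace with the logical states of one chess run, AFTER both are known.

logical_trace = ["start", "1.e4", "1...e5", "2.Nf3"]  # one fixed run
physical_trace = [0.731, 0.118, 0.904, 0.412]         # arbitrary "waterfall" states

# The "implementation" map is nothing but a post-hoc lookup table.
gerrymandered_map = dict(zip(physical_trace, logical_trace))

# It replays the recorded run perfectly...
assert [gerrymandered_map[s] for s in physical_trace] == logical_trace

# ...but a state outside the recorded trace (a counterfactual input)
# has no image at all: the map encodes the answers, not the game.
assert 0.5 not in gerrymandered_map
print("replays one trace; counterfactuals undefined")
```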
as Aaronson makes clear in the paragraph before the one you quote, where he cites the Chalmers argument!
Different arguments aren’t always orthogonal; they are often partial reframings of the same generators. Maybe I was too clumsy when I said his argument boils down to the Chalmers response. What I really meant is that his argument is vulnerable to the same issue as the Chalmers response (counterfactuals are not intrinsic to the waterfall), which is why I don’t think it solves the problem.
if we’re talking about playing a single game, the entire argument is ridiculous; I can write the entire “algorithm” in a kilobyte of specific instructions.
I don’t understand what you’re trying to say here.
PS. Apologies that the original response comes off as combative
I can say that my sock is a calculator [...]

Yeah, something like that. See my response to Euan in the other reply to my post.
Thanks, I appreciate this :)
I’ve written my point more clearly here: https://www.lesswrong.com/posts/zxLbepy29tPg8qMnw/refuting-searle-s-wall-putnam-s-rock-and-johnson-s-popcorn