To be fair, even if what you’re referring to above is true (I don’t believe it is—lookup table compression is a thing), it’s an implementation detail. It doesn’t matter that a naive implementation might not fit in our current observable universe; it need merely be able to exist in some universe for the argument to hold.
And in a way, this is my core problem with Searle’s argument. I believe you can fully emulate a human both with sufficiently large lookup tables, and also with pretty small lookup tables combined with some table expansion/generation code running on an organic substrate. I don’t challenge the argument based on the technical feasibility of the table implementation. I challenge it on the basis that the author mistakenly believes the implementation of any given table (static lookup table versus algorithmic lookup) somehow determines consciousness.
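To make the static-versus-algorithmic distinction concrete, here is a minimal sketch (all names are illustrative, not from any real system): a fully materialized lookup table and a lazy one that generates entries on demand are behaviorally indistinguishable to anything querying them from the outside.

```python
def respond(prompt: str) -> str:
    # Stand-in for any deterministic response procedure.
    return f"reply:{hash(prompt) % 1000}"

class LazyTable:
    """A 'compressed' table: store nothing up front, compute on lookup."""
    def __init__(self, fn):
        self.fn = fn
        self.cache = {}  # grows only for prompts actually queried
    def __getitem__(self, prompt):
        if prompt not in self.cache:
            self.cache[prompt] = self.fn(prompt)
        return self.cache[prompt]

# A fully materialized table over some finite prompt space...
prompts = ["hello", "what is 2+2?", "goodbye"]
static_table = {p: respond(p) for p in prompts}

# ...and the lazy one give identical answers for every prompt,
# so no external test can tell which implementation is inside.
lazy_table = LazyTable(respond)
assert all(static_table[p] == lazy_table[p] for p in prompts)
```

The point of the sketch is only that the choice of implementation is invisible at the interface, which is why hinging consciousness on it seems arbitrary.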
While I agree with your argument against Searle, it does matter whether the construction is at all feasible: if it isn’t, then Searle’s argument has no real relation to AI today or in the future, and we can’t use it to argue against the hypothesis that they lack intelligence/consciousness.
To be clear, I agree with your argument. I just want to note that physical impossibilities are being used to argue that today’s AI aren’t intelligent or conscious.