In general, I expect these sorts of constraint removals to make problems trivial, with exceptions being problems where you have to arbitrarily maintain finite computational power. A big problem in philosophy is not realizing how much our intuitions rest on constraints of our own world that don’t have to hold when infinity is involved.
More generally, a lot of our intuitions involve exploiting constraints on the world at large, which means that when you remove those constraints, our intuitions become false.
I think Searle’s Chinese Room argument is flawed for similar reasons, and more generally the use of idealizations/thought experiments makes philosophers forget how unreliable their intuitions are when they consider the question (at least for non-moral and possibly non-identity cases, though my confidence is much shakier for the non-identity case specifically).