This does help with clarity.
So I can write down these formal symbol-manipulating algorithms that, to a naive onlooker, look like they will do things like keep to themselves and prove the Goldbach conjecture. We can talk about one question of fact: if we run such an algorithm on a Turing machine (made of math), would it in fact output a proof of the Goldbach conjecture? And then we can talk about a second question of fact, which seems to be equivalent unless you dispute some very fundamental claims: if we simulate that computation on a real computer, will it in fact output a proof of the Goldbach conjecture?
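(To make "such an algorithm" concrete, here is a minimal sketch; it is a toy bounded check rather than an actual prover of the conjecture, and the helper names is_prime and goldbach_witness and the bound of 1,000 are illustrative assumptions, not anything fixed by the argument. The point is just that its output is pinned down by its formal definition, so whatever we conclude about the abstract algorithm also predicts what a faithful simulation on a real computer prints.)

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True


def goldbach_witness(n: int):
    """Return a pair of primes summing to the even number n, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None


# Check the conjecture for every even number up to a small, arbitrary bound.
for n in range(4, 1001, 2):
    if goldbach_witness(n) is None:
        print(f"counterexample: {n}")
        break
else:
    print("Goldbach holds for all even numbers up to 1000")
```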
It seems like one could try to cut this sort of reasoning at three points, if you accept it so far: either it breaks down when the goals get complicated, when the reasoning gets hard, or when the algorithm's embedding in the environment is too complicated.
If you accept that these algorithms systematically do things that lead to their apparent “goals” being satisfied (so that we can predict outcomes using this sort of reasoning), then I don’t know what exactly you are arguing.