Lead to severe discounting of the ‘reasoning method’ that arrived at the 3^^^3 dust-specks > torture conclusion without ever coming across the exhaustion-of-states issue.
Even better, to my mind, is to think about the scenario from the ground up and form my own conclusions, rather than start with some intuitive judgment about someone else’s writeup about it and then update that judgment based on things they didn’t mention in that writeup.
If someone hasn’t got a visual cortex, they can’t see, even if they do an insane amount of deliberate reasoning.
It’s not clear to me that I correctly understand what you mean here, but given my current understanding, I disagree. All my visual cortex is doing is performing computations on the output of my eyes; if that’s seeing, then anything else that performs the same computations can see just as well.
The point is that the approach is flawed; one should always learn from mistakes. The issue here is in building an argument which is superficially logical—which conforms to the structure of something a logical, rational person might say, something you might give a logical character in a movie—but which is fundamentally a string of very shaky intuitions, correct only if nothing outside the argument interferes, rather than a chain of solid steps.
In theory. In practice it takes a ridiculous number of operations, and you can’t Chinese-room vision without a slowdown by a factor of billions. Decades for recognizing a single cat versus a fraction of a second, and that’s if you’ve got an algorithm for it.
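A rough back-of-envelope check of the “decades” claim. All the numbers here are illustrative assumptions, not measurements: suppose one recognition costs on the order of 10^9 primitive operations (a plausible order of magnitude for a vision model’s forward pass), and a person working the Chinese room by hand manages about one operation per second:

```python
# Back-of-envelope: "Chinese-rooming" a single act of visual recognition.
# Both constants below are assumptions for illustration, not measured values.
OPS_PER_RECOGNITION = 10**9   # assumed order-of-magnitude cost of one recognition
HAND_OPS_PER_SECOND = 1       # assumed pace of a human executing rules by hand

seconds = OPS_PER_RECOGNITION / HAND_OPS_PER_SECOND
years = seconds / (3600 * 24 * 365)
print(f"{years:.1f} years")   # roughly 31.7 years, i.e. decades per recognition
```

Against hardware doing the same 10^9 operations in a fraction of a second, that is a slowdown factor on the order of billions, matching the claim above.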
I disagree with pretty much all of this, as well as with most of what seem to be the ideas underlying it, and don’t see any straightforward way to achieve convergence rather than infinitely ramified divergence, so I suppose it’s best for me to drop the thread here.