but would the distinction … seem more tenable to you if I said “possible in principle to observe physical representations of” instead of “possible in principle to physically extract”?
Heh, I actually had a half-written response to this position, until I decided that something like the comment I did make would be more relevant. So, let’s port that over...
The answer to your question is: yes, as long as you can specify what observations of the system (and you may, of course, include any physically possible mode of entanglement) count as evidence that it has considered multiple alternatives.
This criterion, I think, is what AnnaSalamon should be focusing on: what does it mean for “alternative-consideration” to be embedded in a physical system? In such a limited world as chess, it’s easy to see the embedding. [Now begins what I hadn’t written before.] I think that’s a great example of what I’m wondering about: what is this possible class of intelligent algorithms that stands in contrast to CSAs? If there were a good chess computer that was not a CSA, what would it be doing instead?
You could imagine one, perhaps, that computes moves purely as a function of the current board configuration: if the bishop is here and more than three pawns are in between, move the knight there, and so on.
The first thing to notice is that for the program to actually be good, some other process would have to have found a lot of regularity in the search space and expressed it compactly. And to find that regularity, that process would have to interact with the search space. So such a good “insta-evaluator” implicitly contains the results of previous simulations.
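To make the contrast concrete, here is a minimal sketch (toy states and invented scores; nothing below is real chess, and all the names are my own): a CSA-style chooser that explicitly weighs every alternative, next to a pure state-to-move table distilled from it.

```python
# Toy sketch, not real chess: every state, move, and score here is invented
# purely for illustration.

STATES = ["opening", "midgame", "endgame"]
MOVES = ["advance pawn", "develop knight", "push king"]
SCORES = {
    ("opening", "advance pawn"): 3, ("opening", "develop knight"): 7, ("opening", "push king"): 1,
    ("midgame", "advance pawn"): 5, ("midgame", "develop knight"): 2, ("midgame", "push king"): 4,
    ("endgame", "advance pawn"): 6, ("endgame", "develop knight"): 1, ("endgame", "push king"): 9,
}

def csa_choose(state):
    """CSA-style choice: explicitly consider every alternative and keep the best."""
    return max(MOVES, key=lambda move: SCORES[(state, move)])

# "Distill" the CSA's behaviour into a lookup table ahead of time...
POLICY = {state: csa_choose(state) for state in STATES}

def insta_choose(state):
    """...so that at decision time no alternatives are weighed at all, yet the
    table implicitly contains the results of the earlier search."""
    return POLICY[state]

for s in STATES:
    assert insta_choose(s) == csa_choose(s)
    print(s, "->", insta_choose(s))
```

The point being only that insta_choose’s competence is entirely parasitic on csa_choose’s search having been run somewhere, sometime.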
Arguably, this, rather than(?) a CSA, is what humans (mostly) are. Throughout our evolutionary history, a self-replicating process iterated through a lot of experiences that told it what “does work” and “doesn’t work”. The way we exist today, just as in the chess case above, implicitly contains a compression of those previous evaluations of “does work” and “doesn’t work”, known as heuristics, which together guide our behavior.
Is a machine that acts purely this way, and without humans’ ability to consciously consider alternatives, what AnnaSalamon means by a non-CSA algorithm? Or would it include that too?