For the purposes of the argument I was making, “possible in principle to physically extract” is the same as “possible in principle to extract”. For, once you know the laws of physics, which supposedly you can learn from a pebble, you can physically extract data that is functionally equivalent to alternatives/utility assignments.
For example, our knowledge of thermodynamics and chemistry tells us that a chemical would go to a lower energy state (and perhaps release heat) if it could observe certain other chemicals (which we call “catalysts”). It is our knowledge of science that justifies saying that there is this lower energy state that it “has a tendency” to want to go to, which is an “alternative” lacking “couldness” in the same sense as the proposed CSAs.
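To make “functionally equivalent” concrete, here is a toy sketch (the states and energies below are invented for illustration, not real chemistry): given an energy for each state the system could end up in, we can mechanically read off something that plays the role of alternatives and of utilities assigned to them.

```python
# Toy illustration (invented states and numbers, not real chemistry): once we
# know the energy of each state the reaction could reach, we can mechanically
# extract something functionally equivalent to "alternatives" and "utility
# assignments" -- without the chemical representing any of this itself.

reachable_states = {      # hypothetical end states and their relative energies
    "reactants": 0.0,
    "product_A": -1.3,
    "product_B": -0.4,
}

# Treat lower energy as higher "utility": the extracted "choice" is simply the
# state the system tends toward.
predicted_outcome = min(reachable_states, key=reachable_states.get)
print(predicted_outcome)  # -> product_A
```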
Laying down rules for what counts as evidence that a body is considering alternatives is messier than AnnaSalamon thinks.
Laying down rules for what counts as evidence that a body is considering alternatives is mess[y]
Agreed. But I don’t think that means that it’s not possible to do so, or that there aren’t clear cases on either side of the line. My previous formulation probably wasn’t as clear as it should have been, but would the distinction seem more tenable to you if I said “possible in principle to observe physical representations of” instead of “possible in principle to physically extract”? I think the former better captures my intended meaning.
If there were a (potentially) observable physical process going on inside the pebble that contained representations of alternative paths available to it, and the utility assigned to them, then I think you could argue that the pebble is a CSA. But we have no evidence of that whatsoever. Those representations might exist in our minds once we decide to model the pebble in that way, but that isn’t the same thing at all.
On the other hand, we do seem to have such evidence for e.g. chess-playing computers, and (while claims about what neuroimaging studies have identified are frequently overstated) we also seem to be gathering it for the human brain.
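To sketch what such physical representations might look like in the chess case (a minimal invented example, not any particular engine’s code): the alternatives and the evaluations assigned to them exist as explicit data in the running process, which an observer could in principle read out.

```python
# Minimal sketch (toy moves and scores, not any real engine's code): in a
# CSA-style chess program, the alternatives and the utilities assigned to them
# exist as explicit, inspectable data in the running process -- the kind of
# physical representation an in-principle observer could read out.

def evaluate(move):
    """Stand-in evaluation; a real engine would score the resulting position."""
    toy_scores = {"e2e4": 0.30, "d2d4": 0.25, "a2a3": -0.10}
    return toy_scores[move]

def choose_move(candidate_moves):
    # This dict is the representation of "alternatives considered" and
    # "utility assigned to each" that the comment above is pointing at.
    evaluations = {move: evaluate(move) for move in candidate_moves}
    return max(evaluations, key=evaluations.get)

print(choose_move(["e2e4", "d2d4", "a2a3"]))  # -> e2e4
```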
but would the distinction … seem more tenable to you if I said “possible in principle to observe physical representations of” instead of “possible in principle to physically extract”?
Heh, I actually had a response to this position half written up, until I decided that something like the comment I did make would be more relevant. So, let’s port that over...
The answer to your question is: yes, as long as you can specify what observations of the system (and you may of course include any physically-possible mode of entanglement) count as evidence for it having considered multiple alternatives.
This criterion, I think, is what AnnaSalamon should be focusing on: what does it mean for “alternative-consideration” to be embedded in a physical system? In such a limited world as chess, it’s easy to see the embedding. [Now begins what I hadn’t written before.] I think that’s a great example of what I’m wondering about: what is this possible class of intelligent algorithms that stands in contrast to CSAs? If there were a good chess computer that was not a CSA, what would it be doing instead?
You could imagine one, perhaps, that computes moves purely as a function of the current board configuration. If bishop here, more than three pawns between here, move knight there, etc.
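As a toy sketch of that kind of player (the patterns below are invented, and nothing like a serious engine): the move falls straight out of pattern-matching on the current position, and no list of candidate moves or scores ever exists in the running process.

```python
# Toy sketch of a pure pattern-matching player: the move is computed directly
# from features of the current position, with no explicit alternatives and no
# evaluations anywhere at run time. All patterns here are invented.

def insta_move(position):
    if position["bishop_on_long_diagonal"] and position["center_pawns"] > 3:
        return "Nf5"
    if position["opponent_king_exposed"]:
        return "Qh5+"
    return "d4"

print(insta_move({"bishop_on_long_diagonal": True,
                  "center_pawns": 4,
                  "opponent_king_exposed": False}))  # -> Nf5
```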
The first thing to notice is that for the program to actually be good, some other process would have had to find a lot of regularity in the search space and express it compactly. And to find that regularity, that process would have had to interact with the space. So such a good “insta-evaluator” implicitly contains the results of previous simulations.
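One way to picture that (again a toy sketch, with invented positions and scores): run an ordinary alternative-considering searcher offline, record its answers, and compile them into a flat lookup; the lookup then carries the previous simulations implicitly without representing them.

```python
# Toy sketch of where such a pattern table could come from: some other process
# explicitly considers alternatives offline, and its answers are compiled into
# a flat lookup. The lookup "implicitly contains the result of previous
# simulations" without representing alternatives at play time.

SIMULATED_SCORES = {
    "opening":   {"e4": 0.30, "d4": 0.25, "c4": 0.20},
    "endgame_1": {"Ra8": 9.0, "Kd2": 0.0},
}

def searching_player(position):
    scores = SIMULATED_SCORES[position]   # explicit alternatives...
    return max(scores, key=scores.get)    # ...and explicit evaluation

# Offline phase: run the searcher once per position and record its answers.
COMPILED_POLICY = {pos: searching_player(pos) for pos in SIMULATED_SCORES}

def insta_player(position):
    # At "play time" there is no consideration of alternatives anywhere.
    return COMPILED_POLICY[position]

print(insta_player("endgame_1"))  # -> Ra8
```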
Arguably, this, rather than(?) a CSA, is what humans (mostly) are. Throughout our evolutionary history, a self-replicating process iterated through a lot of experiences that told it what “does work” and “doesn’t work”. The way we exist today, just as in the chess case above, implicitly contains a compression of those previous evaluations of “does work” and “doesn’t work”, known as heuristics, which together guide our behavior.
Is a machine that acts purely this way, and without humans’ ability to consciously consider alternatives, what AnnaSalamon means by a non-CSA algorithm? Or would it include that too?
Thanks for your reply.