To learn about chess by experimenting with Deep Blue, you must already know that it is a game-playing device, and something about how to engage it appropriately, such that your interaction with it will be an instance of the game. If you don’t know that, it is just a complicated finite-state object which responds to its boundary conditions in a certain way. And conversely, if you know that a brick wall allows some aspects of tennis game-play to be reproduced, and you know the appropriate form of interaction, then you will be able to infer a little about tennis. Not much, but something.
However, this is a side issue, compared to your avowed subjectivism about computation. You say:
the existence of a computation in a process is observer-dependent
I appreciate that your personal theory of consciousness is a work in progress (and you may want to examine Giulio Tononi’s theory, which I discovered simply by a combined search on “consciousness” and “mutual information”), so you may not have an answer to this question yet, just an intended answer, but—is the existence of an observer going to be observer-dependent as well?
If you don’t know that, it is just a complicated finite-state object which responds to its boundary conditions in a certain way.
So, if you don’t know that it is playing chess, but decide ‘hey, I want to maximise the amount of control I have over these little pieces here’ and then learn to play chess anyway, you are really learning ‘Zombie Chess’ and not actually Chess.
Depending on how you define ‘maximum amount of control’ you may find yourself playing for something other than checkmate, since the game ends for both of you in a mate. For example, if we define ‘amount of control of the board’ by the number of moves open to you, divided by the number of moves open to your opponent—or perhaps the sum of that quantity over all positions throughout the game—then you will be playing for a drawn position in which you have as much freedom to move as possible and your opponent has as little freedom to move as possible. This also assumes that you don’t control the pieces by directly manipulating the screen image, and that you don’t intervene in Deep Blue’s computational processes.
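The mobility-ratio utility just described can be sketched as a small function. This is only an illustration of the definition above, assuming the per-position move counts are supplied by some external move generator, which is not shown here.

```python
def control_utility(mobility_pairs):
    """Sum over all positions of (your legal moves) / (opponent's legal moves).

    mobility_pairs: a list of (my_moves, opp_moves) counts, one pair per
    position reached during the game. A position where the opponent has
    no legal reply at all is scored as infinite control.
    """
    total = 0.0
    for mine, theirs in mobility_pairs:
        if theirs == 0:
            return float("inf")  # opponent cannot move: maximal control
        total += mine / theirs
    return total

# A game where you always had three times the mobility of your opponent
# scores higher than one where mobility was always equal.
```

Note that, as the text says, an agent maximising this quantity would steer toward cramped drawn positions rather than checkmate, since mate ends the game for both sides.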
The game that you learnt while interacting with Deep Blue would depend on the utility function you brought to the experience, and on the range of interactions you permitted yourself. Of course there is a relationship between the game of chess and the state transition diagram for Deep Blue, but you cannot infer the former from the latter alone.
Of course there is a relationship between the game of chess and the state transition diagram for Deep Blue, but you cannot infer the former from the latter alone.
You’re right, you can’t. Now, assume I do in fact infer and adopt a utility function that happens to be that of chess. This is not an unrealistic assumption: the guy has a crown, and the game ends. In that case, is ‘Chess’ in the room, even though there’s just this silicon-powered thing and me, who has decided to fiddle with it? Were I to grant that you can’t make Blue out of non-Blue, I would assume I also couldn’t make Chess out of Deep Blue.
Were I to grant that you can’t make Blue out of non-Blue, I would assume I also couldn’t make Chess out of Deep Blue.
It’s a bit different because (from my perspective) the issue here is intentionality rather than qualia. You can’t turn something blue just by calling it blue. But you can make something part of a game by using it in the game. It has to be the right sort of thing to play the intended role, so its intrinsic properties do matter, but they only provide a necessary and not a sufficient condition. The other necessary condition is that it is being interpreted as playing the role, and so here we get back to the role of consciousness. If a copy of Deep Blue popped into being like a Boltzmann Brain and started playing itself in the intergalactic void, that really would be an instance of “zombie chess”.
I’m not talking about parts. I’m talking about the game Chess itself (or an instance thereof).
We will have to return to definitions then. Can you have a game without players? Can you have a player without intentions? It is like arguing whether the Face on Mars is really a face. It is not the product of intention, but it does indeed look like a face. Is looking like a face enough for it to be a face? Deep Blue “plays chess” if you define chess as occurring whenever there is conformance to certain appearances. But if chess requires the presence of a mind possessing certain minimal concepts and intentions, then Deep Blue in itself does not play chess.
Given the assumption that the computer is optimizing something, and given the awareness of the possibility of a game, you can infer essentially the whole of chess from the program. Chess consists of three things: the board and pieces, the movement rules, and the winning criterion. The first two can be read off directly from the states and transitions you observe; the winning criterion is the non-trivial part. Observing the game, you will find that the computer steers the chessboard into different final regions depending on whether it moves the black pieces or the white pieces, and this will tell you the criterion the computer uses to optimize its position. These observations will tell you that checkmate favors the party moving last, and that draws are preferred to being checkmated but not to checkmating.
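The inference step in the paragraph above can be sketched as revealed preference over terminal outcomes: whenever the computer could steer toward several final regions and chose one, record that choice as a preference. The outcome labels below are purely illustrative, not part of any real chess engine’s interface.

```python
def infer_preferences(choices):
    """Infer 'a is preferred to b' pairs from observed choices.

    choices: a list of (chosen_outcome, available_outcomes) pairs, each
    recording which terminal outcome the engine steered toward when
    several were reachable. Returns the set of inferred preference pairs.
    """
    prefs = set()
    for chosen, available in choices:
        for other in available:
            if other != chosen:
                prefs.add((chosen, other))
    return prefs

# Hypothetical observations of the engine's play:
observations = [
    ("checkmate-opponent", ["checkmate-opponent", "draw"]),
    ("draw", ["draw", "get-checkmated"]),
]
# infer_preferences(observations) yields
# {("checkmate-opponent", "draw"), ("draw", "get-checkmated")}
```

This recovers exactly the ordering the text describes: checkmating is preferred to drawing, and drawing to being checkmated.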