In scenarios like 2, you should give information other than black-box access to a running copy. There should be a way to abstractly model the other player. Falling back on something like Solomonoff induction is the last resort.
What do you mean by “abstractly model”?
Systematic construction of formal systems that stand in a certain correspondence with the system under study, such that studying the properties of these formal systems may allow you to prove properties of the system under study. This describes any nontrivial formal analysis method, including the construction of Bayesian networks.
From programming languages/computer science, see e.g. abstract interpretation and bisimulation, basically things that hang on a Galois connection.
Returning to your example, you may want to give, if not the whole source code, then at least a good model of the source code, or some kind of abstraction that is able to answer requests about the formal properties of the code. Having to actually run the fixed source code in black-box mode is much too harsh.
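A minimal sketch of the kind of correspondence meant here (a toy sign abstraction of integers; the domain, names, and operations are purely illustrative):

```python
# Toy sign abstraction: the abstract values NEG, ZERO, POS, TOP stand in for
# sets of concrete integers, with alpha as the abstraction map of a Galois
# connection and abstract_mul as a sound abstract counterpart of multiplication.

NEG, ZERO, POS, TOP = "neg", "zero", "pos", "top"

def alpha(ns):
    """Abstraction: map a set of integers to the most precise sign covering it."""
    signs = {NEG if n < 0 else ZERO if n == 0 else POS for n in ns}
    return signs.pop() if len(signs) == 1 else TOP

def abstract_mul(a, b):
    """Sound abstract multiplication: its result covers every concrete product."""
    if ZERO in (a, b):
        return ZERO
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG

# A property proved in the formal system that transfers to the concrete system:
# any negative times any positive is negative, with no integers enumerated.
assert abstract_mul(alpha({-3, -7}), alpha({2, 5})) == NEG
```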
Thanks for the link to Galois connection. But why did you pick scenario 2? Can you give an example of how such techniques would be useful in scenario 1?
You are making the situation worse in scenario 2; at least in scenario 1 you give the source code. But yes, in general you’d want to give not just source code, but also a way of proving from that source code that it has certain properties (otherwise you are faced with halting problem/Rice’s theorem issues, and even more practical problems that halt you far short of those). It’s easy for the explicit situation you describe in scenario 1, but worse in other cases.
In some cases you don’t really need the intermediary of explicit source code; you may just directly communicate a relatively small formal model that allows proving the required properties. This is useful for formally proving properties of systems whose formal models can’t be constructed automatically, but can more or less reliably be constructed manually. See, for example, model checking (although the Wikipedia article is too short to reflect this aspect).
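A minimal sketch of what directly communicating such a small model might look like: an explicit-state check of a safety property over a hand-written transition system (the two-flag “protocol”, the property, and all names are invented for illustration):

```python
# Explicit-state model checking in miniature: exhaustively explore a small,
# hand-written transition system and check a safety property in every
# reachable state, instead of ever running the real implementation.

from collections import deque

INITIAL = (0, 0)  # (my_flag, their_flag)

def successors(state):
    a, b = state
    if b == 0:
        yield (1, b)   # I may raise my flag only while theirs is down
    if a == 0:
        yield (a, 1)   # symmetrically for the other side
    yield (0, b)       # either side may always lower its own flag
    yield (a, 0)

def safe(state):
    return state != (1, 1)  # required property: the flags are never both up

def check(initial, successors, safe):
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not safe(state):
            return state  # counterexample found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None  # the property holds in every reachable state

assert check(INITIAL, successors, safe) is None  # property proved for the model
```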
In general, you are interested in some aspect of the runtime semantics, not of the source code per se; as one commenter noted, what the program does also depends on how it’s compiled and interpreted, what hardware it runs on, and what that hardware does as a result. But you are also not interested in most of that runtime semantics, only in the aspects you care about. Code in a programming language usually (at least nominally) comes with a formal semantics that lets you construct an overdetailed formal model, from which you can then try to abstract away the details you don’t care about, in order to answer questions like “does this program ever crash at runtime?”
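A toy sketch of that abstraction step, assuming (purely for illustration) that the “program” is a small expression tree and the only crash we ask about is division by zero:

```python
# Answer "can this program crash?" by evaluating a model of it (an expression
# tree) in an abstract domain that keeps only the detail we care about:
# whether a value might be zero when it is used as a divisor.

MAYBE_ZERO, NONZERO = "maybe-zero", "nonzero"

def abstract_eval(expr, env):
    """expr is ('const', n) | ('var', name) | ('add', e1, e2) | ('div', e1, e2)."""
    tag = expr[0]
    if tag == "const":
        return NONZERO if expr[1] != 0 else MAYBE_ZERO
    if tag == "var":
        return env.get(expr[1], MAYBE_ZERO)  # unknown inputs might be zero
    left = abstract_eval(expr[1], env)   # evaluated so nested divisions are checked too
    right = abstract_eval(expr[2], env)
    if tag == "div" and right == MAYBE_ZERO:
        raise ValueError("possible division by zero")  # the crash we asked about
    return MAYBE_ZERO  # stay sound, lose precision: any sum or quotient might be zero

# 1 / 2 is accepted; 1 / (x + 1) is flagged, because x is unknown and x + 1
# might therefore be zero (a sound but imprecise answer).
abstract_eval(("div", ("const", 1), ("const", 2)), {})
try:
    abstract_eval(("div", ("const", 1), ("add", ("var", "x"), ("const", 1))), {})
except ValueError as err:
    print(err)
```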
I admit I’m in a fog. Intuitively, abstractly interpreting an opponent that’s also an abstract interpreter should only give you information about its unconditional behavior, not its behavior against a concrete you. Do you have a simple specific example of what you’re describing—a mathematical game where both players can analyze certain properties of each other?
You’ve given an example yourself, in scenario 1. What happens can be modeled by analyzing programs for the presence of a property X: a minimal predicate such that it holds for a given program foo if, for each arg that has property X, foo(arg) returns “Cooperate”. Then you can use your knowledge that f1 and f2 both have property X to prove that f1(f2) returns “Cooperate”. X doesn’t need to be as simple or explicit as the fixed prefix; in general you can assign an arbitrary algorithm for computing such an X (with some amount of trouble resulting from that). Did you mean an example more detailed than this, more formal than this, or does this answer succeed in connecting the terminology?
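For concreteness, an illustrative stand-in for the scenario 1 construction (the prefix string and helper names below are made up, not the original PREFIX program):

```python
# Cooperate iff the opponent's source carries the agreed-upon prefix, otherwise
# defect. This is the property-X situation: each player returns "Cooperate" on
# any opponent whose source starts with PREFIX, and each carries PREFIX itself.

PREFIX = "# COOPERATE-IF-MIRRORED v1\n"  # placeholder value

def make_player():
    source = PREFIX + "...rest of the player; never inspected by the test..."
    def player(opponent_source):
        return "Cooperate" if opponent_source.startswith(PREFIX) else "Defect"
    return source, player

# Knowing that both sources carry PREFIX is enough to conclude mutual cooperation.
src1, f1 = make_player()
src2, f2 = make_player()
assert f1(src2) == "Cooperate" and f2(src1) == "Cooperate"
```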
None of the above! In scenario 1, PREFIX isn’t part of the problem statement; it is part of the solution. The problem statement said only that programs can read each other’s source code. Unless I’m misunderstanding your initial comment in this thread, you want to invent a Scenario 3 with rules distinct from 1 or 2: programs are given each other’s source code and/or Something Else. Well, what is that Something Else? What are the mathematical rules of the game for your proposed Scenario 3? Or are you proposing some new player algorithms for Scenario 1 or Scenario 2 instead of making up a new game? If so, what are they? Please answer simply and concretely.
In these terms, I’m describing a solution to scenario 1, and its limitations; that is, it can’t solve all instances of scenario 1 (naturally, at least because of the halting problem). The Something Else is the limitations: for example, your solution to scenario 1 requires Something Else in the form of a fixed prefix. That solution is not terribly general, as it won’t accept even equivalent programs that include an additional comment in the body of the prefix.
Analyzing a formal model of the program gives more generality, and stronger methods of analysis give more generality still. If you are the program for which these properties are introduced, you’d also want to pick a property saying that you achieve your goals, or get as high in your preference order as you can.
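A sketch of one small step in that direction, assuming (purely for illustration) that the programs are Python source: compare prefixes after a normalization pass, so that trivially equivalent programs, such as ones differing only by a comment, are still accepted:

```python
# Instead of demanding a byte-identical prefix, compare programs after
# stripping comments and incidental layout, so sources that differ only in
# such details still fall into the same equivalence class.

import io
import tokenize

def normalize(source):
    """Drop comments, newlines and indentation tokens; keep the token stream."""
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    skip = (tokenize.COMMENT, tokenize.NL, tokenize.NEWLINE,
            tokenize.INDENT, tokenize.DEDENT, tokenize.ENDMARKER)
    return " ".join(tok.string for tok in tokens if tok.type not in skip)

a = "def act(opp):\n    return 'Cooperate'\n"
b = "def act(opp):  # an extra comment changes the bytes, not the behavior\n    return 'Cooperate'\n"
assert a != b and normalize(a) == normalize(b)
```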
Ah—you want to generalize the PREFIX-test, expand the equivalence class. Sorry for being obtuse in my replies—I was giving you the benefit of the doubt, because generality is so obviously, totally useless in Scenario 1. Each program author wanting to utilize the PREFIX method of cooperation has quite enough incentive to coordinate on a common value of PREFIX with other such authors. If you prohibit vocal coordination beforehand, they will use the Schelling focal point: the lexicographically smallest PREFIX possible in a given programming language.
You may consider a scenario 3, where a trusted algorithm checks that your PREFIX is present in your opponent, and only tells you that fact, without giving you its source code or a simulator black box. But anyway, this little example is not a Totally General Solution to All Similar Problems, and so generalization is not useless. It might be useless in a given formal problem that is already solved by other means, but not in other cases.
For example, recall my previous remarks about how the PD turns into an Ultimatum game once you allow mixed strategies (that is, any form of randomness in the programs), in which case pure ‘Cooperate’ becomes suboptimal.
Then just change PREFIX to play the Pareto-optimal symmetric mixed strategy instead of cooperating.
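A sketch of that change (the prefix string and the 0.9/0.1 mixture are placeholders; which mixture is actually Pareto-optimal depends on the payoff matrix):

```python
# Same prefix test as before, but on a match the player samples from an agreed
# symmetric mixed strategy instead of returning pure "Cooperate".

import random

PREFIX = "# COOPERATE-IF-MIRRORED v2\n"          # placeholder value
MIXTURE = {"Cooperate": 0.9, "Defect": 0.1}      # placeholder weights

def player(opponent_source, rng=random):
    if not opponent_source.startswith(PREFIX):
        return "Defect"
    moves, weights = zip(*MIXTURE.items())
    return rng.choices(moves, weights=weights)[0]
```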
And if you do that properly, taking into account other issues like reflective consistency, you won’t need the other part of your algorithm at all, since picking fair Pareto-optimal strategies will also let you deal correctly with defectors.
I’m still perplexed by the fairness part (and the Ultimatum game in particular), though. One solution from symmetry seems obvious (not ‘cut into equal parts’, since what an ‘equal part’ means also depends on the preference order of each agent), but I don’t know in what sense it’s universally right, and if it isn’t, what should be done.