Robin, until we solve this problem (and I do agree that you’ve identified a problem that needs to be solved), is there anything wrong with taking the decomposition of an agent into program and data as an external input to the decision theory, much like how priors and utility functions are external inputs to evidential decision theory, and causal relationships are an additional input to causal decision theory?
It seems that in most decision problems there are intuitively obvious decompositions, even if we can’t yet formalize the criteria that we use to do this, so this doesn’t seem to pose a practical problem as far as using TDT/UDT to make everyday decisions. Do you have an example where the decomposition is not intuitively obvious?
It seems that in most decision problems there are intuitively obvious decompositions, even if we can’t yet formalize the criteria that we use to do this
I propose the following formalization. The “program” is everything that we can control fully and hold constant between all situations given in the problem. The “data” is everything else.
Which things we want to hold constant and which things vary depend on the problem we’re considering. In ordinary game theory, the program is a complete strategy, which we assume is memorized before the game begins and followed perfectly, and the data is the set of observations made between the start of the game and some decision point within it. Problems may force us to move things that are normally part of the program into the data, by taking them out of our control. For example, when reasoning about how a company should act in relation to a market, we treat everything that decides what the company does as a black-box program, and the observations it makes of the market as its input data. If internal politics matter, then we have to narrow the black-boxing boundary to only ourselves. If we’re worried about akrasia or mind control, then we draw the boundary inside our own mind.
Whether something is “program” or “data” is not a property of the object itself, but rather of how we reason about it. If it can be fully modeled as a black-box function, then it’s part of the program; otherwise it’s data.
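To make the black-box picture concrete, here is a minimal sketch in Python. The names (`Strategy`, `tit_for_tat`, `opponent_moves`) are hypothetical illustrations, not anything from the original discussion: the program is a fixed function from observations to actions, chosen once and held constant, while the data is whatever the environment supplies.

```python
from typing import Callable

# The "program": a complete strategy, fixed before the game starts.
# It is the part we fully control and hold constant across the
# situations the problem asks us to compare.
Strategy = Callable[[list[str]], str]

def tit_for_tat(observations: list[str]) -> str:
    """Cooperate first, then copy the opponent's most recent move."""
    if not observations:
        return "cooperate"
    return observations[-1]

# The "data": observations made between the start of the game and the
# decision point. We do not control these; they vary across situations.
opponent_moves = ["cooperate", "defect"]

# Running the fixed program on the varying data yields the decision.
strategy: Strategy = tit_for_tat
print(strategy(opponent_moves))  # -> "defect"
```

Redrawing the boundary (say, because akrasia matters) just means treating less of the above as a fixed function and feeding more of it in as input data.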
If functional programming and LISP have taught me anything, it is that all “programs” are “data”. The boundary between data and code is blurry, to say the least. We are all instances of “data” executed on the machine known as the “Universe”. (I think this kind of Cartesian duality will lead to other dualities, and I don’t think we need “soul” and “body” mixed into this discussion.)
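A small Python sketch of this point (LISP’s quote would be the more traditional illustration; the `greet` function here is just a made-up example): whether the same object counts as “program” or “data” depends entirely on what we are doing with it at the moment.

```python
import inspect

def greet(name: str) -> str:
    return f"Hello, {name}!"

# Viewed as program: we run it.
print(greet("world"))              # -> Hello, world!

# Viewed as data: we store it, pass it around, and inspect its own text.
handlers = {"greeting": greet}     # a function sitting in a dict like any other value
source = inspect.getsource(greet)  # the function's source code as a plain string
print(source)                      # prints the def itself
```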
The decomposition rarely seems intuitively obvious to me. For example, what part of me is program vs. data? And are there any constraints on acceptable decompositions? Is it really all right to act as if you were controlling the actions of all physical objects, for example?
I wonder if it would help to bracket the uncertain area with less ambiguous cases; that might lead to a better articulation of the implicit criteria by which people distinguish program from data.
On one side, I propose that if the behavior you’re talking about would also be exhibited by a crash dummy substituted for your body, then it’s data and not program. For example, if someone pushes me off a cliff, it’s not my suicidal “program” that accelerates me downwards at 32 ft/s², but the underlying “data.”
On the other side, if you write down a plan beforehand and actually locomote (e.g. on muscle power) to enact the plan, then it is program.
Are these reasonable outer bounds to our uncertainty? If not, why? If so, can we narrow them further?