And we now have a prototype of the notion of preference: a fixed program that computes utility.
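Concretely, a toy version might look like this (a minimal sketch; the names world and agent and the payoff values are placeholders, not anything fixed by the discussion):

```python
# A minimal sketch of "a fixed program that computes utility".
# The names world/agent and the payoffs are illustrative assumptions:
# world() takes no parameters, embeds a call to agent(), and
# returns a single utility value.

def agent():
    # The agent's decision procedure, stubbed here as a fixed choice.
    return 1

def world():
    # Toy payoff: the utility depends only on what agent() returns.
    return 10 if agent() == 1 else 0

print(world())  # prints 10 -- the constant value this program defines
```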
One possible problem: There is a difference between a program and the function it computes. The notion of preference is perhaps captured best by a function, not a program. Many programs could be written, all computing the same function. This wouldn't matter much if we were only going to call or invoke the program, since all versions would compute the same result. But this is not what cousin_it suggests here. He wants us to examine the source of the program, to prove theorems about the program. Given finite resources, the things we can prove about one program may not match the things we can prove about another, even if the two programs would compute the same result whenever agent() returned the same result.
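For instance (a hypothetical illustration with made-up names and payoffs), these two programs agree on their output but present very different source code to a bounded prover:

```python
# Two world programs that agree on their output whenever agent()
# returns the same (non-negative integer) value, yet have different
# source code.  With bounded resources, proving "agent() == 1 implies
# world_a() == 10" from the source of world_a is immediate, while the
# corresponding fact about world_b requires reasoning about the loop.

def agent():
    return 1

def world_a():
    return 10 if agent() == 1 else 0

def world_b():
    total = 0
    for _ in range(agent()):   # arithmetic detour to the same payoff
        total += 10
    return total if total == 10 else 0

assert world_a() == world_b() == 10
```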
There is a difference between a program and the function it computes. The notion of preference is perhaps captured best by a function, not a program.
No, a program. What the program defines is a single constant value, not a function (remember: if we are talking about a program, it’s a constant program, taking no parameters!). And of course it matters how that constant is defined, and not at all what that constant is. More generally, we can define that constant not by a program, but by a logical theory, which will be the topic of my next post.
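For a taste of what that could look like (just a guess at the flavor, not the formalization in the next post), the constant $U$ might be pinned down by a small set of axioms rather than computed by a program:

```latex
% Hypothetical theory pinning down the utility constant U by axioms
% rather than by a program (illustrative only):
\[
  T \;=\; \{\; \mathrm{agent}() = 1 \;\rightarrow\; U = 10,\quad
               \mathrm{agent}() \neq 1 \;\rightarrow\; U = 0 \;\}
\]
```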
By a “logical theory” do you mean what logicians usually mean by a “theory”? A deductively closed set of sentences?
Wow! Well, that eliminates a lot of the arbitrary character I was objecting to in using programs to represent the world/decision problem. But there still are a lot of deductive systems to choose among. I await your next post with interest.
I won’t settle the choice, only point out the generality of the notion, how it applies, and the direction to look for further refinements.