I think this ascription is meant to be pretty informal and general. So you could say, for example, that quicksort believes that 5 is less than 6.
I don’t think there’s meant to be any presumption that the inner workings of the algorithm are anything like a mind. That’s my read of section I.2, “Ascribing beliefs to arbitrary computations.”
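To make the quicksort example concrete, here is a toy sketch of my own (not anything from the post): one very literal ascription procedure reads a belief like “5 is less than 6” off the comparisons quicksort actually performs.

```python
# Toy sketch: ascribe to quicksort the belief "a < b" (or its negation)
# for every comparison it actually performs while sorting.

def quicksort(xs, beliefs):
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    less, geq = [], []
    for x in rest:
        if x < pivot:
            beliefs.add(f"{x} < {pivot}")
            less.append(x)
        else:
            beliefs.add(f"not ({x} < {pivot})")
            geq.append(x)
    return quicksort(less, beliefs) + [pivot] + quicksort(geq, beliefs)

beliefs = set()
quicksort([6, 5, 7], beliefs)
print(beliefs)  # contains "5 < 6", among others
```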
The examples in the post are a chess-playing algorithm, image classification, and (more fleshed out) deduction, physics modeling, and the SDP solver.
The deduction case is probably the simplest: our system is manipulating a bunch of explicitly represented facts according to the normal rules of logic, and we ascribe beliefs in the obvious way (i.e., if it deduces X, we say it believes X).
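As a minimal sketch of that “obvious” ascription (my own toy code, with made-up rule encodings):

```python
# Forward-chaining deduction over Horn-style rules. The obvious ascription:
# the system believes X iff X is among the facts it has deduced.

def deduce(facts, rules):
    """rules: list of (premises, conclusion) pairs; returns the deductive closure."""
    believed = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in believed and all(p in believed for p in premises):
                believed.add(conclusion)
                changed = True
    return believed

rules = [({"P", "Q"}, "R"), ({"R"}, "S")]
print(deduce({"P", "Q"}, rules))  # {'P', 'Q', 'R', 'S'}: it "believes" R and S
```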
Those are all pretty opaque, in the sense that their inner workings are not immediately obvious, so it’s natural to take the intentional stance toward them. I had in mind something much simpler. For example, does an algorithm that adds two numbers have a belief about the rules of addition? Does a GIF to JPEG converter have a belief about which image format is “better”?
does an algorithm that adds two numbers have a belief about the rules of addition? Does a GIF to JPEG converter have a belief about which image format is “better”?
I’m not assuming any fact of the matter about what beliefs a system has. I’m quantifying over all “reasonable” ways of ascribing beliefs. So the only question is which ascription procedures are reasonable.
I think the most natural definition is to allow an ascription procedure to ascribe arbitrary fixed beliefs. That is, we can say that an addition algorithm has beliefs about the rules of addition, or about what kinds of operations will please God, or about what kinds of triples of numbers are aesthetically appealing, or whatever you like.
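Concretely (a hypothetical sketch of my own; none of these names come from the post), an ascription procedure is roughly a map from a computation, or its trace, to a set of beliefs, and it is trivial to wrap any such map so that it also ascribes arbitrary fixed beliefs:

```python
# An ascription procedure, schematically: trace of a computation -> set of beliefs.

def behavioral_ascription(trace):
    """Read beliefs off an addition algorithm's actual inputs and outputs."""
    return {f"{a} + {b} = {c}" for (a, b, c) in trace}

def with_fixed_beliefs(ascribe, fixed):
    """Wrap any ascription procedure to also ascribe arbitrary fixed beliefs."""
    return lambda trace: ascribe(trace) | fixed

trace = [(2, 3, 5), (1, 1, 2)]  # (input, input, output) triples from some adder
pious = with_fixed_beliefs(
    behavioral_ascription,
    {"addition is commutative", "this operation pleases God"},
)
print(pious(trace))
```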
Universality requires dominating the beliefs produced by any reasonable ascription procedure, and adding particular arbitrary beliefs doesn’t make an ascription procedure harder to dominate (so it doesn’t really matter whether we count the procedures in the last paragraph as reasonable). The only thing that makes it hard to dominate C is the fact that C can do actual work that causes its beliefs to be accurate.
their inner workings are not immediately obvious
OK, then consider a theorem prover that randomly searches over proofs?
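For concreteness, here is a toy version of what I have in mind (my own sketch, not from the post): the prover’s inner workings are completely transparent, yet it ends up “believing” whatever it happens to prove.

```python
import random

# Toy theorem prover that searches randomly: repeatedly pick a rule at random
# and apply it if its premises have already been derived.

def random_prover(axioms, rules, steps=1000, seed=0):
    rng = random.Random(seed)
    proved = set(axioms)
    for _ in range(steps):
        premises, conclusion = rng.choice(rules)
        if all(p in proved for p in premises):
            proved.add(conclusion)
    return proved  # one natural ascription: the prover believes these

rules = [({"A"}, "B"), ({"A", "B"}, "C")]
print(random_prover({"A"}, rules))  # very likely {'A', 'B', 'C'}
```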
Can you describe a simple non-opaque algorithm to which you can meaningfully ascribe a belief?
Was this meant to read, “The only thing that makes it hard to dominate C …”, or something like that? I don’t quite understand the meaning as written.
Yes, thanks.