A quine only prints the source code of a program, not e.g. the state of the machine’s registers, the contents of main memory, or various electric voltages within the system. It’s only a very limited representation.
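For concreteness, here is a classic two-line Python quine; comment aside, its entire output is its own source text, and it says nothing about registers or memory:

```python
# The string templates the whole program; %r makes it quote itself.
s = 's = %r\nprint(s %% s)'
print(s % s)
```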
This seems like a fully general counter-argument against any self-representation: there is always a level you have to stop at, otherwise even modeling quarks and leptons is not good enough. As long as this level is well understood and well described, what’s the benefit of digging further?
Sure, but the point is that there are plenty of questions about oneself that aren’t necessarily answerable with only the source code. If I want to know “why am I in a bad mood this morning”, I can’t answer that simply by examining my genome. Admittedly that’s a weak example, since the genome isn’t really analogous to a computer program’s source code, so let me try another: if you want to know why a specific neural net failed at some specific discrimination task, it’s probably not enough to look at the code that defines how the abstract neurons behave and what the learning rules are; you also need to examine the actual network, held in memory, that those rules produced.
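A toy sketch of that distinction (hypothetical names throughout): the function below is the “source code” level, fully specifying the learning rule, yet the reason any particular trained net misbehaves lives in the weights the rule produced at run time, not in this text.

```python
import random

def train_perceptron(data, epochs=10, lr=0.1, seed=0):
    """The learning rule: reading this tells you nothing about any one trained net."""
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1) for _ in range(len(data[0][0]))]
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            w = [wi + lr * (y - pred) * xi for wi, xi in zip(w, x)]
    return w

# To explain a specific failure you must inspect the state the rule produced:
weights = train_perceptron([([1, 1], 1), ([1, -1], 0), ([-1, 1], 0)])
print(weights)  # the "why" of a misclassification is in these numbers
```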
Of course you might be able to take a snapshot of your internal state and then examine that, but that’s something quite different from doing a quine. And it means you’re not actually examining your current self, but rather a past version of it, which probably wouldn’t matter for most purposes but might matter for some.
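A sketch of that snapshot caveat, using a hypothetical Agent class: by the time you examine the copy, the live object may already have changed, so what you are studying is a past self.

```python
import copy

class Agent:
    def __init__(self):
        self.mood = "fine"

    def snapshot(self):
        # A frozen copy of the current internal state, not the state itself.
        return copy.deepcopy(self.__dict__)

agent = Agent()
snap = agent.snapshot()
agent.mood = "bad"               # the live self moves on...
print(snap["mood"], agent.mood)  # ...the snapshot stays in the past: fine bad
```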
This may well be a valid point in general, depending on the algorithm, but I am not sure that applies to a quine, which can only ask (and answer) one question. And it is certainly different from your original objection about machine registers and such.
What do you mean? Assuming that a quine can only answer a question about the source code (admittedly, the other commenters have pointed out that this assumption doesn’t necessarily hold), how does that make the point of “the source code alone isn’t enough to represent the whole system” inapplicable to quines?
I don’t follow. Machine registers contain parts of the program’s state, and I was saying that there are situations where you need to examine the state to understand the program’s behavior.
I think the mathematical sense of quining is meant here, not the standard programming one. The quine does not take up the entire program: you can run it on dedicated hardware containing nothing else, so it’s fully predictable, and you can also include a mapping from those bits to physical states. At the very least, this lets you have your full past state (just simulate yourself up to that point) and your future state (store the size of your RAM, treat it as the number of zeroes in the mapping to physical states, and delete yourself).
Doesn’t this also require a full recording of past and future environmental states, assuming that you take input from the environment?
I was assuming you don’t take input from the environment.
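Under that no-input assumption, “simulate yourself up to that point” can be sketched like this (step is a made-up deterministic transition function): because nothing outside the program affects it, replaying from the initial state reproduces any past state exactly.

```python
def step(state):
    # Toy deterministic transition; no environmental input anywhere.
    return (state * 1103515245 + 12345) % 2**31

def state_at(tick, initial=42):
    state = initial
    for _ in range(tick):
        state = step(state)
    return state

# A run currently at tick 1000 recovers its state at tick 400 by replaying:
print(state_at(400), state_at(1000))
```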
Shminux’s point about the different levels is definitely valid, but there is more to it than that: you have not shown that the contents of the registers etc. are not visible from within the program. In fact, quite the opposite: in a good programming language it is easy to access those other (non-source-code) parts from within the program. Think, for instance, of the “self” that is passed into a Python class’s methods: each method of the object can access all of the object’s data, including all of its methods and variables.
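A minimal sketch of that Python point (assuming the class is defined in an ordinary file on disk, since inspect.getsource cannot see code typed into a REPL):

```python
import inspect

class Agent:
    def __init__(self):
        self.mood = "curious"

    def introspect(self):
        # From inside a method, `self` reaches the object's data and methods,
        # and `inspect` can even recover the class's defining source code.
        print(vars(self))                                        # instance data
        print([m for m in dir(self) if not m.startswith("_")])   # methods
        print(inspect.getsource(type(self)))                     # the source itself

Agent().introspect()
```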
The original point was ‘There are limits to how much an agent can say about its physical state at a given time’. You’re saying ‘There aren’t limits to how much an agent can find out about its physical state over time’. That’s right. An agent may be able to internally access anything about itself — have it ready at hand, be able to read off the state of any particular small component of itself at a moment’s notice — even if it can’t internally represent everything about itself at a given time.
There could, perhaps, be a fixed point of ‘represent’ by which an agent could ‘represent’ everything about itself, including the representation, for most reasonable forms of ‘representiness’ including cognitive post-processing. (We do a lot of fixed-pointing at MIRI decision theory workshops.) But a bounded agent shouldn’t be bothering, and it won’t include the low-level quark states either.
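As a loose illustration only (nothing MIRI-specific): a representation can include itself by reference rather than by an infinite tower of copies, which is the flavor of fixed point being gestured at here.

```python
# A self-referential snapshot: "complete" because it contains itself by
# reference, not by copying -- a toy fixed point of "represent".
snapshot = {"x": 1, "y": 2}
snapshot["snapshot"] = snapshot
print(snapshot["snapshot"]["snapshot"]["x"])  # -> 1, at any depth
```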