Okay, this is much better and different from what I’d thought you’d been saying.
When you say “we” and “minds” you are getting at something and here is my attempt to see if I’ve understood:
Given an algorithm which models itself (something like a mind; but not so specific, taboo mind) and its environment, that algorithm must recognize the difference between its model of its environment, which is filtered through its I/O devices of whatever form, and the environment itself.
The model this algorithm has should reflect that the set of information contained in the environment may be in a different format from the set of information contained in the model (dualism of a sort), and that the model is optimized for predictive accuracy rather than for truth.
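The distinction I’m gesturing at can be made concrete with a toy sketch (all names and numbers here are my own illustration, not anything from the discussion): the agent never sees the environment directly, only noisy sensor readings, and its internal model is in a different format from the environment’s actual dynamics. The model counts as good if its predictions land close, not if its internals mirror the truth.

```python
import random

# Hypothetical toy setup: the "environment" has dynamics the agent
# never observes directly; it only receives filtered I/O readings.
def environment(t):
    return 2.0 * t + 1.0                 # ground truth, hidden from the agent

def sensor(t):
    return environment(t) + random.gauss(0, 0.1)  # observation via noisy I/O

# The agent's model is just two numbers (slope, intercept) fit by
# least squares to minimize prediction error -- a different "format"
# from whatever mechanism actually generates the environment.
def fit_model(samples):
    n = len(samples)
    mx = sum(t for t, _ in samples) / n
    my = sum(y for _, y in samples) / n
    slope = (sum((t - mx) * (y - my) for t, y in samples)
             / sum((t - mx) ** 2 for t, _ in samples))
    return slope, my - slope * mx

random.seed(0)
data = [(t, sensor(t)) for t in range(10)]
slope, intercept = fit_model(data)

# The model is judged by how well it predicts future readings,
# not by whether it matches the environment's internal structure.
avg_err = sum(abs((slope * t + intercept) - environment(t))
              for t in range(10)) / 10
print(slope, intercept, avg_err)
```

The point of the sketch is only the separation of roles: `environment` is the territory, `data` is what the I/O devices deliver, and `(slope, intercept)` is the map, scored purely on prediction error.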
Is this similar to what you mean?
No. If it involves self-modeling, it is very far from what I am talking about. Give it up. It is just not worth it.
Okay. Sorry ):