I feel like doing a better job of motivating why we should care about this specific problem might help get you more feedback.
If we want to alter a decision theory to learn its set of inputs and outputs, your proposal makes sense to me at first glance. But I’m not sure why I should particularly care, or why there is even a problem here to begin with. The link you provide doesn’t help me much after skimming it, and I (and, I assume, many people) almost never read something that requires me to read other posts without even a summary of the references. I made an exception today because I’m trying to give more feedback, and I feel that this specific piece of feedback might be useful for you.
Basically, I’m not sure what problem you’re trying to solve with this ability to learn your Cartesian boundary, so I’m unable to judge how well you are solving it.
The link was meant to better illustrate how the proposed system works, not to motivate it. So it seems that you understood the proposal and wouldn’t have needed it.
I don’t exactly want to learn the Cartesian boundary. A Cartesian agent believes that its input set fully screens off any other influence on its thinking, and that its outputs screen off any influence of its thinking on the world. It’s very hard to find things that actually fulfill this. I explain how PDT can learn Cartesian boundaries, if there are any, as a sanity/conservative-extension check. But it can also learn that it controls copies or predictions of itself, for example.
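To spell out the screening-off condition, here is a rough sketch in my own notation (not from the post): write $E$ for the environment state, $I$ for the inputs, $T$ for the agent’s internal computation, $O$ for the outputs, and $E'$ for the subsequent environment state. The Cartesian assumption is then the pair of conditional independences

$$T \perp E \mid I \qquad \text{and} \qquad E' \perp T \mid O,$$

i.e. conditioning on the inputs leaves no other channel from the world into the thinking, and conditioning on the outputs leaves no other channel from the thinking back into the world. Copies and predictors break exactly this: a predictor’s behavior depends on $T$ through a channel that isn’t in $O$.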