I’m confused by this setting, but as a response to the idea of controlling stronger things (this comes up in agent-simulates-predictor): you control things to the extent that their dependence on your actions controls your actions. So it’s possible to control superintelligences or other things that should know all about you, but only if you can predict the effect of your possible actions on their properties, which you probably won’t be able to do unless they let you. I don’t understand how such problems look from the point of view of the agent with the epistemic advantage (how do you let a piece of paper with “Defect” written on it control your decision?). It seems like what matters here are the possible explanations for the weaker agent being a certain way, and it’s these (potentially more powerful) explanations that play the game with stronger opponents in your stead (so you engage whoever wrote “Defect” on that piece of paper, Transparent Newcomb-style).
(See how pointless it is to just broadcast poorly-explained and poorly-understood ideas?)
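The control-via-dependence point can be made concrete with a toy model. The sketch below is my own illustration, not anything from the thread — the predictor, the payoff numbers, and the policy names are all assumptions, using standard Transparent Newcomb payoffs. The weaker agent’s choice influences the stronger predictor only because the predictor simulates the agent, so the agent optimizes through that dependence:

```python
# Toy sketch (illustrative assumptions, not the thread's formalism):
# a weaker agent "controls" a stronger predictor only insofar as the
# predictor's output depends on the agent's action.

def predictor(agent_policy):
    """The stronger agent: simulates the weaker agent's policy and
    predicts what it would do upon seeing a full box."""
    return agent_policy(box_full=True)

def payoff(action, prediction):
    """Newcomb-style payoffs: the big box is full iff one-boxing was
    predicted; two-boxing always also grabs the small box."""
    big = 1_000_000 if prediction == "one-box" else 0
    small = 1_000 if action == "two-box" else 0
    return big + small

def best_action():
    """The weaker agent evaluates each candidate action by its effect
    *through* the predictor -- the 'dependence on your actions' at work."""
    def constant_policy(a):
        return lambda box_full: a
    return max(["one-box", "two-box"],
               key=lambda a: payoff(a, predictor(constant_policy(a))))

print(best_action())  # one-boxing wins, because the prediction tracks the action
```

If the predictor ignored the agent (returning a fixed prediction regardless of `agent_policy`), the dependence would vanish and two-boxing would dominate — which is the sense in which control disappears when the stronger agent doesn’t “let you” affect its properties.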
See how pointless it is to just broadcast poorly-explained and poorly-understood ideas?
Slightly clumsy, but the point came across nonetheless. The last sentence is the interesting one:
It seems like what matters here are the possible explanations for the weaker agent being a certain way, and it’s these (potentially more powerful) explanations that play the game with stronger opponents in your stead (so you engage whoever wrote “Defect” on that piece of paper, Transparent Newcomb-style).
The earlier sentences seem to be making a simple concept more complicated than it needs to be. Control just isn’t that deep when you reduce it.
Grr, why did someone downvote this? We should be doing our best to encourage Nesov to talk about things like this; it’s the only shot we non-decision-theorists have at understanding the practical implications of Kantian “epistemology” — sorry, I mean theoretical decision theory! Not much of a shot, but still.
It actually kinda worked; the confusion I had was really simple, and it was mostly resolved by User:paulfchristiano saying “I see no problems with that”.
That’s a good way of phrasing it, thanks.
Again, a good way of phrasing it, thanks.
Which sentences, and how simple should it be?
I’m still trying to figure it out.