I don’t think there is such a big difference between the agent and the agent-designer.
Hmm. Seems to me there is a crucial difference, the former is in scope for OO, the latter is not.
We (as humans) are (always) still figuring out how to make decisions. From our perspective, we are still inventing the decision algorithm. From OO’s perspective, we were always going to behave a certain way. But, this does not contradict our perspective; OO just knows more.
If you know that someone has predicted your behavior, then you accept that you are a deterministic algorithm, and the inventing of the decision algorithm is just a deterministic subroutine of it. I don’t think we disagree there. The future is set; you are relegated to learning about what it is, and to feeling the illusion of inventing the decision algorithm and/or acting on it. A self-consistent attitude in the OO setup is more like “I am just acting out my programming, and it feels like making decisions”.
we think of the idea → we start programming, and design various bots → the bots themselves learn (in the case of ML bots), which selects between various strategies → the strategies themselves perform some computation to select actions
Yes and no. “we” in this case are the agent designers, and the bots are agents acting out their programming, but we are neither the OO, nor are we in the OO’s scope of predictability. In fact, maybe there is no OO in that world at all, especially if the agent has access to quantum randomness or freebits, or is otherwise too hard to predict. That applies to sufficiently complicated automata, like AlphaZero.
Of course, from our perspective, it is a little difficult to imagine the learning algorithm selecting a different strategy, if we understand how the learning algorithm works. And it is fairly difficult to imagine the strategy selecting a different action, since it is going to be a relatively small computation.
Right, the more OO-like we are, the less agenty the algorithm feels to us.
The learning algorithm’s entire job is to select between different alternative strategies; it has to “think as if it has a choice”, or else it could not perform the computation it needs to perform.
Well, I am not sure that it “has to ‘think as if it has a choice’”. Thinking about having a choice seems like it requires an internal narrator, a degree of self-awareness. It is an open question whether an internal narrator necessarily emerges once the algorithm’s complexity is large enough. In fact, that would be an interesting open problem to work on, and if I were to do research in the area of agency and decision making, I would look into this as a project.
If an internal narrator is not required, then there is no thinking about choices, just following the programming that makes a decision. A bacterium following a sugar gradient probably doesn’t think about choices, and I am not sure what counts as thinking for a chess program, or whether it has the quale of having a choice.
This does not mean choice is an illusion at any point in the chain. Choice is precisely the computation which chooses between alternatives. The alternatives are an illusion, in that counterfactuals are subjective.
Yes, action is a part of the computation, and sometimes we anthropomorphize this action as making a choice. The alternatives are indeed an illusion, and I am not sure what you mean by counterfactuals there: potential future choices, or paths not taken because they could never have been taken, given the agent’s programming?
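To make this concrete, here is a minimal sketch (all names invented, Python) of the chain quoted above: a learner that selects between candidate strategies, and a strategy that selects an action. The “choice” at each level is just a deterministic computation over alternatives, which an OO running the same code on the same inputs reproduces exactly.

```python
# A minimal sketch (invented names): the learner "chooses" a strategy by scoring
# the alternatives, and the chosen strategy "chooses" an action.

def strategy_a(state):
    return "cooperate"

def strategy_b(state):
    return "defect"

def learner(history):
    """The learning algorithm: selects between alternative strategies (an argmax)."""
    candidates = [strategy_a, strategy_b]
    # Score each candidate by the reward it would have collected on the history.
    scores = [sum(reward for state, action, reward in history if c(state) == action)
              for c in candidates]
    return candidates[scores.index(max(scores))]

def agent_step(history, state):
    """The agent as the designer shipped it: pick a strategy, then pick an action."""
    return learner(history)(state)

# The whole "decision" is fixed by code + inputs; nothing else pulls the strings.
history = [("s0", "cooperate", 1), ("s1", "cooperate", 1)]
print(agent_step(history, "s2"))  # -> "cooperate"
```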
How is this consistent with your assertion that OO-problems are inconsistent because “you cannot optimize for an interaction with an OO”? As you say, the NPC is forced to consider the “illusion” of choice—it is an illusion which cannot be avoided. Furthermore, this is due to the real situation which it actually finds itself in. (Or at least, the realistic scenario which we are imagining it is in.) So it seems to me it is faced with a real problem, which it actually has to solve; and, there are better and worse ways of approaching this problem (e.g., UDT-like thinking will tend to produce better results).
Yep, “it is faced with a real problem, which it actually has to solve; and, there are better and worse ways of approaching this problem”, and these “ways of approaching the problem” are coded by the agent designer, whether explicitly or by having the agent create and apply a “decision theory” subroutine. Once the algorithm is locked in by the designer (who is out of scope for the OO), in this world an OO already knows what decision theory the agent will discover and use.
TL;DR: the agent is in scope for the OO, while the agent designer is out of scope and so potentially has grounds for thinking of themselves as “making a (free) decision” without breaking self-consistency, while the agent has no such luxury. That’s the “special point in the chain”.
I am making no claims here about whether in the “real world” we are more like agents or more like agent designers, since there are no OOs that we know of that could answer the question.
Yep, “it is faced with a real problem, which it actually has to solve; and, there are better and worse ways of approaching this problem”, and these “ways of approaching the problem” are coded by the agent designer, whether explicitly or by having the agent create and apply a “decision theory” subroutine. Once the algorithm is locked in by the designer (who is out of scope for the OO), in this world an OO already knows what decision theory the agent will discover and use.
TL;DR: the agent is in scope for the OO, while the agent designer is out of scope and so potentially has grounds for thinking of themselves as “making a (free) decision” without breaking self-consistency, while the agent has no such luxury. That’s the “special point in the chain”.
What exactly does in-scope / out-of-scope mean? The OO has access to what the designer does (since the designer’s design is given to the OO), so for practical purposes, the OO is predicting the designer perfectly. Just not by simulating the designer. Seems like this is what is relevant in this case.
I am making no claims here about whether in the “real world” we are more like agents or more like agent designers, since there are no OOs that we know of that could answer the question.
But you are making the claim that there is an objective distinction. It seems to me more like a subjective one: I can look at an algorithm from a number of perspectives; some of them will be more like OO (seeing it as “just an algorithm”), while others will regard the algorithm as an agent (unable to calculate exactly what the algorithm will do, they’re forced to take the intentional stance).
IE, for any agent you can imagine an OO for that agent, while you can also imagine a number of other perspectives. (Even if there are true-random bits involved in a decision, we can imagine an OO with access to those true-random bits. For quantum mechanics this might involve a violation of physics (e.g. no-cloning theorem), which is important in some sense, but doesn’t strike me as so philosophically important.)
I don’t know what it means for there to be a more objective distinction, unless it is the quantum randomness thing, in which case maybe we largely agree on questions aside from terminology.
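As a toy illustration of “an OO with access to those true-random bits” (a sketch only; a seeded PRNG stands in for the random bits, which is precisely the kind of copying that no-cloning would forbid for genuinely quantum bits):

```python
import random

def agent_decision(bits):
    """An agent whose decision consumes some (pseudo)random bits."""
    return "pay" if bits[0] == 1 else "refuse"

# The agent draws its bits at decision time; a seeded PRNG stands in for
# whatever noise source it actually has.
agent_rng = random.Random(42)
agent_bits = [agent_rng.randint(0, 1) for _ in range(8)]

# An OO handed the same bits (or the same seed) predicts the outcome exactly.
oo_rng = random.Random(42)
oo_bits = [oo_rng.randint(0, 1) for _ in range(8)]

assert agent_decision(oo_bits) == agent_decision(agent_bits)
print(agent_decision(agent_bits))
```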
Well, I am not sure that it “has to ‘think as if it has a choice’”. Thinking about having a choice seems like it requires an internal narrator, a degree of self-awareness. It is an open question whether an internal narrator necessarily emerges once the algorithm’s complexity is large enough. In fact, that would be an interesting open problem to work on, and if I were to do research in the area of agency and decision making, I would look into this as a project.
If an internal narrator is not required, then there is no thinking about choices, just following the programming that makes a decision. A bacterium following a sugar gradient probably doesn’t think about choices, and I am not sure what counts as thinking for a chess program, or whether it has the quale of having a choice.
I want to distinguish “thinking about choices” from “awareness of thinking about choices” (which seems approximately like “thinking about thinking about choices”, though there’s probably more to it).
I am only saying that it is thinking about choices, ie computing relative merits of different choices, not that it is necessarily consciously aware of doing so, or that it has an internal narrator.
It “has a perspective” from which it has choices in that there is a describable epistemic position which it is in, not that it’s necessarily self-aware of being in that position in a significant sense.
If you know that someone has predicted your behavior, then you accept that you are a deterministic algorithm, and the inventing of the decision algorithm is just a deterministic subroutine of it. I don’t think we disagree there.
(correct)
The future is set; you are relegated to learning about what it is, and to feeling the illusion of inventing the decision algorithm and/or acting on it. A self-consistent attitude in the OO setup is more like “I am just acting out my programming, and it feels like making decisions”.
This seems to be where we disagree. It is not like there is a separate bit of clockwork deterministically ticking away and eventually spitting out an answer, with “us” standing off to the side and eventually learning what decision was made. We are the computation which outputs the decision. Our hand is not forced. So it does not seem right to me to say that the making-of-decisions is only an illusion. If we did not think through the decisions, they would in fact not be made the same. So the thing-which-determines-the-decision is precisely such thinking. There is not a false perception about what hand is pulling the strings in this scenario; so what is the illusion?
What exactly does in-scope / out-of-scope mean? The OO has access to what the designer does (since the designer’s design is given to the OO), so for practical purposes, the OO is predicting the designer perfectly.
I was definitely unclear there. What I meant is something like a (deterministic) computer game: the game designer is outside the game, the agent is an NPC inside the game, and the OO is an entity with access to the game engine. So the OO can predict the agent perfectly, but not whoever designed the agent’s algorithm. That’s the natural edge of the chain of predictability.
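A minimal sketch of that analogy (all names invented): a deterministic engine, an NPC policy living inside it, and an OO that can run the engine ahead of time, while the designer appears nowhere in the engine or its state.

```python
# Toy version of the game analogy (invented names, not a claim about any real engine).

def npc_policy(game_state):
    """The NPC's decision procedure: written by the designer, but living inside the game."""
    return "attack" if game_state["player_hp"] < 20 else "patrol"

def game_engine_tick(game_state):
    """One deterministic tick of the game."""
    return npc_policy(game_state)

state = {"player_hp": 15}

# The OO has access to the engine, so it predicts the NPC perfectly...
oo_prediction = game_engine_tick(dict(state))
actual_action = game_engine_tick(state)
assert oo_prediction == actual_action  # both "attack"

# ...but the designer is not represented anywhere in the engine or its state,
# so nothing the OO can compute reaches past that edge of predictability.
```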
But you are making the claim that there is an objective distinction. It seems to me more like a subjective one: I can look at an algorithm from a number of perspectives; some of them will be more like OO (seeing it as “just an algorithm”), while others will regard the algorithm as an agent (unable to calculate exactly what the algorithm will do, they’re forced to take the intentional stance).
Yes, it’s more like Dennett’s intentional stance vs the physical stance (or, in this case, the algorithmic stance, since the universe’s physics is fully encoded in the algorithms). Definitely there are perspectives where one has to settle for the intentional stance (as human game players do when dealing with high-level NPCs, because they are unable to calculate the NPC’s actions precisely). Whether this hypothetical game situation is isomorphic to the universe we live in is an open problem, and I do not make definite claims that it is.
I want to distinguish “thinking about choices” from “awareness of thinking about choices” (which seems approximately like “thinking about thinking about choices”, though there’s probably more to it).
It’s a good distinction, definitely. “Thinking about choices” is executing the decision making algorithm, including generating the algorithm itself. I was referring to thinking about the origin of both of those. It may or may not be what you are referring to.
This seems to be where we disagree. It is not like there is a separate bit of clockwork deterministically ticking away and eventually spitting out an answer, with “us” standing off to the side and eventually learning what decision was made. We are the computation which outputs the decision. Our hand is not forced.
Yes, that’s where we differ, in the very last sentence. There is no separate bit of an algorithm; we (or, in this case, the agents in the setup) are the algorithm. Yes, we are the computation which outputs the decision. And that’s precisely why our hand is forced! There is no other output possible, even if it feels like there is.
So it does not seem right to me to say that the making-of-decisions is only an illusion. If we did not think through the decisions, they would in fact not be made the same.
Looks like this is the crux of the disagreement. The agents have no option not to think through the decisions. Once the universe is set in motion, the agents will execute their algorithms, including thinking through the decisions, generating the relevant abstractions (including the decision theory), and then executing the decision to pay or not pay the counterfactual mugger. “If we did not think through the decisions” is not an option in this universe, except potentially as a (useless) subroutine in the agent’s algorithm. The agent will do what it is destined to do, and, while the making-of-decisions is not an illusion, since the decision is eventually made, the potential to make a different decision is definitely an illusion, just like the potential to not think through the decisions.
So, a (more) self-consistent approach to “thinking about thinking” is “Let’s see what decision theory, if any, my algorithm will generate, and how it will apply it to the problem at hand.” I am not sure whether there is any value in this extra layer, or if there is anything that can be charitably called “value” in this setup from an outside perspective. Certainly the OO does not need the abstraction we call “value” to predict anything; they can just emulate (or analyze) the agent’s algorithm.
So, my original point stands: “Do you give Omega $100?” is not a meaningful question as stated, since it assumes you have a choice in the matter. You can phrase the question differently, and more profitably, as “Do you think that you are the sort of agent who gives Omega $100?” or “Which agents gain more expected value in this setup?” There is no freedom to “self-modify” into an agent that pays or doesn’t pay. You are one of the two; you just don’t yet know which. The best you can do is try to discover it ahead of time.
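For the “which agents gain more expected value” framing, here is a quick back-of-the-envelope, assuming the usual counterfactual-mugging payoffs (a fair coin, $100 demanded on one outcome, $10,000 paid on the other iff Omega predicts you would have paid; only the $100 figure appears in the thread itself):

```python
# Expected value of "the sort of agent who pays" vs "the sort who doesn't",
# under the standard counterfactual-mugging payoffs (assumed, not from the thread).

P_COIN = 0.5      # fair coin
DEMAND = 100      # amount Omega asks for on one outcome
REWARD = 10_000   # amount Omega pays on the other outcome, iff it predicts you would pay

def expected_value(pays: bool) -> float:
    ask_branch = -DEMAND if pays else 0    # the coin lands so that Omega asks for money
    reward_branch = REWARD if pays else 0  # the coin lands so that Omega rewards predicted payers
    return P_COIN * ask_branch + P_COIN * reward_branch

print(expected_value(pays=True))   # 4950.0 -- the sort of agent who pays
print(expected_value(pays=False))  # 0.0    -- the sort of agent who doesn't
```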