My post doesn’t engage with your framing at all. I think decision theory is the wrong tool entirely, because decision theory takes as a given the hardest part of the problem. I believe decision theory cannot solve this problem, and I’m working from a totally different paradigm.
Our disagreement is as wide as if you were a consequentialist and I was arguing from a Daoist perspective. (Actually, that might not be far from the truth. Some components of my post have Daoist influences.)
Don’t worry about trying to understand the footnote. Our disagreement appears to run much deeper than that.
because decision theory takes as a given the hardest part of the problem
What’s that?
My post doesn’t engage with your framing at all.
Sure, it was intended as a not-an-apology for not working harder to reframe the implied desiderata behind the post in a way I prefer. I expect my true objection to remain the framing, but now I’m additionally confused about the “takes as a given” remark about decision theory; nothing comes to mind as a possibility.
It’s philosophical. I think it’d be best for us to terminate the conversation here. My objections to the overuse of decision theory are sophisticated enough (and distinct enough from what this post is about) that they deserve their own top-level post.
My short answer is that decision theory is based on Bayesian probability, and that Bayesian probability has holes related to a poorly-defined (in embedded material terms) concept of “belief”.
Thank you for the conversation, by the way. This kind of high-quality dialogue is what I love about LW.
Sure. I’d still like to note that I agree about Bayesian probability being a hack that should be avoided if at all possible, but I don’t see it as an important part (or any part at all) of framing agent design as a question of decision theory (essentially, of formulating desiderata for agents before getting more serious about actually designing them).
For example, proof-based open source decision theory simplifies the problem to a ridiculous degree in order to more closely examine some essential difficulties of embedded agency (including self-reference), and it makes no use of probability, in both its modal logic variant and otherwise. Updatelessness more generally tries to live without Bayesian updating.
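For concreteness, here is a minimal toy sketch of the proof-based setup as I understand it (the world, the action set, and the `provable` stand-in are all invented for illustration, not taken from any particular formalization): the agent acts on whichever action has the highest provable utility bound, and probability never enters the picture.

```python
# Toy sketch of proof-based decision theory (illustrative assumptions throughout):
# the agent searches for statements of the form "taking action a guarantees utility >= u"
# and takes the action with the highest provable bound. "Belief" is replaced by
# provability, so no probabilities appear anywhere.

from itertools import product

# Hypothetical toy world: utility is a simple known function of the action.
ACTIONS = ["take_5", "take_10"]

def utility(action: str) -> int:
    return 5 if action == "take_5" else 10

# Stand-in for proof search. A real version would enumerate proofs in a formal theory
# (or use the modal-logic fixed-point shortcut); this toy just verifies the statement
# directly, which is enough to show the shape of the algorithm.
def provable(action: str, lower_bound: int) -> bool:
    return utility(action) >= lower_bound

def agent() -> str:
    best_action, best_bound = None, float("-inf")
    for action, bound in product(ACTIONS, range(0, 11)):
        if provable(action, bound) and bound > best_bound:
            best_action, best_bound = action, bound
    return best_action

print(agent())  # -> "take_10", chosen via proven bounds rather than expected utility
```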
Though there are always occasions for probability to come back up, like the recent mystery about expected utility and updatelessness.