So the contributions of VNM theory are shrunken down into “intention”? Will you recapitulate that sort of framing (such as involving the interplay between total orders and real numbers), or are you feeling more like it’s totally wrong and should be thrown out?
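(To spell out the framing I have in mind, roughly: the VNM representation theorem says that a preference ordering ≿ over lotteries satisfying completeness, transitivity, continuity, and independence is exactly one that can be summarized by a real-valued utility function u, in the sense that

$$L \succsim M \iff \mathbb{E}_{L}[u(x)] \ge \mathbb{E}_{M}[u(x)],$$

with u unique only up to positive affine transformations u ↦ au + b, a > 0. That’s the “interplay between total orders and real numbers” I’m gesturing at.)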
So the contributions of VNM theory are shrunken down into “intention”?
(Background: I consider myself fairly well-read w.r.t. causal incentives, not very familiar with VNM theory, and well-versed in Pearlian causality. I have gotten a sneak peek at this sequence, so I have a good sense of what’s coming.)
I’m not sure I understand VNM theory, but I would suspect the relationship is more like “VNM theory and <this agenda> are two takes on how to reason about the behavior of agents, and they both refer to utilities and Bayesian networks, but have important differences in their problem statements (and hence, in their motivations, methodologies, exact assumptions they make, etc)”.
I’m not terribly confident in that appraisal at the moment, but perhaps it helps explain my guess for the next question:
Will you recapitulate that sort of framing (such as involving the interplay between total orders and real numbers)
Based on my (decent?) level of familiarity with the causal incentives research, I don’t think there will be anything like this. Just because two research agendas use a few of the same tools doesn’t mean they’re answering the same research questions, let alone sharing methodologies.
...or are you feeling more like it’s totally wrong and should be thrown out?
When two different research agendas are distinct enough (as I suspect VNM and this causal-framing-of-AGI-safety are), their respective successes and failures are quite independent. In particular, I don’t think the authors’ choice to pursue this research direction over the last few years should be taken by itself as a strong commentary on VNM.
But maybe I didn’t fully understand your comment, since I haven’t read up on VNM.
I’m not sure I entirely understand the question, could you elaborate? Utility functions will play a significant role in follow-up posts, so in that sense we’re heavily building on VNM.
Yeah, what I meant was that “goals” or “preferences” are often emphasized front and center, but here not so much, because it seems like you want to reframe that part under the banner of “intention”.
A range of other relevant concepts also build on causality:
It just felt a little odd to me that so much bubbled up from your decomposition except utility, but you only mention “goals” as this thing that “causes” behaviors, without zeroing in on a particular formalism. So my guess was that VNM would be hiding behind this “intention” idea.
Preferences and goals are obviously very important. But I’m not sure they are inherently causal, which is why they don’t have their own bullet point on that list. We’ll go into more detail in subsequent posts.