I had the impression that it was more than just that, given the line: “In light of recent news, it is worth comprehensively re-evaluating which sub-problems of AI risk are likely to be solved without further intervention from the AI risk community (e.g. perhaps deceptive alignment), and which ones will require more attention.” and the further attention devoted to deceptive alignment.
I appreciate these predictions, but I am not as interested in predicting personal or public opinions. I’m more interested in predicting regulatory stringency, quality, and scope.
If you have any you think faithfully represent a possible disagreement between us, go ahead. I personally feel it will be very hard to operationalize objective stuff about policies in a satisfying way. For example, a big issue with the market you’ve made is that it is about what will happen in the world, not what will happen without intervention from AI x-risk people. Furthermore, it has all the usual issues with forecasting on complex things 12 years in advance, regarding the extent to which it operationalizes any disagreement well (I’ve bet yes on it, but think it’s likely that evaluating and fixing deceptive alignment will remain mostly unsolved in 2035 conditional on no superintelligence, especially if there were no intervention from x-risk people).
I had the impression that it was more than just that
Yes, the post was about more than that. To the extent I was arguing against a single line of work, it was mainly intended as a critique of public advocacy. Separately, I asked people to re-evaluate which problems will be solved by default, to refocus our efforts on the most neglected, important problems, and went into detail about what I currently expect will be solved by default.
If you have any you think faithfully represent a possible disagreement between us, go ahead.
I offered a concrete prediction in the post. If people don’t think my prediction operationalizes any disagreement, then I think either (1) they don’t disagree with me, in which case maybe the post isn’t really aimed at them, or (2) they disagree with me in some other way that I can’t predict, in which case I’d prefer they explain exactly where they disagree.
a big issue with the market you’ve made is that it is about what will happen in the world, not what will happen without intervention from AI x-risk people.
It seems relatively valueless to predict on what will happen without intervention, since AI x-risk people will almost certainly intervene.
Furthermore, it has all the usual issues with forecasting on complex things 12 years in advance, regarding the extent to which it operationalizes any disagreement well (I’ve bet yes on it, but think it’s likely that evaluating and fixing deceptive alignment will remain mostly unsolved in 2035 conditional on no superintelligence, especially if there were no intervention from x-risk people).
I mostly agree. But I think it’s still better to offer a precise prediction than to only offer vague predictions, which I perceive as the more common and more serious failure mode in discussions like this one.