So, there’s this general problem in economics where economists want to talk about what we “should” do in policy debates, and that justifies quantifying things in terms of e.g. social surplus (or whatever), on the basis that we want policies to increase social surplus (or whatever).
The problem with this is that such metrics are not chosen for robust generalization to many different use-cases, so unsurprisingly they don’t generalize very well to other use-cases. For instance, if we want to make predictions about the probable trajectory of AI based on the smoothness of some metric of economic impact of technologies, social surplus does not seem like a particularly great metric for that purpose.
I don’t think that’s what I mean. If we use 1950s real prices, then we’re overestimating the value of transistor production, because we’re multiplying quantity by a price from very early on the marginal utility curve, when transistors were still marginally fulfilling extremely high-value use cases. Conversely, if we use current prices, we’re underestimating the GDP contribution. So it seemed to me that we should integrate along the willingness-to-pay curve, which I think gets us something like total surplus.
(There are a few wrinkles in that the rest of the economy has also changed since the 1950s, which I imagine will introduce some subtler problems.)
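To make the point concrete, here is a toy sketch with a made-up exponential willingness-to-pay curve (all numbers and the functional form are illustrative assumptions, not real transistor data). Valuing total output at the early price overshoots, valuing it at the late price undershoots, and integrating under the curve gives total surplus, which lands in between:

```python
import math

# Hypothetical willingness-to-pay (inverse demand) curve: the marginal
# buyer's valuation falls exponentially as cumulative quantity grows.
# P0 and K are made-up parameters for illustration only.
P0, K = 100.0, 1.0

def wtp(q):
    """Willingness to pay for the q-th unit."""
    return P0 * math.exp(-K * q)

Q = 5.0                       # total quantity eventually produced

early_price = wtp(0.1)        # price when output was tiny (the "1950s" price)
late_price = wtp(Q)           # price after output has exploded (today's price)

value_at_early_price = early_price * Q   # quantity x early price: overshoots
value_at_late_price = late_price * Q     # quantity x late price: undershoots

# Total surplus: integral of wtp(q) dq from 0 to Q (closed form for this curve)
total_surplus = (P0 / K) * (1 - math.exp(-K * Q))

print(value_at_early_price, total_surplus, value_at_late_price)
```

With these parameters the early-price valuation is roughly 450, total surplus roughly 99, and the late-price valuation roughly 3, so the two reference-year choices bracket the surplus figure from above and below.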
That would indeed be the right way to estimate total surplus. The problem is that total surplus is not obviously the right metric to worry about. For a use case like forecasting AI, for instance, it’s not particularly central.
No opinion, because I haven’t thought about that use case. My comment was intended to answer “how do you actually measure an idealized version of a GDP growth curve”—minimizing the strangeness that depends on the choice of reference year—without considering its usefulness for forecasting AI.