Shalizi’s post also points out that if you relax any of the requirements, you can get answers much more quickly, and it also notes that modern computers & algorithms run vastly faster. As a matter of fact, linear optimization is one of the best examples of progress:
Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later – in 2003 – this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million.
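As a sanity check on that figure (my arithmetic, not Grötschel's): 82 years expressed in minutes comes to roughly 43 million, so going from 82 years down to about one minute is indeed a ~43-million-fold speedup.

```python
# Sanity-check of Groetschel's claimed ~43-million-fold speedup:
# 82 years of solve time reduced to roughly 1 minute.
MINUTES_PER_YEAR = 365.25 * 24 * 60
speedup = 82 * MINUTES_PER_YEAR / 1  # old time (minutes) / new time (minutes)
print(round(speedup / 1e6, 1))  # ≈ 43.1 (million)
```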
(1988 is, incidentally, way after the cited short paper pointing out the impossibility of computing it in time, IIRC.)
Given that CEV is all about extrapolating, making consistent, simplifying, and unifying aggregate preferences, I wouldn’t take linear programming as any more relevant to CEV than, say, various ruminations about NP or EXP-time.
The best requirement to relax, in my opinion, is that of optimality (which, incidentally, is a strong reason to be an adaptation executor rather than a utility maximizer!). My professional research is into optimization heuristics that just focus on getting good solutions without worrying whether they’re the best ones, which allows tackling problems that are immensely larger. For many problems, it’s simply not worth the time to ensure that no better solution exists: it’s a lot of effort for little payoff.
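To make the trade-off concrete (a toy sketch, not my actual research problems): a greedy heuristic for the 0/1 knapsack problem runs in O(n log n) and usually gets a good solution, while the exhaustive search that certifies optimality takes O(2^n).

```python
import itertools

def greedy_knapsack(items, capacity):
    """Heuristic: take items in descending value/weight order.
    Fast (O(n log n)) but carries no optimality guarantee."""
    total_value = total_weight = 0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if total_weight + weight <= capacity:
            total_value += value
            total_weight += weight
    return total_value

def exact_knapsack(items, capacity):
    """Exhaustive search: guaranteed optimal, but O(2^n) subsets."""
    best = 0
    for subset in itertools.product([0, 1], repeat=len(items)):
        weight = sum(it[1] for it, take in zip(items, subset) if take)
        if weight <= capacity:
            best = max(best, sum(it[0] for it, take in zip(items, subset) if take))
    return best

items = [(60, 10), (100, 20), (120, 30)]  # (value, weight)
print(greedy_knapsack(items, 50), exact_knapsack(items, 50))  # 160 220
```

Here the cheap heuristic returns 160 against a true optimum of 220: noticeably suboptimal, but obtained with a tiny fraction of the work, which is exactly the bargain being described.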
Shalizi’s post also points out that if you relax any of the requirements, you can get answers much more quickly, and it also notes that modern computers & algorithms run vastly faster.
Yes, but he still considers it impossible even with modern computers and algorithms.
Given that CEV is all about extrapolating, making consistent, simplifying, and unifying aggregate preferences, I wouldn’t take linear programming as any more relevant to CEV than, say, various ruminations about NP or EXP-time.
I’m not sure how extrapolating, making consistent, simplifying, unifying, and implementing the aggregate preferences-in-general of everybody on Earth would be easier than simply implementing the resource-related preferences of everybody in a single nation.
Plans Happen: I should re-iterate that Kantorovich-style planning is entirely possible when the planners can be given good data, an unambiguous objective function, and a problem of sufficiently limited scope. Moreover, what counts as “sufficiently limited” is going to grow as computing power does. The difficulties are about scale, not principle; complexity, not computability. Probably more importantly, there are other forms of control, with good claims on the name “planning”, which are not this sort of mathematical programming, and plausibly have much lower computational complexity. (Central banks, for instance, are planning bodies which set certain prices.) In particular, intervening in existing market institutions, or capitalist firms, or creating non-market institutions to do things — none of these are subject to the same critique as Kantorovich-style planning. They may have their own problems, but that’s a separate story. I should have been clearer about this distinction.
Let me also add that I focused on the obstacles in the way of planning because I was, at least officially, writing about Red Plenty. Had the occasion for the post been the (sadly non-existent) Red, White, and Blue Plenty, it would have been appropriate to say much more about the flaws of capitalism, not just as we endure it but also in its more idealized forms.
...Cockshott, and More Equations: The most important issue raised, I think, was the claim that Cockshott has shown that central planning is computationally tractable after all. I don’t agree, but unfortunately, there’s going to need to be a bit more math.... When I talked about the complexity of solving the planning problem, I was talking about the complexity of this linear programming problem, and I was allowing for it to be solved only up to an accuracy of ε — the solution only had to come to within ε of the optimum, and in fact only to within ε of satisfying the constraints. Since the computational complexity of doing so only grows proportionally to log(1/ε), however, if we can do this at all we can ask for very good approximations. Or, pessimistically, if some other part of the problem, like the number of variables, is demanding lots of resources, we’d have to make the slop ε (literally) exponentially larger to make up for it.
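To see why a log(1/ε) term is so forgiving, compare it to bisection: each halving of the error costs one fixed unit of work, so demanding a million-fold tighter answer adds only about twenty steps. (A schematic illustration of the scaling, not an LP solver.)

```python
def iterations_for_tolerance(interval, eps):
    """Bisection-style accuracy: each halving of the interval costs one
    step, so reaching tolerance eps takes about log2(interval/eps) steps."""
    steps = 0
    width = interval
    while width > eps:
        width /= 2
        steps += 1
    return steps

# A million-fold tighter tolerance costs only ~20 extra steps:
print(iterations_for_tolerance(1.0, 1e-3))  # 10
print(iterations_for_tolerance(1.0, 1e-9))  # 30
```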
(Incidentally, one issue which was not explicitly raised, but which I should have mentioned, was the possibility of replacing approximate optimization with satisficing, say taking the first plan where the value of the output was above some threshold, say V, and all constraints were met. [This would still leave the computational-political problem of coming up with the value vector v.] I have been unable to discover any literature on the complexity of linear satisficing, but I suspect it is no better than that of approximate linear programming, since you could use the former as a sub-routine to do the latter, by ratcheting up the threshold V, with each satisficing Plan as the starting-point for the next round of the ratchet.)
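That ratcheting reduction can be sketched in a few lines (hypothetical names; the satisficer here is a stand-in oracle, not a real planning routine): binary-searching the threshold turns any satisficer into an approximate optimizer using only ~log((hi − lo)/ε) oracle calls.

```python
def optimize_via_satisficing(satisfice, lo, hi, eps=1e-6):
    """Reduction sketch: repeatedly ask a satisficer for a plan worth at
    least a threshold, ratcheting the threshold up (binary search) until
    the best achievable value is bracketed to within eps."""
    best = None
    while hi - lo > eps:
        mid = (lo + hi) / 2
        plan = satisfice(mid)
        if plan is not None:
            best, lo = plan, mid   # achievable: raise the threshold
        else:
            hi = mid               # unachievable: lower our sights
    return best, lo

# Toy satisficer: "plans" are numbers x in [0, 3], a plan's value is x
# itself, so the true optimum is 3.
toy = lambda threshold: threshold if threshold <= 3 else None
plan, value = optimize_via_satisficing(toy, 0.0, 10.0)
print(round(value, 3))  # ≈ 3.0
```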
Yes, but he still considers it impossible even with modern computers and algorithms.
His fundamental conclusion (see the section on non-convexity) is that it’s as hard for capitalism as planning, which isn’t really an issue for CEV. (‘OK, so fine, we’ll go with either system as convenient, and apply optimization to as large subunits as we can manage before it breaks down.’)
I’m not sure how extrapolating, making consistent, simplifying, unifying, and implementing the aggregate preferences-in-general of everybody on Earth would be easier than simply implementing the resource-related preferences of everybody in a single nation.
I thought it was obvious. The difficulty is related to the number of arbitrary distinct constraints being enforced. Reduce the number of constraints, and you reduce the difficulty.
Whether CEV is actually possible—whether the reduction in constraints happens and a Parfitian convergence of ethical criteria happens—is the fundamental question and is doubted by many, but it is also unaffected by whatever complexity class linear optimization falls into!
To follow up from http://cscs.umich.edu/~crshalizi/weblog/919.html