techno-economic analysis
There is a genre of engineering paper called “techno-economic analysis”. Generally, they involve:
Listing multiple related designs for accomplishing something. These are mostly selected from previous literature, existing objects, patents, and information from companies. Sometimes novel variations are analyzed.
Finding key numbers for properties/costs/etc. People look at previous literature, markets and listed prices, datasheets, etc.
Optimization of various parameters, generally using specialized software. Sometimes the software is open-source, and sometimes (eg Aspen Plus) it’s expensive.
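To make the optimization step concrete, here's a minimal sketch in Python of the kind of calculation involved. Everything in it is made up for illustration: one fake design parameter (a heat-exchanger area), a fake cost model, and round financial assumptions. Real analyses optimize many coupled parameters with detailed process models.

```python
import math

# Toy version of the optimization step in a techno-economic analysis.
# All numbers here are invented for illustration.

def lcoe(hx_area_m2):
    """Levelized cost of electricity ($/kWh) vs. one design parameter:
    a hypothetical heat-exchanger area."""
    capex = 1e6 + 500.0 * hx_area_m2        # $: fixed plant cost + HX cost
    annual_opex = 0.02 * capex              # $/yr, a %-of-capex rule of thumb
    crf = 0.08                              # capital recovery factor, 1/yr
    # Bigger HX -> better heat recovery, with diminishing returns:
    efficiency = 0.50 * (1.0 - math.exp(-hx_area_m2 / 2000.0))
    annual_kwh = efficiency * 8e7           # kWh/yr from a fixed heat input
    return (crf * capex + annual_opex) / annual_kwh

best = min(range(500, 20001, 100), key=lcoe)
print(f"optimum HX area ~{best} m^2, LCOE ~${lcoe(best):.3f}/kWh")
```

The real thing sweeps many such parameters at once, but the shape is the same: a cost model goes in, an optimized levelized cost comes out.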
Obviously such analysis can only be as good as its inputs. Ways that a techno-economic analysis can be bad include:
Analyzing something that doesn’t matter.
Not considering the best relevant designs.
Using incorrect prices, eg: an analysis concluding that a renewable chemical process is economically competitive because it beats the public list prices of low-volume chemicals, when the real internal production costs are much lower.
Using incorrect numbers from earlier literature, or from startups lying about something.
Not considering incompatible choices, eg: using a different fluid/metal combination that would give fast corrosion.
But while many analyses have such problems, I’ve still read a lot of them and found them very useful. The authors collect lots of key numbers for you from disparate sources, which makes them useful for the same reasons survey papers are. I understand other fields well enough to evaluate the assumptions and quality of designs used in techno-economic analysis papers, so I can pick out good ones and ground my intuitions for net costs.
Also, techno-economic analysis as a field has developed good norms, which I think is partly due to separation of analysis from innovation. If your analysis of a new proposed process shows that it’s very expensive, that’s fine; if your analysis is good, the high costs aren’t your fault and it’s still publishable. Meanwhile, if you look at university press releases (especially MIT ones) they’ll often say: [trivial variation of earlier work] could lead to [technically possible but completely impractical application]!!! Computer security researchers have decent norms about designing secure systems, US chemical plant and aircraft designers have decent norms about designing safe systems, and techno-economic analysis has decent norms about estimating costs realistically.
about power generation
Electricity generation is foundational to modern civilization; costs are large but consumer surplus is much larger. Hedge funds may make more profit than power plants, but I know which is more important to civilization. However, I’m bemused by the people who think that cheaper electricity is the main thing holding back civilization. Look, California charges consumers literally 10x the production cost of electricity, but its economy has done OK, and its problems have different causes. In Germany, >half of electricity prices in 2021 were taxes. If the cost of electricity was really that important today, there are easier ways to bring it down than new generation technologies.
Solar-thermal power isn’t currently very important, but there are some reasons I picked it as a topic:
Some people here are apparently interested in power generation.
It’s renewable energy, related to global warming.
Large improvements from current installations seem possible, which is more fun than micro-optimization of gas turbine efficiency, and it’s currently far enough from viability that I don’t have to worry about saying something immediately valuable.
It’s not potentially dangerous like military UAVs or bioweapons or some AI stuff.
I think they look cool.
solar-thermal analysis
Competent estimates for the cost of solar-thermal power are typically around $0.11/kWh. That’s more expensive than US natural gas (~$0.04) and PV solar or wind in the US (~$0.03).
Things are actually worse than that, because such analysis usually assumes a sunny location, but a lot of power demand is in Europe and the US northeast. If you check a solar irradiance map, those aren’t the sunniest places, especially in winter. Plus, clouds are worse for concentrated solar than for solar panels.
Yes, you can run an HVDC cable from Morocco to Europe, and people are actually doing that, but it’s more expensive than burning LNG from the USA.
Also, the only reason for using solar-thermal power instead of solar panels is that storing heat is cheap, so you can use it to balance out renewables. But existing solar-thermal designs use steam, and using steam turbines intermittently is impractical.
OK, so if steam is out, then what? Here’s a recent open-access techno-economic analysis (hereafter “Linares”) of power tower type solar-thermal plants that use CO2 recompression cycles. It’s a competent and concise example of a techno-economic analysis, so you can take a look and see what they’re like.
Any time you have molten salt in heat exchangers, you have to consider corrosion, and chloride eutectics are generally worse than nitrates. So, they specified (nickel-based) Inconel 625 for heat exchangers, which is reasonable. But you have to keep the salt away from water and air, because oxychlorides are more corrosive. Corrosion is an issue for “solar salt” too, requiring (IIRC) stainless steel and (again) keeping it away from air/water.
improvements
Some people assume solar-thermal is less efficient than PV solar, but that’s wrong; Linares gets ~50% efficiency.
A lot of people assume mirrors are what makes concentrated solar power expensive. That’s wrong; mirror supports and drives are more expensive than the actual mirrors, and Linares has the entire solar reflector field at only ~15% of the total cost. Still, cost improvements are possible: Linares assumed $145/m^2, but $100/m^2 is feasible. (The SunShot goal of $50/m^2 probably isn’t.) Note that a typical US house today is ~$2000/m^2 of floor. Early heliostats used open-loop controls, which required stable bases and careful calibration; the trend now is towards closed-loop control with cameras, PV panels to power drive systems, and wireless connections.
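To put numbers on the mirror-cost point, a quick back-of-envelope, assuming (as above) that the field is ~15% of total investment and that field cost scales linearly with $/m^2:

```python
# How much does cheaper heliostat area move total plant cost?
# Assumes the solar field is ~15% of total investment (Linares Fig 15)
# and that field cost scales linearly with $/m^2.
field_share = 0.15      # solar field fraction of total investment
baseline = 145.0        # $/m^2, as assumed by Linares

for target in (100.0, 50.0):    # feasible target; SunShot goal
    savings = field_share * (1.0 - target / baseline)
    print(f"${target:.0f}/m^2 cuts total plant cost by ~{savings:.1%}")
# -> ~4.7% at $100/m^2, and only ~9.8% even at the (probably infeasible)
#    $50/m^2 SunShot goal. Mirrors alone can't carry costs down.
```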
What’s more expensive, then? Per Linares Fig 15, more than half the investment cost is heat exchangers. I remember fans of molten-salt thorium reactors (weird thing to be a fan of) saying that “some fancy alloy has adequate corrosion resistance so heat exchangers aren’t a problem”. When I say people like that are clowns, part of what I mean is that they should do a proper techno-economic analysis.
Clearly, the conversion from concentrated sunlight is more expensive than the heliostats. If you want large cost reductions from optimized designs, you have to take a different approach, and there’s actually a very simple and obvious way to greatly reduce the cost of conversion from sunlight: eliminate it. A lot of electricity is used for lighting, people tend to prefer sunlight, and heliostats can focus light onto skylights or windows. A few buildings have actually done that, but it’s not common; people like being able to see through windows as well as getting light from them, so buildings are usually made narrower instead of having light reflected in. Still, using heliostats for building lighting is actually sort of practical, even if you still need artificial lighting too. It would’ve been a better deal back when incandescent lights were a thing.
While it’s not exactly economical yet, there are some compressed air energy storage (CAES) installations being built now. Combining a water-compensated CAES system with solar-thermal power cuts out some intermediate conversions, so you get better efficiency and lower cost, but potentially with more electricity transmission requirements.
Solar receivers on the tower are somewhat expensive, but there are 2 ways to mitigate that.
A boiling fluid means that you don’t have to worry about heating the receiver evenly; sodium metal is sometimes proposed for that.
If you have a fluid with black particles in it, and transparent tubes (eg fused quartz) you can use a “direct absorption solar receiver” which isn’t limited by heat transfer through metal walls, allowing for higher power density.
Heliostat costs have come down by >2x, but that has nothing to do with the “learning curves” finance types like to point at; it’s just a matter of how much time smart people spent thinking about them. Steam turbines are expensive, but maybe you use CO2 instead and get turbines 700x smaller, but then heat exchangers are too expensive, so maybe you use supercritical ethane instead for lower pressure, or a different thermodynamic cycle entirely, or a thermal energy storage system with less-corrosive stuff that allows for cheaper materials, or something. Cost estimates from historical data don’t mean anything without that kind of context. Even when people aren’t using fundamentally new designs or technology, the costs of large construction projects vary greatly. Predicting the future is always extrapolation, and historical data is only useful as grounding for the parameters used in that extrapolation; with no technical understanding you’re walking blind, and MBAs are liable to trip on a rock.
Anyway, as I’ve said before, it’s possible to make power-tower solar-thermal cheaply enough to sometimes be worthwhile in sunny locations. I could get into details of designs I like, but haven’t I posted enough on my blog already?
So I’ve worked as an analyst or consultant for the past ~12 years, and made (and read) many such analyses, most of them a lot less technical than the ones discussed here. They’re all well within “All models are wrong but some are useful” territory.
When I read ones that are very technical and use a lot of data, they’re a great source of what assumptions I should use in my own thinking, but they tend to overlook something critical that makes the final output much less useful than the inputs and intermediate results. Like assuming automotive OEMs will pay aerospace prices for a material instead of using a cheaper grade. Or using a linear approximation for the impact of vehicle weight on MPG that implied a weightless car would only save 1⁄3 of the fuel of a normal one. Or assuming some comparison metric won’t also be undergoing iterative change and improvement over time. And so on.
That’s pretty much the standard explanation of what learning curves are, abstracted away from the specifics of a given process/product/industry.
But to your specific question here: I would definitely like to see more experimentation with solar thermal, especially for things like industrial process heat. Seems underexplored.
On electricity generation though, I think there are a few factors that make me think it’s unlikely to compete well with PV.
Electrical energy storage costs are falling. Realistically Li-ion will be <$100/kWh by ~2030, and they have much higher round trip efficiencies (>95%). I doubt vanadium flow or sodium ion or anything else will be at that kind of scale by then, but those could bring it even lower or limit how much costs spike with rising Li demand. We’ll also be building a lot of them no matter what, for EVs, home batteries, and the like, and many of those will interface with the grid fairly intelligently. We’re already starting to see some utilities install a few hours of battery backup, because they can already be more cost competitive than gas peaker plants.
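As a rough check on what that means for costs, here's a levelized-storage sketch; the pack price and round-trip efficiency are the figures above, while the overhead multiplier, cycle life, and depth of discharge are assumptions I'm making for illustration:

```python
# Rough levelized cost of storage for grid batteries.
pack_cost = 100.0   # $/kWh of capacity (the ~2030 figure above)
overhead = 2.0      # multiplier for inverters, install, etc. (assumed)
cycles = 4000       # full cycles over the system life (assumed)
depth = 0.9         # average depth of discharge (assumed)
rte = 0.95          # round-trip efficiency (the figure above)

adder = (pack_cost * overhead) / (cycles * depth * rte)
print(f"storage adder: ~${adder:.3f}/kWh delivered")
# -> ~$0.06/kWh on top of generation cost, ignoring financing and O&M.
```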
As much as there’s ridiculous overhype about hydrogen, we’re likely to be making a lot of it in the future, because some applications will need hydrocarbons either for chemical feedstocks or for liquid fuels: eSAF, methanol for marine fuel, ammonia production. This means we’ll be way overbuilding renewable energy generation relative to immediate electricity demand on average, which will make it easier to deal with periods of less-than-optimal production by using demand response from electrolyzers. In principle these carriers can also be used to effectively ship solar power long distances using the same kinds of tanker and pipeline infrastructure we have today, but I doubt much of that capability will be used for electricity production except maybe in remote areas.
Fundamentally silicon has a near-optimal bandgap for PV and had a headstart because of its abundance and use in other electronics, and other technologies (III-V, CdTe, OPV, etc.) all have/had glaring weaknesses. I’m pretty hopeful about perovskites in a way I never was about those others. I think if you look back on this post in 2030 you might find we’re in a world of solar cells that are 1⁄4 the current price, have more stable output across different temperatures and light levels and under indirect light (and so produce more hours/day), weigh much less, and are multi-junction with overall efficiency >30% and plausibly >40%. The physics nerd in me hopes that someone will figure out cheap metamaterial waveguides that let us make thin film multijunction concentrated PV which would easily get us to much higher efficiencies, but I have no sense of a timeline for something like that.
As for your point about high electricity prices not holding things back, I think you might not be thinking the counterfactual through enough. In a world where electricity prices were 1/10th as high and mostly from renewables, what else would change over the following 10-30 years as people make choices based on this new information? A lot of things! All of a sudden:
Buying 40 kWh of electricity looks a lot better than burning a gallon of gasoline or the equivalent amount of natural gas in your car or factory (rough per-mile arithmetic in the sketch below).
It makes a lot of sense for houses even in more extreme climates to be built or renovated with air source heat pumps instead of furnaces.
Large-scale desalination and indoor agriculture start looking affordable enough to help a lot of people improve their quality of life, improve access to a varied diet of fresh foods, reduce the need to damage ecosystems to expand agricultural output, and improve our civilizational resilience to climate change.
Direct-air carbon capture stops looking absurdly expensive (or at least reduces to a capex problem of the kind that industry and engineering regularly overcome with normal kinds of efforts).
Extracting valuable minerals from waste (or seawater, etc.) becomes viable even if the process is energy intensive.
Data center operating costs fall by 50-60%.
Those are just top of mind. Markets assign prices, but costs come from atoms, joules, time spent, and ideas. Drop one of the input costs to almost zero, and that ripples through everything else by changing the tradeoffs.
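On the first item above, the rough per-mile arithmetic, with assumed round numbers for prices and efficiencies:

```python
# Gasoline vs. very cheap electricity for driving. All inputs assumed.
gas_price = 3.50        # $/gal (assumed)
ice_mpg = 30.0          # typical ICE car (assumed)
ev_mi_per_kwh = 3.5     # typical EV (assumed)
elec_price = 0.013      # $/kWh: ~1/10 of a typical US retail rate,
                        # i.e. the counterfactual above

miles = ice_mpg                    # miles from one gallon
ev_kwh = miles / ev_mi_per_kwh     # electricity for the same distance
print(f"{miles:.0f} mi on gasoline:    ${gas_price:.2f}")
print(f"{miles:.0f} mi on electricity: ${ev_kwh * elec_price:.2f} ({ev_kwh:.1f} kWh)")
# -> roughly $3.50 vs. $0.11 per 30 miles, a ~30x gap in fuel cost.
```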
First off, I’d like to say that we probably mostly agree. But...
There’s a big difference between curves based on “time smart people spent thinking” and curves based on “money spent”. That was my point.
No, I don’t think so. Maybe you were looking at BloombergNEF prices, but those were heavily weighted towards subsidized Chinese batteries for domestic vehicles. US battery pack prices were ~2x the BloombergNEF global average a couple years ago, and Tesla charged $265/kWh for their grid storage without supporting infrastructure.
Also, LiFePO4 lifetime is overstated: cycling and calendar-life degradation interact, because cycling cracks the anode SEI.
Water electrolysis requires slightly bigger subsidies than the IRA ones, which were just meant for bootstrapping. Natural gas will continue to be where hydrogen comes from.
Capital costs are bigger than the electricity costs. I wrote this.
Per above, that doesn’t make sense.
You mean, for dimethyl ether? For marine diesel engines? If anything, that makes more sense for diesel trucks because particulate pollution is a bigger problem with those.
Huh? CdTe works fine, it’s been used on a large scale, it’s just not quite as good as Si. And the good perovskites are unstable, and I don’t see that changing. If anything multilayer Si/CdTe seems more likely.
Sure, split-spectrum concentrated solar seems appealing in some ways, but it’s just not happening.
This decision is based on residential electricity prices. And again, California charges 10x the cost of production.
Even there, capital costs > electricity costs, tho there is some tradeoff between them. And desalination is already feasible.
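For scale, with typical published SWRO numbers, which I’m treating as rough assumptions here:

```python
# Seawater reverse osmosis: how much of the water cost is electricity?
energy_kwh_per_m3 = 3.5   # modern SWRO electricity use, roughly 3-4 (assumed)
total_cost_per_m3 = 0.80  # all-in cost of desalinated water, $/m^3 (assumed)
elec_price = 0.05         # $/kWh, cheap industrial power (assumed)

elec_cost = energy_kwh_per_m3 * elec_price
print(f"electricity: ${elec_cost:.2f}/m^3 of ~${total_cost_per_m3:.2f}/m^3 "
      f"total ({elec_cost / total_cost_per_m3:.0%})")
# -> ~$0.18/m^3, ~22% of total: even free electricity wouldn't halve
#    the cost of the water.
```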
No, they really don’t.
Also to add:
I am not at liberty to share some of the details, but I’ve seen 3rd party accelerated testing data showing perovskites from some companies stable with expected 20+ yr module lifetimes, and real-world multi-year testing data with very little efficiency loss. In addition, while the nameplate efficiencies are definitely lower, they have much more stable output under a wider range of weather conditions, which in many climates would result in greater total kWh output per day/week/year, spread more evenly throughout the day, compared to Si.
And yes, CdTe works fine at scale, but it’s not something we’re ever going to scale to TWp/year; there just isn’t enough cadmium and tellurium we can readily mine. And to get to an actually decarbonized world, we’re going to need to increase total electricity production several fold, and then a few times more as more countries become more developed, so multiple TWp/yr is where we’re going to need to be. We’re already above 1 TWp/yr of silicon PV production capacity, mostly in China.
As for batteries, it sounds like you’re talking about prices, but I’m talking about costs. I really don’t care what Tesla charges, I care what it’s going to cost them and the next dozen manufacturers to make batteries as they scale production. And yes, I’m aware of the NMC/LFP differences. I still think we’re going to see more and more shifting towards LFP, and those problems continuing to get less severe.
I should also add: I know residential scale solar tends to be extremely inefficient from a balance-of-plant and installation labor cost perspective, but I just don’t see a way to maintain a 10x difference between production cost and retail price in a place with net metering laws, a planned ban on ICE vehicles, lots of sunshine and stable weather through the year in many regions, and very high overall housing costs that make financing home improvements seem much less onerous proportionally. Not for the long term, anyway.
Hmm, I don’t see how that could be the case unless you’re talking about a greater total area, and as you probably know, support structures + land cost more than the actual solar panels these days, so lower efficiency for lower panel cost is a bad deal. (If it even would actually be cheaper per unit of output, and I have some doubts.)
Oh, that’s what you meant? Yeah.
You can look at the economics of some Li-ion battery producers. The margins aren’t huge.
I said LiFePO4, which is LFP.
Heh, I agree—which is why I don’t think the net metering will stay.
Whether or not it checks out in the real world, it’s possible because PV conversion efficiencies are not constant. They’re a function of things including temperature, light level, direct vs indirect light, and incident light angle (even with antireflective coatings).
The power output from Si PV falls off quite a bit at high temperature, partial shade, or less direct light. Some semiconductors have much lower efficiency penalties under these conditions. So your Si might be, say, 22% efficient on a clear but temperate summer day at noon, and get you 220 W/m^2. But it’s less than 22% efficient outside of the ~5 peak hours of daylight, or when the temperature of the panels rises above ~25C, or in winter.
So, an idealized panel that had a constant 16% efficiency all day, in all weather and all seasons, could make up for producing less power at noon by producing more power at 7am-10am and 5pm-8pm, and when there are some clouds, and when it’s very hot out, and in winter.
(Every time I think about this it reminds me of how in the 90s we compared CPUs on their clock speeds, and then the metric stopped making sense as we got better and more varied architectures and multi-core systems and such. The headline efficiency number just isn’t the only relevant point on a very multidimensional graph).
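Here’s a toy version of that comparison. The irradiance profile and derating curves are invented, so treat it as an illustration of the tradeoff, not a prediction: on a clear, cool day the peaky panel wins handily, while on a hot, hazy day the flat-efficiency panel roughly catches up.

```python
import math

# Peaky 22%-nameplate Si panel (with low-light and temperature derating)
# vs. an idealized panel with a flat 16% efficiency. All curves invented.

def clear_sky(hour):
    """W/m^2 on the panel: crude half-sine from 6:00 to 20:00."""
    return 1000.0 * math.sin(math.pi * (hour - 6) / 14) if 6 <= hour <= 20 else 0.0

def si_power(g, ambient_c):
    """Si output in W/m^2: 22% at 1000 W/m^2 and 25C cell temperature,
    derated in weak light and when the cell runs hot."""
    if g <= 0:
        return 0.0
    low_light = (g / (g + 150.0)) / (1000.0 / 1150.0)   # normalized to 1 at 1 sun
    cell_temp = ambient_c + g / 30.0                    # crude NOCT-style rise
    thermal = 1.0 - 0.004 * max(0.0, cell_temp - 25.0)  # -0.4%/C coefficient
    return g * 0.22 * low_light * thermal

for label, haze, ambient in [("clear, 20C", 1.00, 20.0),
                             ("hazy, 35C ", 0.45, 35.0)]:
    gs = [haze * clear_sky(h) for h in range(24)]
    si = sum(si_power(g, ambient) for g in gs)
    flat = sum(g * 0.16 for g in gs)
    print(f"{label}: peaky 22% -> {si:.0f} Wh/m^2/day, flat 16% -> {flat:.0f} Wh/m^2/day")
```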
I also think we probably mostly agree. But to be clear, as I understand it, experience curves for production aren’t based on money spent, they’re based on cumulative units of product ever made.
And you’re obviously right about capex today and in the near future. My central point is that capex for any application is something that we should expect to fall over time with iteration and scaling, because that’s what industries do. But we’ll never get started if opex is so high no one bothers.
And no, I meant methanol for marine fuel, because of the rising orders for dual-fueled ships, ports talking about becoming methanol hubs, and projects being announced to make methanol for this purpose.
Eh, I’ve seen both. It doesn’t really matter here, right?
I don’t think that perspective makes sense if you consider the economy as a whole. Most opex is someone else’s capex, and capex depreciation is sort of opex. I don’t think raw material costs have become relatively more important over time vs processing costs, either.
Huh, that is a thing. But it’s a smaller thing than LNG fuel for ships, which makes sense, because LNG is economically better, with higher conversion cost but maybe half the fuel cost of methanol. I suspect methanol fuel is more of a cheap hedge against potential EU regulations. If it actually gets bought as fuel, it would probably be Chinese methanol made from coal, and meanwhile they’d be proclaiming their readiness for e-fuels, lol.
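Roughly, per unit of energy, with assumed prices (both fluctuate a lot):

```python
# Marine fuel cost per GJ, LNG vs. methanol, at assumed spot prices.
fuels = {
    # name: ($/tonne price (assumed), GJ/tonne lower heating value)
    "LNG":      (500.0, 50.0),   # ~$10/MMBtu equivalent
    "methanol": (350.0, 19.9),
}
for name, (price, lhv) in fuels.items():
    print(f"{name}: ${price / lhv:.1f}/GJ")
# -> LNG ~$10/GJ vs methanol ~$17.6/GJ at these prices, before
#    engine-efficiency differences.
```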
True, but one is a much closer proxy for time-spent-thinking-about-it and real-world-feedback-obtained than the other.
Also true, but if you’re starting from an assumption that something is infeasible because its total cost is high, and capex is the biggest but not overwhelmingly the biggest component of that, then dramatically reducing the price of the non-capex component reduces the problem from “This makes no economic sense whatsoever, and it isn’t something our industry can fix on its own anyway,” to “Anyone who manages to get capex way down can disrupt this.” It’s removing a systemic constraint on the value and usefulness of other innovations.