I see your distinction, but how much slack you think you have is necessarily a judgement based on how demanding your environment is. People mis-estimate that regularly, and it changes over time anyway. If it feels wrong to have available resources that you’re not using, it may be that you just need to lighten up. It also may be that you’re correctly, if not necessarily consciously, perceiving that your environment is competitive and you actually do need to buckle down (or go somewhere else). These are different problems with different solutions but similar symptoms.
Thanks to Scott Alexander, people on this blog typically use the term Moloch for the antithesis of slack. Moloch is a dynamic where intense competition forces everyone to spend all available resources to have a chance (not a guarantee) of success.
If one person is talking analytically and the other is talking about meaning-making, then you’re each trying to have different conversations. One of you is talking about how to do something and the other is talking about how to motivate people to do something. If at all possible you should let the first person lead; if they’re diligently working on the problem then they’re motivated enough.
To consider your support team example: they seem to be assuming that if their product works well, customers will be satisfied. That’s not a terrible strategy, and it puts the focus on something they can control (the product). If you could point to something else about the customer experience that’s causing customer dissatisfaction, they would probably understand the problem and deal with it. But if there’s nothing specific that needs addressing otherwise, it’s probably best just to let them focus on getting the instrument to work as well as possible.
And of course, maximizing customer satisfaction is itself a strategy toward achieving your real goal, which is profit. Companies don’t give their flagship products away for free*, no matter how much it would please the customers.
*With the exception of some loss leaders that are carefully calculated to grow revenues over the long term.
I think part of it is that contracts are mostly interpreted by trained humans. A computer executes every line of code in sequence, giving each one equal attention. A human can look at a paragraph of standard legal language, recognize that it does the standard thing, and move on in a second or so; a paragraph of non-standard language makes the human stop and think, which is much slower and often causes anxiety.
Even better, there are usually many court cases establishing exactly how the standard language should be interpreted in a wide variety of circumstances, which makes the standard language much more predictable and reliable. In software terms, it has already been debugged.
I’m not a gambler by temperament, so I’m just not very interested in betting.
In each of those cases, what worked was a fundamentally new approach. We didn’t breed leeches to the point where they could cure smallpox. Photovoltaics have been around since the 50s; if they were going to work at scale they’d have worked by now.
I think we’ve uncovered the basic disagreement and further discussion seems pointless.
We’ve been trying to make solar work for a very long time. I can remember when there were solar panels on the White House roof (Reagan had them removed). Things that have underperformed for decades almost never take off.
Since my side of the bet implies that the internet is not likely to exist by 2040 and I’d never find you if I won, this bet is not appealing. It is not possible to take a short financial position on civilization. However, if settlement could be arranged and the stakes weren’t chump change, in principle I’d take the bet.
Everything you’re saying fits the common narrative; I just think there’s a roughly 80 percent chance that it’s wrong.
I invite you to look at the Sankey diagrams for the US last year (2019). Despite decades of heavy subsidies, solar power generated only 1.04 percent of the energy we used. Spain scaled up solar as much as they could, and despite significant advantages (sunny climate, lack of hurricanes) they only managed an EROEI (energy returned on energy invested) of 2.45 (for comparison, some estimates put the minimum EROEI for civilization as we know it at about 8-10, although optimists go as low as 3). Solar power has been ten years away for at least fifty years now, and it’s starting to look like it always will be.
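To make those numbers concrete, here is a minimal sketch (using only the EROEI figures quoted above) of how much net energy each level actually leaves for the rest of society:

```python
# Net energy implied by an EROEI figure: of each unit of gross
# output, 1/EROEI goes back into producing the energy itself,
# leaving 1 - 1/EROEI for everything else.

def net_energy_fraction(eroei: float) -> float:
    """Fraction of gross energy output left for the rest of society."""
    return 1.0 - 1.0 / eroei

for label, eroei in [
    ("Spanish solar", 2.45),
    ("civilization minimum, optimistic", 3.0),
    ("civilization minimum, common estimate", 8.0),
]:
    print(f"{label}: EROEI {eroei:.2f} -> {net_energy_fraction(eroei):.0%} net")
```

At an EROEI of 2.45, roughly 40 percent of everything the panels generate goes straight back into making and maintaining them, versus about 12 percent at an EROEI of 8.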
Nuclear power is more realistic, as you noted; it generated 8.46 percent of our energy last year. Still, the ability to scale that up to 100% is questionable. Fission power requires uranium and other scarce materials, and scaling it up by an order of magnitude raises real supply questions. Fusion is great at generating neutrons* and high-level radioactive waste (produced when those neutrons activate the surrounding materials), but I’ve never heard of it coming anywhere near breakeven (EROEI=1) in energy terms (unless you count solar).
*There are aneutronic reactor proposals, but they’re pretty unrealistic even by fusion energy standards.
It seems like what you’re calling “progress studies” is what was called “modern history” until about 1960 and is derisively termed “Whig history” in the field these days. The basic premise is that material wealth went exponential in Europe starting around the 17th century, that this process (called “progress”) gave Europe the means to travel to and dominate the rest of the world, and that the central questions of modern history are what happened to initiate this “progress”, how it works, whether it will continue, and what forms it will take. Despite the change in academic fashions, these questions remain crucially important.
I tend to agree with what you call the “materialist” position. A barrel of oil has more energy than a decade of manual labor; without fossil fuels it is expensive to smelt metals and all but impossible to make useful semiconductors. Progress as we know it today is entirely dependent on metal (e.g. wires) and semiconductor-based computers. In principle nuclear power may be sufficient, but that’s an open question at this point.
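The barrel-of-oil comparison holds up on a back-of-the-envelope check; here is a rough sketch using standard approximations (about 6.1 GJ per barrel, and a laborer sustaining roughly 75 W over a 2,000-hour working year):

```python
# Rough check on the barrel-of-oil claim above. Figures are
# standard approximations, not precise measurements.

BARREL_KWH = 6.1e9 / 3.6e6      # ~1,700 kWh of energy per barrel
LABOR_KW = 0.075                # sustained human power output, kW
HOURS_PER_YEAR = 2000           # full-time working hours per year

labor_kwh_per_year = LABOR_KW * HOURS_PER_YEAR     # ~150 kWh/year
years_per_barrel = BARREL_KWH / labor_kwh_per_year

print(f"One barrel ~ {BARREL_KWH:,.0f} kWh ~ {years_per_barrel:.0f} years of manual labor")
# -> roughly 11 years, consistent with "a decade" above
```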
More to the point, models in which photons behave “realistically” sometimes yield failed predictions (e.g. for the double-slit experiment), while the models that predict consistently well describe photon behavior that seems “unreal” to human intuition (but corresponds to experimentally-observed reality).
I’m not quite sure what you’re going for with the distinction between an “account of meaning” and a “belief”. It seems likely to cause problems elsewhere; language conveys meanings through socially-constructed, locally-verifiable means. A toddler learns from empirical experience what word to use to refer to a cat, but the word might be “kitty” or “gato” or “neko” depending on where the kid lives.
In practice, I suspect it more or less works out like my “inductive rule of thumb”.
On a deductive level, verificationism is self-defeating; if it’s true then it’s meaningless. On an inductive level, I’ve found it to be a good rule of thumb for determining which controversies are likely to be resolvable and which are likely to go nowhere.
This seems like a good place to mention the Bonewits scale (devised by a guy named Bonewits, whose name is perhaps too perfect for this) for evaluating how dangerous a cultlike group is. It rates an organization on 18 criteria like “censorship”, “isolation”, and “dropout control”; higher scores indicate a more dangerous group.
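For flavor, here is a minimal sketch of how such an evaluation might be tallied. The 1-10 rating per criterion matches the published scale, but the specific ratings and the simple average below are my own illustration, not an official scoring rule:

```python
# Hypothetical Bonewits-style tally. Each criterion is rated
# 1 (low) to 10 (high); the ratings here are made up.

ratings = {
    "censorship": 7,
    "isolation": 4,
    "dropout control": 8,
    # ... the remaining 15 criteria would be rated the same way
}

average = sum(ratings.values()) / len(ratings)
print(f"Average danger rating: {average:.1f} / 10")
```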
Let’s add 4: America was fighting in two theaters and the USSR was basically fighting in one (which isn’t to deny that their part of the war was by far the bloodiest). Subduing Japan and supporting the nationalists in China (the predecessors to the Taiwanese government) took enormous amounts of US military resources.
I’d downplay #2: WWII had all kinds of superweapon development programs, from the Manhattan Project to bioweapons to the Bat Bomb. The big secret, the secret that mattered, was which one would work. After V-J day the secret was out and any country with a hundred good engineers could build one, including South Africa. To the extent that nuclear nonproliferation works today, it works because isotope enrichment requires unusual equipment and leaves detectable traces that allow timely intervention.
Dishwashers treating restaurant plates like toxic waste is not based on a risk calculation; it’s based on our moral principles regarding purity.
I agree with most of what you’re saying in the post, but this bit strikes me as a bad example. Used dishes are likely to contain significant amounts of saliva, which is the primary transmission vector of the virus. Spraying dishes with water could easily result in a virus-laden aerosol, and infection through small cuts is also a concern. If you handle dishes from hundreds of people a day, the risk starts adding up. Although I agree that surfaces are rarely a significant concern, it seems that a restaurant dishwasher is a worst-case scenario for transmission by surfaces and extra precautions are justified.
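A minimal sketch of the “risk starts adding up” arithmetic, with a purely hypothetical per-dish infection probability (p below is illustrative, not a measured value):

```python
# Cumulative risk from n independent exposures, each with
# probability p of causing infection: 1 - (1 - p)^n.

def cumulative_risk(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

p = 1e-4          # hypothetical per-dish infection probability
for n in (1, 100, 500):
    print(f"{n} exposures: {cumulative_risk(p, n):.2%}")
# 1 exposure: ~0.01%; 100 exposures: ~1.0%; 500 exposures: ~4.9%
```

Even a tiny per-exposure risk becomes a meaningful daily risk at the volumes a restaurant dishwasher handles.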
I suspect that fresh bread was actually a luxury food at the time, with pottages more common among the poor.
I agree with many of your points, but have a few areas of disagreement that lead me to different conclusions:
There is considerable evidence of permanent lung damage, even in cases with no noticeable symptoms.
A one-time ten percent decrease in lung function will barely inconvenience a 20-year-old. If the same person gets the same disease every year, they won’t live to 30.
The linked article quotes studies indicating potentially permanent lung damage in 77% to 95% of the test subjects.
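A quick sketch of the compounding arithmetic behind that point, assuming a flat ten percent loss of lung function per annual infection starting at age 20:

```python
# Remaining lung function under repeated 10%-per-year losses.
capacity = 1.0
for age in range(20, 31):
    print(f"age {age}: {capacity:.0%} of original lung function")
    capacity *= 0.9
# By age 30 the person is down to roughly 35% of baseline capacity.
```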
The virus is mutating in ways that complicate the development of treatments and vaccines.
Each person infected has a tiny chance of becoming host to a problematic mutation, and passing it on.
The fewer infected people, the less of a problem this will be.
I do not know (at this time) whether we will have a vaccine in a year, or ever. AFAIK we’ve never created a vaccine for a respiratory coronavirus before (we have some veterinary vaccines for intestinal coronaviruses, but not respiratory ones). Some vaccine trials for the related SARS-1 coronavirus made the disease worse, not better.
To me, this adds up to “coronavirus is potentially much more serious than you think, even for young people, and it would be better to be very cautious until the uncertainties are resolved”. I understand that the economy is doing very poorly, but I think the risks, at this time, militate against opening up. I strongly support measures to help those who’ve lost jobs because of the situation, though.
Note: This represents my opinion as of a particular time. As new information comes in, I expect to update my opinion accordingly.
Your baseline mortality rate implies an average life expectancy of 120 years. I’d double-check that source.
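The sanity check here is the rough rule that a constant annual mortality rate p implies a life expectancy of about 1/p years; a minimal sketch:

```python
# Under a constant annual mortality rate p, expected remaining
# lifespan is approximately 1/p years. The 120-year figure above
# corresponds to a constant rate of 1/120, or about 0.83% per year,
# far beyond any real population's life expectancy.

p = 1 / 120
print(f"annual mortality {p:.2%} -> life expectancy ~{1 / p:.0f} years")
```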
Also, COVID-19 can cause permanent lung damage, and possibly damage to other organs, even if people are otherwise asymptomatic. The possibility that many people, now young and with sufficient lung capacity to ignore the damage, may become disabled in 20 years or so is what worries me most.
I’ve always referred to that as the Law of Large Numbers (statisticians sometimes distinguish this as the Law of Truly Large Numbers, since the textbook Law of Large Numbers is about sample averages converging). If there are enough chances, everything possible will happen. For example, it would be very surprising if I won the lottery, but not surprising if someone I don’t know won the lottery.
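With made-up but plausible numbers (the single-ticket odds and ticket volume below are illustrative assumptions, not figures for any real lottery), the asymmetry is easy to see:

```python
# P(my specific ticket wins) vs. P(at least one ticket wins),
# assuming independent tickets with identical odds.

p_win = 1 / 300_000_000      # hypothetical single-ticket odds
tickets_sold = 200_000_000   # hypothetical number of tickets in play

p_someone = 1.0 - (1.0 - p_win) ** tickets_sold
print(f"P(my ticket wins) = {p_win:.2e}")
print(f"P(someone wins)   = {p_someone:.1%}")
# -> my ticket: ~3e-9; someone, somewhere: ~49%
```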