Agreed. Presence or absence of debate on an issue gives information about a nation’s culture, but very little about how hard it is to discover the facts of the matter. This is especially true in matters of social science, where the available evidence is never going to be strong enough to convince someone who has already made up his mind.
Wow, look at all the straw men. Is there an actual reasoned position in there among the fashionable cynicism? If so, I can’t find it.
One of the major purposes of Less Wrong is allegedly the promotion of more rational ways of thinking among as large a fraction of the general population as we can manage to reach. Finding better ways to think clearly about politics might be an especially difficult challenge, but popularizing the results of such an attempt isn’t necessarily any harder than teaching people about the sunk cost fallacy.
But even if you think raising the level of public discourse is hopeless, being able to make accurate predictions of your own can also be quite valuable. Knowing things like “the Greens’ formula for winning elections forces them to drive any country they control into debt and financial collapse”, or “the Blues hate the ethnic group I belong to, and will oppress us as much as they can get away with” can be rather important when deciding where to live and how to manage one’s investments, for example.
I tend to agree with your concern.
Discussing politics is hard because all political groups make extensive use of lies, propaganda and emotional appeals, which turns any debate into a quagmire of disputed facts and mind-killing argument. It can be tempting to dismiss the whole endeavor as hopeless and ignore it while cynically deriding those who stay involved.
Trouble is, political movements are not all equal. If they gain power, some groups will use it to make the country wealthy so they can pocket a cut of the money. Others will try to force everyone to join their religion, or destroy the economy in some wacky scheme that could never have worked, or establish an oppressive totalitarian regime and murder millions of people to secure their position. These results are not equal.
So while it might be premature to discuss actual political issues on Less Wrong, searching for techniques to make such discussions possible would be a very valuable endeavor. Political trends affect the well-being of hundreds of millions of people in substantial ways, so even a modest improvement in the quality of discourse could have a large payoff. At the very least, it would be nice if we could reliably identify the genocidal maniacs before they come to power...
Actually, I see a significant (at least 10%) chance that the person currently known as Quirrell was both the ‘Light Lord’ and the Dark Lord of the last war. His ‘Voldemort’ persona wasn’t actually trying to win, you see, he was just trying to create a situation where people would welcome a savior...
This would neatly explain the confusion Harry noted over how a rational, inventive wizard could have failed to take over England. It leaves open some questions about why he continued his reign of terror after that ploy failed, but there are several obvious possibilities there. The big question would be what actually happened to either A) stop him, or B) make him decide to fake his death and vanish for a decade.
If you agree that a superhuman AI is capable of being an existential risk, that makes the system that keeps it from running amok the most safety-critical piece of technology in history. There is no room for hopes or optimism or wishful thinking in a project like that. If you can’t prove with a high degree of certainty that it will work perfectly, you shouldn’t turn it on.
Or, to put it another way, the engineering team should act as if they were working with antimatter instead of software. The AI is actually a lot more dangerous than that, but giant explosions are a lot easier for human minds to visualize than UFAI outcomes...
Human children respond to normal child-rearing practices the way they do because of specific functional adaptations of the human mind. This general principle applies to everything from language acquisition to parent-child bonding to acculturation. Expose a monkey, dog, fish or alien to the same environment, and you’ll get a different outcome.
Unfortunately, while the cog sci community has produced reams of evidence on this point, they’ve also discovered that said adaptations are very complex, and mapping out in detail what they all are and how they work is turning out to be a long research project. Partial results exist for a lot of intriguing examples, along with data on what goes wrong when different pieces are broken, but it’s going to be a while before we have a complete picture.
An AI researcher who claims his program will respond like a human child is implicitly claiming either that this whole body of research is wrong (in which case I want to see evidence), or that he’s somehow implemented all the necessary adaptations in code despite the fact that no one knows how they all work (yeah, right). Either way, this isn’t especially credible.
As an explanation for a society-wide shift in discourse, that seems quite implausible. If such a change has actually happened, the cause would most likely be some broad cultural or sociological change that took place within the same time frame.
Yes, it’s very similar to the problem of designing a macroscopic robot that can out-compete natural predators of the same size. Early attempts will probably fail completely, and then we’ll have a few generations of devices that are only superior in some narrow specialty or in controlled environments.
But just as with robots, the design space of nanotech devices is vastly larger than that of biological life. We can easily imagine an industrial ecology of Von Neumann machines that spreads itself across a planet exterminating all large animal life, using technologies that such organisms can’t begin to compete with (mass production, nuclear power, steel armor, guns). Similarly, there’s a point of maturity at which nanotech systems built with technologies microorganisms can’t emulate (centralized computation, digital communication, high-density macroscopic energy sources) become capable of displacing any population of natural life.
So I’d agree that it isn’t going to happen by accident in the early stages of nanotech development. But at some point it becomes feasible for governments to design such a weapon, and after that the effort required goes down steadily over time.
The theory is that Drexlerian nanotech would dramatically speed up progress in several technical fields (biotech, medicine, computers, materials, robotics) and also dramatically speed up manufacturing all at the same time. If it actually works that way the instability would arise from the sudden introduction of new capabilities combined with the ability to put them into production very quickly. Essentially, it lets innovators get inside the decision loop of society at large and introduce big changes faster than governments or the general public can adapt.
So yes, it’s mostly just quantitative increases over existing trends. But it’s a bunch of very large increases that would be impossible without something like nanotech, all happening at the same time.
Now you’re just changing the definition to try to win an argument. An xrisk is typically defined as one that, in and of itself, would result in the complete extinction of a species. If A causes a situation that prevents us from dealing with B when it finally arrives, the xrisk is B, not A. Otherwise we’d be talking about poverty and political resource allocation as critical xrisks, and the term would lose all meaning.
I’m not going to get into an extended debate about energy resources, since that would be wildly off-topic. But for the record I think you’ve bought into a line of political propaganda that has little relation to reality—there’s a large body of evidence that we’re nowhere near running out of fossil fuels, and the energy industry experts whose livelihoods rely on making correct predictions mostly seem to be lined up on the side of expecting abundance rather than scarcity. I don’t expect you to agree, but anyone who’s curious should be able to find both sides of this argument with a little googling.
Yes, and that’s why you can even attempt to build a computer model. But you seem to be assuming that a climate model can actually simulate all those processes on a relatively fundamental level, and that isn’t the case.
When you set out to build a model of a large, non-linear system you’re confronted with a list of tens of thousands of known processes that might be important. Adding them all to your model would take millions of man-hours, and make it so big no computer could possibly run it. But you can’t just take the most important-looking processes and ignore the rest, because the behavior of any non-linear system tends to be dominated by unexpected interactions between obscure parts of the system that seem unrelated at first glance.
So what actually happens is you implement rough approximations of the effects the specialists in the field think are important, and get a model that outputs crazy nonsense. If you’re honest, the next step is a long process of trying to figure out what you missed, adding things to the model, comparing the output to reality, and then going back to the drawing board again. There’s no hard, known-to-be-accurate physics modeling involved here, because that would take far more CPU power than any possible system could provide. Instead it’s all rules of thumb and simplified approximations, stuck together with arbitrary kludges that seem to give reasonable results.
Or you can take that first, horribly broken model, slap on some arbitrary fudge factors to make it spit out results the specialists agree look reasonable, and declare your work done. Then you get paid, the scientists can proudly show off their new computer model, and the media will credulously believe whatever predictions you make because they came out of a computer. But in reality all you’ve done is build an echo chamber—you can easily adjust such a model to give any result you want, so it provides no additional evidence.
In the case of nuclear winter there was no preexisting body of climate science that predicted a global catastrophe. There were just a couple of scientists who thought it would happen, and built a model to echo their prediction.
An uncalibrated sim will typically give crazy results like ‘increasing atmospheric CO2 by 1% raises surface temperatures by 300 degrees’ or ‘one large forest fire will trigger a permanent ice age’. If you see an uncalibrated sim giving results that seem even vaguely plausible, this means the programmer has tinkered with its internal mechanisms to make it give those results. Doing that is basically equivalent to just typing up the desired output by hand—it provides evidence about the beliefs of the programmer, but nothing else.
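To make that concrete, here is a deliberately trivial sketch (hypothetical toy code, not any real climate model): once a model contains a free parameter that gets tuned to reproduce the answer the modeler already expects, its output carries no information beyond that expectation.

    # Hypothetical toy example: a 'model' with a free fudge factor that is
    # tuned to reproduce whatever answer the modeler already expects,
    # rather than fitted against independent measurements.

    def toy_warming(co2_increase_pct, sensitivity):
        """One-line stand-in for a climate model: warming = sensitivity * CO2 change."""
        return sensitivity * co2_increase_pct

    def tune_to_expected_answer(expected_warming, co2_increase_pct):
        """Choose the sensitivity that makes the model echo the expected result."""
        return expected_warming / co2_increase_pct

    # Whatever number the modeler believed in advance, the 'model' dutifully confirms it.
    for expected in (0.5, 2.0, 6.0):
        s = tune_to_expected_answer(expected, co2_increase_pct=10.0)
        print(toy_warming(10.0, s))  # prints 0.5, 2.0, 6.0

The point is not that real models are this crude, only that a parameter calibrated against the desired conclusion rather than against independent data turns the model into an echo of its author.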
Exactly.
I think the attitudes of most experts are shaped by the limits of what they can actually do today, which is why they tend not to be that worried about it. The risk will rise over time as our biotech abilities improve, but realistically a biological xrisk is at least a decade or two in the future. How serious the risk becomes will depend on what happens with regulation and defensive technologies between now and then.
This is a topic I frequently see misunderstood, and as a programmer who has built simple physics simulations I have some relevant expertise, so perhaps I should elaborate.
If you have a simple, linear system involving math that isn’t too CPU-intensive you can build an accurate computer simulation of it with a relatively modest amount of testing. Your initial attempt will be wrong due to simple bugs, which you can probably detect just by comparing simulation data with a modest set of real examples.
But if you have a complex, non-linear system, or just one that’s too big to simulate in complete detail, this is no longer the case. Getting a useful simulation then requires that you make a lot of educated guesses about what factors to include in your simulation, and how to approximate effects you can’t calculate in any detail. The probability of getting these guesses right the first time is essentially zero—you’re lucky if the behavior of your initial model has even a hazy resemblance to anything real, and it certainly isn’t going to come within an order of magnitude of being correct.
The way you get to a useful model is through a repeated cycle of running the simulator, comparing the (wrong) results to reality, making an educated guess about what caused the difference, and trying again. With something relatively simple like, say, turbulent fluid dynamics, you might need a few hundred to a few thousand test runs to tweak your model enough that it generates accurate results over the domain of input parameters that you’re interested in.
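As a minimal sketch of that cycle (assuming a toy linear model in place of a real simulator; every name here is illustrative), the loop looks something like this:

    # Toy calibration loop: run the model, compare its output to measured data,
    # nudge the uncertain parameters, and repeat until the fit is acceptable.

    def run_model(params, inputs):
        """Stand-in for an expensive simulation; returns predicted observations."""
        return [params["gain"] * x + params["offset"] for x in inputs]

    def calibrate(params, inputs, measured, steps=5000, lr=0.01):
        for _ in range(steps):
            predicted = run_model(params, inputs)
            errors = [p - m for p, m in zip(predicted, measured)]
            # Educated-guess update: adjust each parameter against the residuals.
            params["gain"] -= lr * sum(e * x for e, x in zip(errors, inputs)) / len(inputs)
            params["offset"] -= lr * sum(errors) / len(inputs)
        return params

    inputs = [1.0, 2.0, 3.0, 4.0]       # conditions you can actually test
    measured = [2.9, 5.1, 7.0, 9.2]     # real-world results for those conditions
    print(calibrate({"gain": 0.0, "offset": 0.0}, inputs, measured))

The loop only works because the measured data exists; without real-world results to compare against, the parameters never stop being guesses.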
If you can’t run real-world experiments to generate the phenomena you’re interested in, you might be able to substitute a huge data set of observations of natural events. Astronomy has had some success with this, for example. But you need a data set big enough to encompass a representative sample of all the possible behaviors of the system you’re trying to simulate, or else you’ll just get a ‘simulator’ that always predicts the few examples you fed it.
So, can you see the problem with the nuclear winter simulations now? You can’t have a nuclear war to test the simulation, and our historical data set of real climate changes doesn’t include anything similar (and doesn’t contain anywhere near as many data points as a simulator needs, anyway). But global climate is a couple of orders of magnitude more complex than your typical physics or chemistry sims, so the need for testing would be correspondingly greater.
The point non-programmers tend to miss here is that lack of testing doesn’t just mean the model is a little off. It means the model has no connection at all to reality, and either outputs garbage or echoes whatever result the programmer told it to give. Any programmer who claims such a model means something is committing fraud, plain and simple.
“We’re not sure if we could get back to our current tech level afterwards” isn’t an xrisk.
It’s also purely speculative. The world still has huge deposits of coal, oil, natural gas, oil sands and shale oil, plus large reserves of half a dozen more obscure forms of fossil fuel that have never been commercially developed because they aren’t cost-competitive. Plus there’s wind, geothermal, hydroelectric, solar and nuclear. We’re a long, long way away from the “all non-renewables are exhausted” scenario.
You don’t think freedom of speech, religion and association are important things for a society to defend? Well, in that case we don’t have much to talk about.
I will, however, suggest that you might do well to spend some time thinking about what your ideal society will be like after the principle that society (i.e. government) can dictate what people say, think and do to promote the social cause of the day becomes firmly entrenched. Do you really think your personal ideology will retain control of the government forever? What happens if a political group with views you oppose gets in power?
Well, 500 years ago there was plenty of brutal physical oppression going on, and I’d expect that kind of thing to have lots of other negative effects on top of the first-order emotional reactions of the victims.
But I would claim that if you did a big brain-scan survey of, say, Western women from 1970 to the present, you’d see very little correlation between their subjective feeling of oppression and their actual treatment in society.
Such a mechanism may be desirable, but it isn’t necessary for the existence of cities. There are plenty of third world countries that don’t bother with licensing, and still manage to have major metropolises.
But my point was just that when people talk about ‘trades and crafts on which the existence of the modern city depends’ they generally mean carpenters, plumbers, electricians and other hands-on trades, not clerks and bureaucrats.
The reason the life sciences are resistant to regulation is at least partly that researchers know killer plagues are several orders of magnitude harder to make than Hollywood would like you to think. The biosphere already contains billions of species of microorganisms evolving at a breakneck pace, and they haven’t killed us all yet.
An artificial plague has no special advantages over natural ones until humans get better at biological design than evolution, which isn’t likely to happen for a couple of decades. Even then, plagues with 100% mortality are just about impossible—turning biotech from a megadeath risk to an xrisk requires a level of sophistication that looks more like Drexlerian nanotech than normal biology.
I think you have a point here, but there’s a more fundamental problem—there doesn’t seem to be much evidence that gun control affects the ability of criminals to get guns.
The problem here is similar to prohibition of drugs. Guns and ammunition are widely available in many areas, are relatively easy to smuggle, and are durable goods that can be kept in operation for many decades once acquired. Also, the fact that police and other security officials need them means that they will continue to be produced and/or imported into an area with even very strict prohibition, creating many opportunities for weapons to leak out of official hands.
So gun control measures are much better at disarming law-abiding citizens than criminals. Use of guns by criminals does seem to drop a bit when a nation adopts strict gun control policies for a long period of time, but the fact that the victims have been disarmed also means criminals don’t need as many guns. If your goal is disarming criminals it isn’t at all clear that this is a net benefit.