I absolutely love the poem. Unfortunately every reading I’ve ever heard is painfully bad, so maybe it isn’t a great choice for a spoken piece. The exception is this scene from Interstellar: [Trigger warning? Or is it just me?]
I’m a fan of both versions of Ozymandias, but here’s Shelley’s version:
I met a traveller from an antique land
Who said: “Two vast and trunkless legs of stone
Stand in the desert. Near them, on the sand,
Half sunk, a shattered visage lies, whose frown,
And wrinkled lip, and sneer of cold command,
Tell that its sculptor well those passions read
Which yet survive, stamped on these lifeless things,
The hand that mocked them and the heart that fed:
And on the pedestal these words appear:
‘My name is Ozymandias, king of kings:
Look on my works, ye Mighty, and despair!’
Nothing beside remains. Round the decay
Of that colossal wreck, boundless and bare
The lone and level sands stretch far away.”
Ah, that’s the definition about which we were talking past each other. I certainly wouldn’t say that “Reiki might work, and until we test it we just don’t know!” Perhaps it “works” somewhat through the placebo effect, but even in the unlikely event of a placebo-controlled study showing some health benefit, it would still be astronomically unlikely that ki was the mechanism. (That’s not to say that no one would look at the real mechanism after the fact, and try to pick out some superficial similarity to the idea of “ki”.)
But that’s beside the point. For hypotheses that are worth our time to test, we test them precisely because it’s an open question. Until we take the data, it remains an open question (at least for certain definitions of “open question”). I think that’s the point the author was trying to get at with his implausible historical example.
In passing, he gestured vaguely at a vague conception of science. I guess that doesn’t qualify as an argument, so perhaps there is no argument to steelman. But I think that the vague conception of science he was trying to gesture toward does correspond to a real thing that scientists sometimes do.
In the map-territory analogy, this might correspond to a fuzzy or blank region of the map. A scientifically minded person might well say “One reasonable hypothesis is that the Earth is flat, i.e. that the blank region looks like the nearby regions we’ve already mapped, but until we have tools and techniques that can prove or disprove that hypothesis, it is an open question.”
But here’s the idea I think the author was trying to gesture at. In my experience, most people are way too eager to try to solve problems they don’t fully understand. I’ve often heard scientists and engineers caution against this, but the most notable quote is from the rocket scientist Wernher von Braun: “One good test is worth a thousand expert opinions”. I’ve seen people like Bill Nye repeat this, and seen plenty of science-themed reminders that test results are often surprising, since the world is often much more complex than we give it credit for.
As for the historical commentary, I completely agree. The scenario isn’t historically plausible. The scientific revolution would have had to happen earlier just to produce someone capable of saying the quote, and society would have had to somehow go through a scientific revolution without noticing that the earth was round.
Yeah, but if you steel-man it, I think he was trying to make something similar to a map-territory distinction. It’s often useful to make a distinction between the data and our best interpretation of the data. Some conclusions don’t require much extrapolation, but others require a great deal.
On LW we happily discuss ideas across very long inferential distances, and talk about regions of hypothesis space with high densities of unknown unknowns. Most scientists, however, work over much smaller inferential distances, with the intent of meticulously building up a rock-solid body of knowledge. If things are “open questions” until they are above a confidence level of, say, 0.99, then just about everything we discuss here is an open question, as the quote suggests.
Using a historical example which happens to be false just complicates things. If I recall, philosophers first hypothesized a round earth around 600 BCE, but didn’t prove it experimentally until 300 BCE.
Try thinking of it as a case study, not a comprehensive literature review. I didn’t really take anything in there as claiming that if I install Musk’s mental software then I will succeed at anything I try. The author explicitly mentions several times that Musk thought SpaceX was more likely to fail than succeed. Similarly, there’s bits like this:
Likewise, when an artist or scientist or businessperson chef reasons independently instead of by analogy, and their puzzling happens to both A) turn out well and B) end up outside the box, people call it innovation and marvel at the chef’s ingenuity.
It makes a lot more sense if you read it as a case study. He’s positing a bunch of hypotheses, some of which are better worded than others. If you steel-man the ones with obvious holes, most seem plausible. (For example, one of the ones that really annoyed me was the way he worded a claim that older children are less creative, which he blamed on schooling but made no mention of a control group.) But the thing was already pretty long, so I can excuse some of that. He’s just hypothesizing a bunch of qualities that are necessary but not sufficient.
I agree that OP was leaning a bit heavy on the advertising methods, and that advertising is almost 100% appeal to emotion. However, I’m not sure that 0% emotional content is quite right either. (For reasons besides argument to moderation.) Occasionally it is necessary to ground things in emotion, to some degree. If I were to argue that dust specks in 3^^^3 people’s eyes is a huge amount of suffering, I’d likely wind up appealing to empathy for that unfathomably vast amount of suffering. The argument relies almost exclusively on logic, but the emotional content drives the point home.
However, maybe a more concrete example of the sorts of methods EAs might employ will make it clearer whether or not they are a good idea. If we do decide to use some emotional content, this seems to be an effective science-based way to do it: http://blog.ncase.me/the-science-of-social-change/
Aside from just outlining some methods, the author deals briefly with the ethics. They note that children who read the story of George Washington and the cherry tree were inspired to be more truthful, while the threats implicit in Pinocchio and The Boy Who Cried Wolf didn’t motivate them to lie less than the control group. I have no moral problem with showing someone a good role model, and setting a good example, even if that evokes emotions which influence their decisions. That’s still similar to an appeal to emotion, although the Aristotelian scheme the author mentions would classify it as Ethos rather than Pathos. I’m not sure I’d classify it under Dark Arts. (This feels like it could quickly turn into a confusing mess of different definitions for terms. My only claim is that this is a counterexample, where a small non-rational component of a message seems to be permissible.)
It seems worth noting that EAs are already doing this, to some degree. Here are a couple EA and LW superheroes, off of the top of my head:
Norman Borlaug saved a billion lives from starvation by making sweeping improvements in crop yields using industrial agriculture. https://80000hours.org/2011/11/high-impact-science/
Viktor Zhdanov convinced the World Health Assembly, by a margin of only 2 votes, to eradicate smallpox, saving perhaps hundreds of millions of lives. https://80000hours.org/2012/02/in-praise-of-viktor-zhdanov/
Stanislav Petrov Day was celebrated here a bit over a month ago. There are others who arguably averted even closer cold war near-misses, but on days less convenient to make a holiday out of.
One could argue that we should only discuss these sorts of people for how their stories inform the present. However, if their stories also have an aspirational impact, then it seems reasonable to share that. I’d have a big problem if EA turned into a click-maximizing advertising campaign, or launched infomercials; I agree with you there, and there are some techniques which we definitely shouldn’t employ. But some methods besides pure reason legitimately do seem advisable. Guilting someone out of pocket change is significantly different from acquiring new members by encouraging them to aspire to something, and then giving them the tools to work toward that common goal. It’s not all framing.
Guilty. I’ve spent most of my life trying to articulate and rigorously define what our goals should be. It takes an extra little bit of cognitive effort to model others as lacking that sense of purpose, rather than merely having lots of different well-defined goals.
(EDIT, to avoid talking past each other: Not that people don’t have any well defined sub-goals, mind you. Just not well defined terminal values, and well defined knowledge of their utility function. No well-defined answers to Life, The Universe, And Everything.)
This is a good point. Perhaps an alternative target audience to “emotionally oriented donors” would be “Geeks”. Currently, EA is heavily focused on the Nerd demographic. However, I don’t see any major problems with branching out from scientists to science fans. There are plenty of people who would endorse and encourage effectiveness in charities, even if they suck at math. If EA became 99.9% non-math people, it would obviously be difficult to maintain a number-crunching focus on effectiveness. However, this seems unlikely, and compared to recruiting “emotionally-oriented” newbies it seems like there would be much less risk of losing our core values.
Maybe “Better Giving Through SCIENCE!” would make a better slogan than “Be A Superdonor”? I’ve only given this a few minutes of thought, so feel free to improve on or correct any of these ideas.
Excellent point. Most people aren’t trying and failing to achieve their dreams. We aren’t even trying. We don’t have well-articulated dreams, so trying isn’t even a reasonable course of action until we have a clear objective. I’d guess that most adults still don’t know what they want to be when they grow up, and still haven’t figured it out by the time they retire.
So, all arguments which do not make different predictions are extensionally equal, but are not necessarily intensionally equal. From the Wikipedia page:
Consider the two functions f and g mapping from and to natural numbers, defined as follows:
To find f(n), first add 5 to n, then multiply by 2.
To find g(n), first multiply n by 2, then add 10.
These functions are extensionally equal; given the same input, both functions always produce the same value. But the definitions of the functions are not equal, and in that intensional sense the functions are not the same.
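To make that distinction concrete, here’s a minimal Python sketch of the Wikipedia example (my own illustration, not something from the article):

```python
# f and g mirror the Wikipedia example: extensionally equal
# (same output for every input), intensionally distinct (different definitions).

def f(n):
    # First add 5 to n, then multiply by 2.
    return (n + 5) * 2

def g(n):
    # First multiply n by 2, then add 10.
    return n * 2 + 10

# Extensional equality: identical outputs on every input we check.
assert all(f(n) == g(n) for n in range(1000))

# Intensional difference: the two definitions compile to different bytecode,
# even though they denote the same mathematical function.
print(f.__code__.co_code == g.__code__.co_code)  # False
```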
That provided me with some perspective. I’d only been thinking of cases where we imposed limitations, such as those we use with alcohol and addictive drugs. But, as you point out, there are also regulations which push us toward immediate gratification, rather than away. If, after much deliberation, we collectively decide that 99% of potential values are long term, then perhaps we’d wind up abolishing most or all such regulations, assuming that most System 2 values would benefit.
However, at least some System 2 values are likely orthogonal to these sorts of motivators. For instance, perhaps NaNoWriMo participation would go down in a world with fewer social and economic safety nets, since many people would be struggling up Maslow’s Hierarchy of Needs instead of writing. I’m not sure how large of a fraction of System 2 values would be aided by negative reinforcement. There would be a large number of people who would abandon their long-term goals in order to remove the negative stimuli ASAP. If the shortest path to removing the stimuli gets them 90% of the way toward a goal, then I’d expect most people to achieve the remaining 10%. However, for goals that are orthogonal to pain and hunger, we might actually expect a lower rate of achievement.
If descriptive ethics research shows that System 2 preferences dominate, and if the majority of that weighted value is held back by safety nets, then it’ll be time to start cutting through red tape. If System 2 preferences dominate, and the majority of moral weight is supported by safety nets, then perhaps we need more cushions or even Basic Income. If our considered preference is actually to “live in the moment” (System 1 preferences dominate) then perhaps we should optimize for wireheading, or whatever that utopia would look like.
More likely, this is an overly simplified model, and there are other concerns that I’m not taking into account but which may dominate the calculation. I completely missed the libertarian perspective, after all.
Sounds like WoW is optimized for System 1 pleasures, and you explicitly reject this. I think that brings up an important point: How can we build a society/world where there are strong optimization forces to enable people to choose System 2 preferences? Once such a world iterated on itself for a couple generations, what might it look like?
I don’t think this would be a world with no WoW-like activities, because a world without any candy or simple pleasures strikes me as deeply lacking. My System 2 seems to place at least a little value on System 1 being happy. So I’d guess the world would just have many fewer of such activities, and be structured in such a way as to make it easy to avoid choices we’d regret the next day.
If this turns out to be a physically impossible problem to overcome for some reason, then I could imagine a world with no System 1 pleasures, but such a world would be deeply lacking, even if that loss was more than made up for by gains in our System 2 values.
As a side note, it’d be an interesting question how much of the theoretical per capita maximum value falls into which categories. An easier question is how much of our currently actualized value is immediate gratifications. I’d expect that to be heavily biased toward System 1, since we suffer from Akrasia, but it might still be informative.
I’ve recently started using RSS feeds. Does anyone have LW-related feeds they’d recommend? Or for that matter, anything they’d recommend following which doesn’t have an RSS feed?
Here’s my short list so far, in case anyone else is interested:
Less Wrong Discussion
Less Wrong Main (ie promoted)
Slate Star Codex
Center for the Study of Existential Risk
Future of Life Institute [they have an RSS button, but it appears to be broken. They just updated their webpage, so I’ll subscribe once there’s something to subscribe to.]
Global Priorities Project
80,000 Hours
SpaceX [an aerospace company, which Elon Musk refuses to take public until they’ve started a Mars colony]
These obviously have an xrisk focus, but feel free to share anything you think any Less-Wrongers might be interested in, even if it doesn’t sound like I would be.
For anyone looking to start using RSS, I’d recommend using the Bamboo Feed Reader extension in FireFox, and deleting all the default feeds. I started out using Sage as a feed aggregator, but didn’t like the sidebar style or the tiled reader.
Ah, thanks for the explanation. I interpreted the statement as you trying to demonstrate that number of nuclear winters / number of near misses = 1/100. You are actually asserting this instead, and using the statement to justify ignoring other categories of near misses, since the largest ratio will dominate. That’s a completely reasonable approach.
I really wish there were a good way to estimate the accidents-per-near-miss ratio. Maybe medical mistakes? They have drastic consequences if you mess up, but involve a lot of routine paperwork. But this assumes that the dominant factor in the ratio is the severity of the consequences. (Probably a reasonable assumption. Spikes on steering wheels make better drivers, and bumpers make less careful forklift operators.) I’ll look into this when I get a chance.
Excellent start and setup, but I diverge from your line of thought here:
We will use a lower estimate of 1 in 100 for the ratio of near-miss to real case, because the type of phenomena for which the level of near-miss is very high will dominate the probability landscape. (For example, if an epidemic is catastrophic in 1 to 1000 cases, and for nuclear disasters the ratio is 1 to 100, the near miss in the nuclear field will dominate).
I’m not sure I buy this. We have two types of near misses (biological and nuclear). Suppose we construct some probability distribution for near-misses, ramping up around 1/100 and ramping back down at 1/1000. That’s what we have to assume for any near-miss scenario, if we know nothing additional. I’ll grant that if we roll the dice enough times, the 1/100 cases will start to dominate, but we only have 2 categories of near misses. That doesn’t seem like enough to let us assume a 1/100 ratio of catastrophes to near misses.
Additionally, there does seem to be good reason to believe that the rate of near misses has gone down since the cold war ended. (Although if any happened, they’d likely still be classified.) That’s not to say that our current low rate is a good indicator, either. I would expect our probability of catastrophe to be dominated by the probability of WWIII or another cold war.
We had 2 world wars in the first 50 years of last century, before nuclear deterrence substantially lowered the probability of a third. If that’s a 10x reduction, then we can expect 0.4 a century instead of 4 a century. If there’s a 100x reduction, then we might expect 0.04 world wars a century. Multiply that by the probability of nuclear winter given WWIII to get the probability of disaster.
However, I suspect that another cold war is more likely. We spent ~44 of the past 70 years in a cold war. If that’s more or less standard, then on average we might expect to spend 63% of any given century in a cold war. This can give us a rough range of probabilities of armageddon:
1 near miss a year spent in cold war × 63 years spent in cold war per century × 1 nuclear winter per 100 near misses = 63% chance of nuclear winter per century
0.1 near misses a year spent in cold war × 63 years spent in cold war per century × 1 nuclear winter per 3000 near misses = 0.21% chance of nuclear winter per century
For the record, this range corresponds to a projected half-life between roughly 1 century and ~100 centuries. That’s much broader than your 50-100 year prediction. I’m not even sure where to start to guesstimate the risk of an engineered pandemic.
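For what it’s worth, here’s a rough sketch of that back-of-envelope arithmetic (my own restatement of the two bounds, not anyone’s published model; the half-life figures assume a constant per-century risk and should only be read as order-of-magnitude):

```python
# Rough restatement of the bounds above. "Half-life" here assumes a
# constant, independent chance of nuclear winter each century.
import math

def p_per_century(near_misses_per_cold_war_year,
                  cold_war_years_per_century,
                  winters_per_near_miss):
    return (near_misses_per_cold_war_year
            * cold_war_years_per_century
            * winters_per_near_miss)

def half_life_centuries(p):
    return math.log(0.5) / math.log(1.0 - p)

high = p_per_century(1.0, 63, 1 / 100)    # 0.63   -> 63% per century
low  = p_per_century(0.1, 63, 1 / 3000)   # 0.0021 -> 0.21% per century

print(f"{high:.2%} per century, half-life ~{half_life_centuries(high):.1f} centuries")
print(f"{low:.2%} per century, half-life ~{half_life_centuries(low):.0f} centuries")
# Roughly: 63% per century -> ~0.7 centuries; 0.21% per century -> a few hundred centuries.
```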
The problem here of course is how selective to be about rules to let into this protected level
Couldn’t this be determined experimentally? Ignore the last hundred years or so, or however much might influence our conclusion based on modern politics. Find a list of the people who had a large counterfactual impact on history. Which rules lead to desirable results?
For example, the trial of Socrates made him a martyr, significantly advancing his ideas. That’s a couple points for “die for the principle of the matter” as an ethical injunction. After Alexander the Great died, anti-Macedonian sentiment in Athens caused Aristotle to flee, saying “I will not allow the Athenians to sin twice against philosophy”. Given this, perhaps Socrates’s sacrifice didn’t achieve as much as one might think, and we should update a bit in the opposite direction. Then again, Aristotle died a year later, having accomplished nothing noteworthy in that time.
All the happiness that the warm thought of an afterlife ever produced in humanity, has now been more than cancelled by the failure of humanity to institute systematic cryonic preservations after liquid nitrogen became cheap to manufacture. And I don’t think that anyone ever had that sort of failure in mind as a possible blowup, when they said, “But we need religious beliefs to cushion the fear of death.” That’s what black swan bets are all about—the unexpected blowup.
That’s a fantastic quote.
Today, October 27th, is the 53rd anniversary of the day Vasili Arkhipov saved the world. I realize Petrov Day was only a month ago, and there was a post then. Although I appreciate our Petrov ceremony, I personally think Arkhipov had a larger counterfactual impact than Petrov, (since nukes might not have been launched even if Petrov hadn’t been on shift at the time) and so I’d like to remember Vasili Arkhipov as well.
A world without complex novelty would be lacking. But so would a world without some simple pleasures. There are people who really do enjoy woodworking. I can’t picture a utopia where no one ever whittles. And a few of them will fancy it enough to get really, really good at it, for pretty much the same reason that there are a handful of devoted enthusiasts today. Even without Olympic competitions and marathons, I’d bet there would still be plenty of runners, who did so purely for its own sake, rather than to get better or to compete, or for novelty. Given an infinite amount of time, everyone is likely to spend a great deal of time on such non-novel things. So, what’s most disturbing about carving 162,329 table legs is that he altered his utility function to want to do it.
Perhaps I’m missing something, but it seems to me that any mind capable of designing a Turing-complete computer can, in principle, understand any class of problem. I say “class of problem”, because I doubt we can even wrap our brains around a 10x10x10x10 Rubik’s Cube. But we are aware of simpler puzzles of that class. (And honestly, I’m just using an operational definition of “classes of problem”, and haven’t fleshed out the notion.) There will always be harder logic puzzles, riddles, and games. But I’m not sure there exist entirely new classes of problems, waiting to be discovered. So we may well start running out of novelty of that type after a couple million years, or even just a couple thousand years.