Oh, excellent, thanks so much! Side note: I really look forward to making some of the London meetups when work pressure subsides a little; they sound excellent.
Would you (or anyone else) have good suggestions for index funds for those living and earning in the UK/Europe? Thanks!
Thank you! We appear to have been successful with our first foundation grant; however, the official award T&C letter comes next week, so we’ll know then what we can do with it, and be able to say something more definitive. We’re currently putting the final touches on our next grant application (requesting considerably more funds).
I think the sentence in question refers to a meeting on existential/extreme technological risk we will be holding in Berlin, in collaboration with the German Government, on the 19th of September. We hope to use this as an opportunity to forge some collaborations in relevant areas of risk with European research networks and, with a bit of luck, to put existential risk mitigation a little higher on the European policy agenda. We’ll be releasing a joint press release with the German Foreign Office as soon as we’ve got this grant out of the way!
Almost certainly; unfortunately, that communication didn’t involve me, so I don’t know which one it is! But I’ll ask him when I next see him and send you a link. http://www.econ.cam.ac.uk/people/crsid.html?crsid=pd10000&group=emeritus
“A journalist doesn’t have any interest not to engage in sensationalism.”
Yes. Lazy shorthand in my last LW post, apologies. I should have said something along the lines of “in order to clarify our concerns, and not give the journalist the impression that we honestly thought these things all represented imminent doom, which might result in sensationalist coverage”, i.e. sensationalism resulting from misunderstanding. If the journalist chooses deliberately to engage in sensationalism, that’s a slightly different thing, and yes, it sells newspapers.
“Editors want to write articles that the average person understands. It’s their job to simplify. That still has a good chance of leaving the readers more informed than they were before reading the article.”
Yes. I merely get concerned when “scientists think we need to learn more about this, and recommend use of the precautionary principle before engaging” gets simplified to “scientists say ‘don’t do this’”, as in that case it’s not clear to me that readers come away with a better understanding of the issue. There’s a lot of misunderstanding of science due to simplified reporting. Anders Sandberg and Avi Roy have a good article on this in health (as do others): http://theconversation.com/the-seven-deadly-sins-of-health-and-science-reporting-21130
“It’s not the kind of article that I would sent people who have an background and who approach you. On the other hand it’s quite fine for the average person.”
Thanks, helpful.
Thanks, reassuring. I’ve mainly been concerned about a) just how silly the paperclip thing looks in the context in which it’s been put, and b) the tone, a bit, as one commenter on the article put it:
“I find the light tone of this piece—“Ha ha, those professors!” to be said with an amused shake of the head—most offensive. Mock all you like, but some of these dangers are real. I’m sure you’ll be the first to squeal for the scientists to do something if one them came true. Price asks whether I have heard of the philosophical conundrum the Prisoner’s Dilemma. I have not. Words fail me. Just what do you know then son? Once again, the Guardian sends a boy to do a man’s job.”
Thanks. Re: your last line, quite a bit of this is possible: we’ve been building up a list of “safe hands” journalists at FHI for the last couple of years, and as a result, our publicity has improved while the variance in quality has decreased.
In this instance, we (CSER) were positively disposed towards the newspaper as a fairly progressive one with which some of our people had had a good set of previous interactions. I was further encouraged by the journalist’s request for background reading material. I think there was just a bit of a mismatch: they sent a guy who was anti-technology in a “social media is destroying good society values” sort of way to talk to people who are concerned about catastrophic risks from technology (I can see how this might have made sense to an editor).
Hi,
I’d be interested in LW’s thoughts on this. I was quite involved in the piece, though I suggested to the journalist it would be more appropriate to focus on the high-profile names involved. We’ve been lucky at FHI/Cambridge with a series of very sophisticated, tech-savvy journalists with whom the inferential distance has been very low (see e.g. Ross Andersen’s Aeon/Atlantic pieces); this wasn’t the case here, and although the journalist was conscientious and requested reading material beforehand, I found communicating these concepts more difficult than expected.
In my view the interview material turned out better than expected, given the clear inferential gap. I am less happy with the ‘catastrophic scenarios’ I was asked for. The text I sent (which I circulated to FHI/CSER members) was distinctly less sensational and contained a lot more qualifiers. E.g. for geoengineering I had: “Scientific consensus is against adopting it without in-depth study and broader societal involvement in the decisions made, but there may be very strong pressure to adopt once the impacts of climate change become more severe.” And my pathogen modification example did not go nearly as far. While qualifiers can seem like unnecessary padding to editors, they can really change the tone of a piece. Similarly, in a pre-emptive line to ward off sensationalism, I included: “I hope you can make it clear these are ‘worst case possibilities that currently appear worthy of study’ rather than ‘high-likelihood events’. Each of these may only have e.g. a 1% likelihood of occurring. But in the same way an aeroplane passenger shouldn’t accept a 1% possibility of a crash, society should not accept a 1% possibility of catastrophe. I see our role as (like airline safety analysts) figuring out which risks are plausible, and for those, working to reduce the 1% to 0.00001%.” This was sort-of addressed, but not really.
That said, the basic premises—that a virus could be modified for greater infectivity and released by a malicious actor, ‘termination risk’ for atmospheric aerosol geoengineering, future capabilities of additive manufacturing for more dangerous weapons—are intact.
Re: ‘paperclip maximiser’. I mentioned this briefly in conversation, after we’d struggled for a while with inferential gaps on AI (and why we couldn’t just outsmart something smarter than us, etc.), presenting it as a ‘toy example’ used in research papers on AI goals, meant to encapsulate the idea that seemingly harmless or trivial but poorly thought-through goals can result in unforeseen and catastrophic consequences when paired with the kind of advanced resource utilisation and problem-solving ability a future AI might have. I didn’t expect it to be taken as a literal doomsday concern (it wasn’t in the text I sent), and to my mind it looks very silly in there, possibly deliberately so. However, I feel that Huw and Jaan’s explanations were very good, and quite well presented.
We’ve been considering whether we should limit ourselves to media opportunities where we can write the material ourselves, or have the opportunity to view and edit the final material before publication. MIRI has significantly cut back on its media engagement, and this seems on the whole sensible (FHI is still doing a lot; some of it turns out very good, some not so good).
Lessons to take away: 1) This stuff can be really, really hard. 2) Getting used to very sophisticated, science/tech-savvy journalists and academics can leave you unprepared. 3) Things that are very reasonable with qualifiers can become very unreasonable if you remove the qualifiers, and editors often just see the qualifiers as unnecessary verbosity (or want the piece to have stronger, more sensational claims).
Right now, I’m leaning fairly strongly towards ‘ignore and let quietly slip away’ (the Guardian has a small UK readership, so how much we ‘push’ this will probably make a difference), but I’d be interested in whether LW sees this as net positive or net negative on balance for public understanding of existential risk. However, I’m open to updating. I asked a couple of friends unfamiliar with the area what their take-away impression was, and it was more positive than I’d anticipated.
Without knowing the content of your talk (or having time to Skype at present, apologies), allow me to offer a few quick points I would expect a reasonably well-informed, skeptical audience member to make (based in part on what I’ve encountered):
1) Intelligence explosion requires AI to get to a certain point of development before it can really take off (let’s set aside that there’s still a lot we need to figure out about where that point is, or whether there are multiple different versions of that point). People have been predicting that we can reach that stage of AI development “soon” since the Dartmouth conference. Why should we worry about this being on the horizon (rather than a thousand years away) now?
2) There’s such a range of views on this topic among apparent experts in AI and computer science that an analyst might conclude “there is no credible expertise on the path/timeline to superintelligent AI”. Why should we take MIRI/FHI’s arguments seriously?
3) Why are mathematicians/logicians/philosophers/interdisciplinary researchers the community we should be taking most seriously when it comes to these concerns? Shouldn’t we be talking to/hearing from the cutting-edge AI “builders”?
4) (Related.) MIRI (and also FHI, though not to such a ‘primary’ extent) focuses on developing theoretical safety designs, and friendly-AI/safety-relevant theorem proving and maths work, ahead of any efforts to actually “build” AI. Would we not be better off being more grounded in the practical development of the technology: building, stopping, testing, trying, adapting as we see what works and what doesn’t, rather than trying to lay down such far-reaching principles ahead of the technology’s development?
Speaking as someone who speaks about X-risk reasonably regularly: I have empathy for the OP’s desire for no surprises. IMO there are many circumstances in which surprises are very valuable: one-on-one discussions, closed seminars and workshops where a productive, rational exchange of ideas can occur, boards like LW where people are encouraged to interact in a rational and constructive way.
Public talks are not necessarily the best places for surprises, however. Unless you’re an extremely skilled orator, the combination of nerves, time limitations, crowd dynamics, and other circumstances can make it quite difficult to engage in an ideal manner. Crowd perception of how you “handle” a point, particularly a criticism, can do a huge amount to shape how the overall merit of you, your talk, and your topic is perceived, even if the criticism is invalid or your response adequate. My experience is also that the factors above can push us into less nuanced, more “strong”-seeming positions than we would ideally take. In a worst-case scenario, a poor presentation/defence of an important idea can affect perception of the idea itself outside the context of the talk (if the talk is widely enough disseminated).
These are all reasons why I think it’s an excellent idea to consider the best and strongest possible objections to your argument, and to think through what an ideal and rational response would be, or, indeed, whether the objection is correct, in which case it should be addressed in the talk. This may be the OP’s only opportunity to expose his audience to these ideas.
Thank you for this post, extremely helpful and I’m very grateful for the time you put into writing/researching it.
A question: what’s your opinion on when “level of exercise” goes from “diminishing returns” to “negative returns” for health and longevity? Background: I used to train competitively for running, twice a day for ~2 hrs total/day, ~15 hrs/week total (a little extra at the weekend), which sounds outlandish but is pretty standard in competitive long-distance running/cycling/triathlon. I quit because a) it wasn’t compatible with doing my best in work, and b) I began to worry that pushing my body this hard was not actually good for long-term health (for reasons like inflammation load, heart effects, etc.).
These days I usually train 1 hr/day, 6 days a week, split about 50:50 between running and lifting/strength, and still pretty intensely (partly because I’m otherwise prone to weight gain unless I control my diet carefully, which I prefer not to have to worry about, and partly because it’s a good antidote to a tendency towards anxiety/depression). I expect, realistically, I’m a low-level exercise addict, and I certainly have some obsessive tendencies. Two questions I’m interested in are: a) Am I still in “potentially not doing myself long-term favours” territory, i.e. would cutting from 360 min/week to 300 min/week actually be better for my health? b) Even if a) isn’t true, are the benefits so diminished that I should cut to e.g. 5x/week at 45 mins/day (225 mins) for pure efficiency-of-time reasons (am I throwing away 2+ hrs a week of valuable time)? My schedule involves either running home from work and taking in a gym trip, or training when I need a break from work, so it’s reasonably efficient, but these days every hour I can squeeze out of a week seems to count. I also walk 30 mins to work every morning, not included in the above. Other than this my lifestyle’s quite sedentary (no active hobbies at present; I spend most waking hours at a computer/in meetings).
Some emerging concerns I’m aware of for really serious runners: heart problems due to thickened heart wall, skin cancer (just due to being out in the sun so much, sweating off sunscreen). Potential causes for concern: lots of cortisol production from hard aerobic exercise, inflammation.
Fascinating, thank you for this!
For a lot of people, running should be fine for their knees if done properly.
As far as I can tell, running is most likely to damage your knees if you (a) are very big/heavy, (b) have poor running technique (most people don’t learn to run properly/efficiently), (c) run a lot on bad surfaces (avoid running extensively on surfaces that are banked, or where you may step in potholes!), or (d) have a genetic predisposition to knee problems or have brought on osteoarthritis-type conditions through poor diet (this happens sometimes with exercise anorexics).
As a past competitive runner, I’ve spent a lot of time with running “lifers” (>10,000 miles on the legs), and knee problems don’t seem to be particularly common (though obviously there are some selection effects there). Anecdotally, I have no knee problems after 6 years of 100 miles/week training, and most of my sports friends who do have knee problems got them as a result of acute injuries (usually soccer).
That said, there’s enough weak evidence that this kind of heavy aerobic training may not be good for long-term health and longevity to have caused me to reduce my running to 20-30 mins/day (supplemented by weight training).
I haven’t researched this, but I was told by a neuroscientist/martial arts-practicing friend that particular sports (like martial arts) where you have to practice a very wide and varied range of technical motions and thus challenge/develop neuromuscular systems widely may be particularly good for general brain health and plasticity (in the same way that varying your routine widely, etc, apparently is). Seems plausible, but I repeat that I haven’t researched it.
I’ve gradually inched up the % cocoa in the chocolate I eat. Now 85% dark is my go-to for chocolate cravings, dessert/sweet snack cravings etc., and I find sweeter desserts less appealing. I go through A LOT of 85% dark now, though, so if anyone knows of associated health concerns, you should probably let me know!
In my experience, in communicating on these matters to the public or generalists, it’s definitely good to highlight benefits as well as risks—and that style of onion strategy sounds about right and is roughly the type of approach I take (unless in public/general discussion I’m e.g. specifically asked to comment on a particular risk concern).
In speaking to the public and policymakers here (outer layer 1 to layer 1.5, if you will), I’ve found a “responsible innovation” type framing to be effective. I’m pro-progress, the world has a lot of problems that advances in technology will need to play a role in solving, and some of the benefits of synthetic biology, artificial intelligence etc. will be wonderful. However, we can make progress most confidently if the scientific community (along with other key players) devotes some resources towards identifying, evaluating, and if necessary taking proactive steps to prevent the occurrence of extreme negative scenarios. In such presentations/discussions, I present CSER and FHI as aiming to lead and coordinate such work. I sometimes make the analogy to an insurance policy: we hope that the risks we work on will never come to pass, but if a risk is plausible and the impact would be big enough, then we can only progress with confidence if we take steps ahead of time to protect our interests. This seems to be effective, particularly with UK policymakers and industry folk: I find risk concerns are received better if I signal that I’m pro-progress, not irrationally risk-averse or fear-mongering, and can hint at a reasonably sophisticated understanding of what these technologies entail and what benefits they can be expected to bring.
I would add a small caution on “astronomical stakes”. It works very well in some rhetoric-friendly public speaking/writing settings (and I’ve used it), but for certain individuals and audiences it can produce a bit of a knee-jerk negative reaction as being a grandiose, slightly self-important perspective (perhaps this applies more in Europe than in the US though, where the level of public rhetoric is a notch or two lower).
I’m happy to send a copy to anyone who messages me. Stuart, my understanding of the legalese is that we could put up the preprint/final draft pdf—would MIRI be willing to host it?
We’ve also redesigned and relaunched our website, with more information on our areas of interest and planned research: http://cser.org/
I agree that this would be a good idea, and agree with the points below. Some discussion of this took place in this thread last Christmas: http://lesswrong.com/r/discussion/lw/je9/donating_to_miri_vs_fhi_vs_cea_vs_cfar/
On that thread I provided information about FHI’s room for more funding (accurate as of start of 2014) plus the rationale for FHI’s other, less Xrisk/Future of Humanity-specific projects (externally funded). I’d be happy to do the same at the end of this year, but instead representing CSER’s financial situation and room for more funding.