I have given David an abstract, which goes as follows:
The Singularity Institute for Artificial Intelligence has, in conjunction with Oxford University’s Future of Humanity Institute, pioneered the application of debiasing to predicting the future and making policy suggestions for technology-related issues.
The mental skills and traits that everyday folk live their lives with are very different from the skills required to accurately predict the future of technology and human civilization. Most importantly, predicting complex future scenarios requires debiasing: recognizing that our brains have built-in weaknesses that prevent us from forming accurate beliefs about the world. Thirty years’ worth of work on human cognitive biases has been examined and explained on the Overcoming Bias and Less Wrong blogs. Academics from SIAI and FHI have done a significant amount of original work applying our knowledge of human cognitive biases to the issues that futurists and visionaries have traditionally thought about. In particular, Eliezer Yudkowsky and Marcello Herreshoff, Singularity Institute researchers, have outlined the risks that smarter-than-human AI systems pose and have proposed a research paradigm to counter those risks. SIAI researchers Anna Salamon and Steve Rayhawk have developed computer models, including The Uncertain Future (a web application, www.theuncertainfuture.com), that combine various opinions and beliefs we hold about the future. Often, simply taking existing beliefs and showing that they are probabilistically inconsistent can generate insight; the inability to check one’s beliefs for global consistency is a formidable human cognitive weakness where predicting the future is concerned.
I think it’s Herreshoff.
Thanks.
Thanks, I like this a lot.