I regularly find myself in situations where I want to convince people that AI safety is important, but I have very little time before they lose interest. If you had one minute to convince someone with little or no prior knowledge, how would you do it? (I have considered printing Eliezer’s tweet about nuclear.)
A survey conducted in the summer of 2022 contacted approximately 4271 researchers who had published at the conferences NeurIPS or ICML in 2021, and received 738 responses, some partial, for a 17% response rate. When asked about the long-run impact of high-level machine intelligence, 48% of respondents gave at least a 10% chance of an extremely bad outcome (e.g. human extinction).
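A quick sanity check on the stated response rate, assuming the reported counts are exact:

$$\frac{738}{4271} \approx 0.173,$$

which rounds to the quoted 17%.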
Slightly better, perhaps, to quote the survey’s wording directly; I believe the answer option was Outcome: “Extremely bad (e.g. human extinction)”.
Might be good to follow up with something like this: What we’re doing here (planned-obsolescence.org)
For an extremely brief summary of the problem, I like this from Zvi:
https://thezvi.wordpress.com/2023/03/28/response-to-tyler-cowens-existential-risk-ai-and-the-inevitable-turn-in-human-history/

The core reason we so often have seen creation of things we value win over destruction is, once again, that most of the optimization pressure by strong intelligences was pointing in that direction, that it was coming from humans, and the tools weren’t applying intelligence or optimization pressure. That’s about to change.