Peter Norvig is at least in principle aware of some of the issues; see e.g. this article about the current edition of Norvig & Russell’s AIMA (which mentions a few distinct ways in which AI could have very bad consequences and cites Yudkowsky and Omohundro).
I don’t know what Google’s attitude is to these things, but if it’s bad then either they aren’t listening to Peter Norvig or they have what they think are strong counterarguments, and in either case an outsider having a polite word is unlikely to make a big difference.
Peter Norvig was a resident at Hacker School while I was there, and we had a brief discussion about existential risks from AI. He basically told me he predicts AI won’t surpass human intelligence by such a margin that we’d be unable to coerce it into not ruining everything. That was pretty surprising, if it is what he actually believes.
My guess is that most people at Google who are working on AI take those risks somewhat seriously (i.e. less seriously than MIRI does, but they still acknowledge them), yet think that the best way to mitigate risks associated with AGI is to research AGI itself, because the problems are intertwined.