I’m not sure what’s going on, but the presentation can be viewed here: https://files.catbox.moe/qdwops.mp4
As some people here have said, it’s not a great presentation. The message is important, though.
For reasons I can’t remember (random Amazon recommendation?) I read Life 3.0 five years ago, and I’ve been listening to podcasts about AI alignment ever since. I work with education at a national level, and this January I wrote and published a book, “AI in Education”, to help teachers use ChatGPT in a sensible way – and, in a chapter of its own, to make more people aware of the risks of AI. I’ve been giving a lot of talks about AI and education since then, and I end each presentation with some words about AI risks. I am sure that most people reading the book or listening to my talks don’t care about existential risk from AI, or simply don’t get it. But I’m also sure that some people come away more aware.
I believe that reaching out to specific groups of people (such as teachers) is a good complement to creating a public debate about (existential) AI risks. It is fairly easy to get people interested in a book or a talk about using ChatGPT in their profession, and adding a section on AI risks is a good way of reaching the fraction of the audience who can grasp and work with the risks.
All this being said, I also want to say that I am not convinced that AGI is imminent, or that AGI necessarily means apocalypse. But I am sufficiently convinced that AI poses real and serious risks – already today – that these risks must get much more attention than they currently do. I also believe that this attitude has a better chance of finding an audience than a doomsayer attitude, but I think it’s a bad idea to put on a particular attitude just to get a message across – it’s better for everyone to be sincere (perhaps combined with selecting public voices partly based on how well they can get the message across).
Concerning this TED Talk: It seems to me that Eliezer has travelled too far down his own track to be able to get through to those who are new to existential AI risk. He is (obviously) quite worried, on the verge of despairing. The way we (as a society) are dealing with AI risk must seem like a bad comedy to him – but a comedy where everyone dies. For real. It is difficult to get your message out when you feel like that. I think the strongest part of the presentation was when he sounded sincerely angry.
(Side note: I think the interview with Eliezer that Lex Fridman made was good, so I don’t agree that Eliezer isn’t a good public communicator.)
I’ve been thinking about limitations and problems with CIRL. Thanks for this post!
I haven’t done the math, but I’d like to explore a scenario where an AI that learns from kids infers that eating sweets and playing video games is better than eating a proper meal and doing your homework (or whatever). This could of course be mitigated by learning preferences from parents, whose behaviour could be given a stronger weight in how the AI picks up preferences. But there is a strong parallel to how humanity treats this planet of ours: wouldn’t an AI infer that we actually want to raise the global temperature, drive a lot of species extinct, and generally be fairly short-sighted?
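To make the worry a bit more concrete, here is a minimal toy sketch in Python. It is not the actual CIRL formulation (which models the human as at least approximately rational with respect to a shared reward); it is a deliberately naive “revealed preference” learner that just counts observed choices. The option names and frequencies are entirely made up for illustration.

```python
# Toy illustration (NOT real CIRL): a naive learner that estimates how much an
# agent values each option purely from how often the agent is seen choosing it.
# All option names and observation counts below are hypothetical.
from collections import Counter

def infer_preferences(observed_choices):
    """Return a normalized preference weight per option, based only on
    observed choice frequencies."""
    counts = Counter(observed_choices)
    total = sum(counts.values())
    return {option: n / total for option, n in counts.items()}

# Hypothetical observations of a child's behaviour over a week.
kid_observations = (
    ["eat_sweets"] * 9
    + ["play_video_games"] * 12
    + ["eat_proper_meal"] * 5
    + ["do_homework"] * 2
)

preferences = infer_preferences(kid_observations)
for option, weight in sorted(preferences.items(), key=lambda kv: -kv[1]):
    print(f"{option}: {weight:.2f}")

# The learner ranks video games and sweets above proper meals and homework --
# a faithful summary of observed behaviour, but not of what the child (or the
# parents) would endorse on reflection.
```

The same failure mode seems to scale up: feed such a learner humanity’s aggregate behaviour towards the climate and biodiversity, and it would conclude that we “prefer” the short-sighted outcomes we actually produce, unless the inference explicitly separates what people do from what they would endorse.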