Max Tegmark’s new Time article on how we’re in a Don’t Look Up scenario [Linkpost]

Link post

https://time.com/6273743/thinking-that-could-doom-us-with-ai/

Max Tegmark has posted a Time article on AI Safety and how we’re in a “Don’t Look Up” scenario.

Like Yudkowsky, Max went on the Lex Fridman podcast and has now published a Time article on AI Safety. (I propose we get more people into this pipeline.)

Max, however, presents a view that is more palatable by mainstream standards. Combined with the Don’t Look Up framing, this makes it one of my favourite pieces to send to people new to AI risk, as it covers everything the average person needs to know quite well. (An asteroid with a 10% chance of killing humanity is bad.)

In terms of general memetics, it will be a lot harder for someone like LeCun to draw a clever equivalence between asteroid safety and airplane safety under this framing. (Which might be a shame, since it’s one of the dumber counter-arguments I’ve heard.)

But who knows? He might just claim that scientists know, and have always known, how to deflect the asteroid with nuclear bombs or something.

My point in all of the above is that Max frames the problem very well, and given the respect he earned in his earlier career as a physicist, I think his articles deserve more use in public discussions. I also quite enjoyed how he described alignment on the Lex Fridman podcast; even though I don’t agree with everything he says, it’s good enough.
