Scary Mark Zuckerberg interview on AI risks where the Facebook founder says:
“I think that along the way, we will also figure out how to make it safe. The dialogue today kind of reminds me of someone in the 1800s sitting around and saying: one day we might have planes and they may crash. Nonetheless, people developed planes first and then took care of flight safety. If people were focused on safety first, no one would ever have built a plane.”
Yes, but if the crash of a single airplane would cause the extermination of mankind, we would all be dead. A better analogy is scientists in 1940 considering whether detonating an atomic bomb would ignite the atmosphere.
I wonder if Zuckerberg is familiar with the concept of “hard takeoff”. I’ve been under the impression that the concept has become mainstream, but I’ve been in the OB/LW sphere for my entire adult life, and I have no idea how big the inferential distance has gotten.
Yeah, I don’t understand why safety should equal ‘stop working on the thing’. If anything, AI friendliness will further the advancement of AI, allowing more widespread use.
“Yeah, I don’t understand why safety should equal ‘stop working on the thing’.”
There is a good chance that if the first super-intelligent AI isn’t carefully designed to be friendly, it will destroy us. But creating a friendly super-intelligent AI is much harder than merely creating an AI, so our species’ only chance of survival is to go very slowly with AI development until we have put far more resources into researching friendliness. Imagine that it was 1850 and you knew that the crash of a single airplane would destroy mankind, but you couldn’t convince others of this. You would be scared if people started to work on creating airplanes.
I get that, but I think that “working to make a plane a lot safer” would still tick the box of “working on a plane project”. I would say this is even what happens in reality; otherwise you could just strap a jet engine under a bus. I am all in favor of slowing down AI work to focus better on safety, and I would push back on Zuckerberg by telling him: “you know Mark, even if we are focusing on AI safety, that doesn’t mean we are slowing down progress on AI; if anything, we are accelerating it.”
I worry that a lot of discussion about AI is conducted via metaphor or based on past events. It’s easy to make up a metaphor that matches any given future scenario, and it shouldn’t be assumed that building an artificial brain is (or isn’t!) anything like past events.
I agree that using metaphors to predict the future is problematic, but predicting the future is really hard, and if we don’t have a good inside view of what’s likely to happen, the best we can do is extrapolate from what has happened in the past.