I think there are misalignment demonstrations and capability demonstrations.
Misalignment skeptics believe that “once you become truly superintelligent you will reflect on humans and life and everything and realize you should be kind.”
Capability skeptics believe “AGI and ASI are never going to come for another 1000 years.”
Takeover skeptics believe “AGI will come, but humans will keep it under control because it’s impossible to escape your captors and take over the world even if you’re infinitely smart,” or “AGI will get smarter gradually and will remain controlled.”
Misalignment demonstrations can only convince misalignment skeptics, and not even all of them: some will insist that the misaligned AI simply wasn't intelligent enough to realize its errors and become good. Misalignment demonstrations that deliberately instruct the AI to be misaligned (e.g. ChaosGPT) also won't convince some people, and I really dislike these demonstrations. The Chernobyl disaster was caused by a stress test meant to confirm the reactor was safe. Aeroflot Flight 6502 crashed while the pilot was demonstrating to the first officer how to land an airplane with zero visibility. People have died while demonstrating gun safety.
Capability demonstrations can only convince capability skeptics. I actually think a lot of people changed their minds after ChatGPT: capability skeptics do get spooked by capability demonstrations and do start to worry more.
Sadly, I don’t think we can do takeover demonstrations to convince takeover skeptics.