Yeah, it’d be nice if researchers consulted FHI about any dangerous AI ideas. But I don’t really understand how that would work. Let’s say I’m a researcher at some university, I have a dangerous AI idea and contact FHI about it. What happens next?
Honestly, I’m not sure. Maybe you still publish but are advised to withhold certain details? Maybe you keep it a secret? Maybe you use a woodchipper to destroy all evidence of the idea? I think much of the value would come from two things: having a visible and active norm of taking ideas you think are dangerous seriously, and having someone to discuss those ideas with, so you don’t end up with researchers like the ones I met, who feel they’re alone and have to grapple with thinking through the consequences of their research in isolation.