Many times I have heard people describe ideas they thought up as 'super infohazardous' and likely to 'substantially advance capabilities'. Later, once made privy to the idea, I realized that they had in fact reinvented something that had been publicly available in the ML literature for several years, with very mixed evidence for its success – which is precisely why it was not widely used, and hence unknown to the person coming up with it.
I’d be very interested if anyone has specific examples of ideas like this they could share (that are by now widely known or obviously not hazardous). I’m sympathetic to the sorts of things the article says, but I don’t actually have any picture of the class of ideas it’s talking about.
I'm not "on the inside", but my understanding is that some people at Conjecture came up with Chain-of-Thought prompting and decided it was infohazardous, fairly shortly before preprints describing it appeared in the open AI literature. That idea does work well, but it was of course obvious to any schoolteacher.