IDK where else to say this, so I’ll say it here. I find many LW articles hard to follow because they use terms I don’t know. I assume everyone else knows them, but I’m a newbie. Ergo I request a kindness: if your article uses a term that’s not in common English usage (GPT-3, alignment, etc.), define it the first time you use it.
Not sure if this is a good idea, but some of those links could be added automatically—then we do not need to worry about authors forgetting.
All we need is to maintain a list of [keyword, canonical link] pairs and add a hyperlink to the first occurrence of each keyword (or any of its synonyms) in the article. Perhaps these automatic links should look different from the manually added ones (I imagine a small question mark symbol after the keyword), and perhaps registered users should have an option to turn this off.
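As a rough illustration of what that could look like, here is a minimal sketch in Python. The glossary entries and the LW tag URLs are made up for the example; a real implementation would also need to avoid matching inside existing links and code blocks.

```python
import re

# Hypothetical glossary: a tuple of synonyms -> canonical link.
# The URLs here are illustrative, not an actual site API.
GLOSSARY = {
    ("GPT", "ChatGPT"): "https://www.lesswrong.com/tag/gpt",
    ("inner alignment",): "https://www.lesswrong.com/tag/inner-alignment",
}

def add_glossary_links(text: str) -> str:
    """Hyperlink the first occurrence of each keyword (or any synonym)."""
    for synonyms, url in GLOSSARY.items():
        # One pattern that matches any of the synonyms as a whole word.
        pattern = r"\b(" + "|".join(map(re.escape, synonyms)) + r")\b"
        # count=1 replaces only the first occurrence; the trailing
        # superscript "?" visually marks the link as automatic.
        text = re.sub(
            pattern,
            lambda m: f'<a href="{url}">{m.group(1)}</a><sup>?</sup>',
            text,
            count=1,
        )
    return text
```

So `add_glossary_links("ChatGPT is based on GPT.")` would link only the first mention (“ChatGPT”) and leave the later “GPT” untouched.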
In the meantime, you can find most of those things in the list of tags.
Looking at your examples, “GPT” (and “ChatGPT”) is there. “Alignment” is apparently considered too general—it essentially means “making the AI want exactly what you want” (as opposed to making an AI that wants to do some random thing, and maybe obeys you while it is weak, but then starts doing its own thing when it becomes strong)—so we only have pages on “inner/outer alignment”, “deceptive alignment”, and the less frequently used “internal (human) alignment”, “chain of thought alignment”, and “general alignment properties”.