My opinion is that you’re not going to be able to crack the alignment problem if you have a phobia of infohazards. Essentially you need a ‘Scout Mindset’. There are already smart people working hard on the problem, including in public such as on podcasts, so realistically the best (or worst) we could do on this forum is try to parse out what is publicly known about the scary stuff (e.g. agency) from DeepMind’s papers and then figure out whether there is a path forward towards alignment.
Yeah, I tend to agree. Just wanted to make sure I’m not violating norms. In that case, my specific thoughts are as follows, ending with a thought on implementing AI transparency.
There is the observation that the transformer architecture doesn’t have a hidden state like an LSTM. I thought for a while that something like this was needed for intelligence: a compact representation of the state one is in. (My biased view, which I’ve since updated away from, was that the weights represented HOW to think, and less about knowledge.) However, backpropagating through that many time steps is really intractable, and transformers have shown us that you don’t actually need to. The long-term memory is just in the weights.
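To make that contrast concrete, here’s a minimal PyTorch sketch (the sizes and layer choices are arbitrary, purely for illustration): the LSTM hands an explicit hidden state forward step by step, while the transformer layer has no recurrent state at all; whatever it “remembers” long-term lives in the weights.

```python
import torch
import torch.nn as nn

vocab, d_model = 1000, 64
tokens = torch.randint(0, vocab, (1, 16))      # (batch=1, seq_len=16)
emb = nn.Embedding(vocab, d_model)

# LSTM: (h, c) is a compact, explicit summary of everything seen so far,
# carried forward one step at a time.
lstm = nn.LSTM(d_model, d_model, batch_first=True)
out, (h, c) = lstm(emb(tokens))                # h, c: (1, batch, d_model)

# Transformer layer: no recurrent state; each position just attends over
# the whole context window directly. The persistent memory is the weights.
layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
out = layer(emb(tokens))                       # (batch, seq_len, d_model)
```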
So, one obvious thing is that you could simply have the language model continue to update on its dialogues, including its own responses and those of its conversation partner(s). Over time, it could in theory learn from its own conversations and incorporate them into its long-term memory, perhaps learning that this type of dialogue involving ‘itself’ is special. (It could even use special tokens or something if need be.)
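Here’s a rough sketch of what that could look like, assuming a HuggingFace-style causal LM; the model name, the `<|self|>` / `<|other|>` markers, and the single naive gradient step are all placeholder assumptions, not a claim about how continual learning should actually be done.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"                  # stand-in for whatever model is actually deployed
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def update_on_dialogue(self_turns, partner_turns):
    """Take one gradient step on a finished conversation, tagging which turns
    were the model's own (the 'special token' idea above). In a real setup
    you'd register <|self|>/<|other|> with tokenizer.add_special_tokens and
    resize the embeddings; here they're just plain text markers."""
    text = ""
    for own, other in zip(self_turns, partner_turns):
        text += "<|self|> " + own + "\n<|other|> " + other + "\n"
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    # Standard causal-LM objective over the whole transcript.
    loss = model(**inputs, labels=inputs["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```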
As for variable-length computation, the networks themselves do that with the number of layers they have, but that’s a fixed feature of the architecture. Instead, variable-length computation would really be a data issue: having enough data that “shows your work”. The model can learn to output intermediate results and use attention over them to continue its thought process. There is a small potential for baking transparency into the model this way: by limiting the context window, you could force the model to carry out its thoughts in human language. Then, if the model is doing a very careful analysis of whether it’s ready to take over the world, it would need to work through those calculations in the open, and you could read them.
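A toy sketch of that “show your work” setup, under the same HuggingFace-style assumptions as above: the model only ever sees a small window of its own prior scratchpad text, so any multi-step reasoning has to pass through human-readable intermediate steps that operators can log and inspect. The `ANSWER:` convention and the window size are made-up details for illustration.

```python
MAX_SCRATCHPAD_STEPS = 4   # how many prior thoughts fit in the cramped "window"

def solve_with_scratchpad(model, tokenizer, question, max_steps=8):
    scratchpad = []   # human-readable intermediate steps, loggable as-is
    for _ in range(max_steps):
        # Deliberately limited context: only the last few steps are visible,
        # so longer reasoning has to be externalized as text.
        context = question + "\n" + "\n".join(scratchpad[-MAX_SCRATCHPAD_STEPS:])
        ids = tokenizer(context, return_tensors="pt")
        out = model.generate(**ids, max_new_tokens=64)
        # Keep only the newly generated tokens as this step's "thought".
        step = tokenizer.decode(out[0][ids["input_ids"].shape[1]:],
                                skip_special_tokens=True).strip()
        scratchpad.append(step)              # operators can read every step here
        if step.startswith("ANSWER:"):       # made-up stopping convention
            break
    return scratchpad
```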
Of course, maybe it’ll learn to use code words or something, but it’s a thought. Maybe you would always be able to ask it what it was thinking about there, though the explanations it gives might not always be convincing.
Though, imagine the machine starts outputting gibberish intermediate steps and tells the human operators that thinking in human, Zipfian-distributed words is highly inefficient, and the operators shrug their shoulders and say, “Oh cool. Makes sense. Carry on.” As I hear said around here, that’s a way to die with less dignity.