Not all features of our own (human) cognition are necessary for AGI. For example, the absence of the pre-, sub-, and fully-conscious distinction would have trivial effects on an AGI.
Well, I think it’s premature to say what is or isn’t important for an AGI until we build one.
This reminds me of work like Capsule Networks.
Yeah I agree that capsule networks capture part of that, even if it’s not the whole story.
Regarding the threat of AGI — one perspective is that people accidentally stumble upon AGI architecture (perhaps a simple one, but nonetheless one), don’t recognize it …
Sure, I was talking about digging into the gory details of the neocortical algorithm. What do the layer 2 neurons calculate and how? That kind of thing. Plenty of people are doing that all the time, of course, and making rapid progress in my opinion. I find that fact a bit nerve-wracking, but oh well, what can you do? Hope for the best, I suppose, and meanwhile work as fast as possible on the “what if we succeed” question. I mean, I do actually have idiosyncratic opinions about what layer 2 neurons are calculating and how, or whatever, but wouldn’t think to blog about them, on the off chance that I’m actually right. :-P
Bigger-picture thinking, like you’re talking about, is more likely to be a good thing, I figure, although the details matter. Like, creating common knowledge that a certain path will imminently lead to AGI could lead to a frantic race between teams around the world where safety gets thrown out the window. But some big-picture knowledge is necessary for “what if we succeed”. Of course I blog on big-picture stuff myself. I think I’m pushing things in the right direction, but who knows :-/
I’m going slightly off-topic, but I couldn’t help noticing that your website says you’re doing this in your spare time. I’m surprised you’ve covered so much ground. If you don’t mind the question: how do you keep abreast of the AI field with so many papers published every year? Like, do you attend periodic meet-ups in your circle of friends/colleagues to discuss such matters? Do you opt to read summaries of papers instead of the full papers?
Oh, there are infinity papers on AI per month. I couldn’t dream of reading them all. Does anyone?
I think I’m at least vaguely aware of what the ML people are all talking about and excited about, through twitter.
Mainstream ML papers tend to be pretty low on my priority list compared to computational neuroscience, and neuroscience in general, and of course AGI safety and strategy.
Learning is easier when you have specific questions you’re desperately trying to answer :-)
Beyond that, I dunno. I don’t watch TV, and I quit my other hobbies to clear more time for this one. It is a bit exhausting sometimes. Maybe I should take a break. Oh but I do really want to write up that next blog post! And the one after that… :-)