If you think spiking neural nets are more compute-intensive, then why does this matter? It seems like we'd just get AGI faster with regular neural nets? (I think compute is more likely to be the bottleneck than data, so the data efficiency doesn't seem that relevant.)
Perhaps you think that if we use spiking neural nets, then we only need to train them for 15 human-years-equivalent to get AGI (similar to the Lifetime anchor in the bio anchors report), but that this wouldn't be true if we used regular neural nets? Seems kinda surprising.
Maybe you think that the Lifetime anchor in bio anchors is the best anchor to use and so you have shorter timelines?
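(For concreteness, here's the back-of-the-envelope arithmetic I have in mind for a Lifetime-anchor-style estimate. The 1e15 FLOP/s brain-compute figure is just a rough median along the lines of what the bio anchors report uses, and the 15 years matches the "15 human-years-equivalent" above, so treat all of these numbers as illustrative assumptions rather than precise claims.)

```python
# Rough, order-of-magnitude sketch of a Lifetime-anchor-style estimate.
# All figures here are assumptions for illustration, not claims from the thread.

SECONDS_PER_YEAR = 3.15e7
BRAIN_FLOPS = 1e15        # assumed effective FLOP/s of a human brain (rough median)
TRAINING_YEARS = 15       # "15 human-years-equivalent" of experience

lifetime_flop = BRAIN_FLOPS * TRAINING_YEARS * SECONDS_PER_YEAR
print(f"~{lifetime_flop:.0e} FLOP")  # ~5e+23 FLOP, within an OOM of the report's Lifetime anchor
```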
I wrote this specifically with the thought experiment in mind: "if humanity got 12 orders of magnitude more compute this year, what would happen in the next 12 months?"
I liked the post that Daniel wrote about that and wanted to expand on it. My claim is that even if everything mentioned in that post were tried and failed, there would still be these things to try. They are algorithms that already exist and could be scaled up if we suddenly had an absurd amount of compute. Not every argument for why standard approaches like Transformers would fail also applies to these alternative approaches.
Right, but I don’t yet understand what you predict happens. Let’s say we got 12 OOMs of compute and tried these things. Do we now have AGI? I predict no.
Ah, gotcha. I predict yes, with quite high confidence (like 95%), for 12 OOMs and using the Blue Brain Project. The others I place only small confidence in (maybe 5% each).
I really think the BBP model has enough detail that, scaled up, it could produce something very like a human neocortex, and capable of being an AGI.