Yes definitely. Based on my own estimates of approximate brain scale, it is likely that the current largest ML projects (GPT-4) are already within an OOM or so of the brain's effective parameter count (±1-2 OOM), and we will definitely have brain-scale ML systems being quite common within a decade, and probably less; hence short timelines. Strong agree that it is much easier to add compute/energy to ML models vs brains.
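For a rough sense of the numbers, here is a back-of-the-envelope sketch. Both inputs are assumptions: GPT-4's parameter count has not been published (~1e12 is a commonly cited guess), and ~1e14 synapses is a low-end figure for the brain, with the further assumption that one synapse maps to roughly one effective parameter.

```python
import math

# Back-of-the-envelope OOM comparison (illustrative assumptions only):
# - GPT-4 parameter count is not public; ~1e12 is a commonly cited guess.
# - Human brain synapse count is roughly 1e14-1e15; take 1e14 as a low end.
# - Assume ~1 synapse corresponds to ~1 effective parameter.
gpt4_params = 1e12
brain_synapses = 1e14

oom_gap = math.log10(brain_synapses / gpt4_params)
print(f"Gap: ~{oom_gap:.0f} orders of magnitude")  # ~2 OOM under these assumptions
```

Under these (very uncertain) assumptions the gap comes out at roughly 1-2 OOM, which is what I mean by "within an OOM or so".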
Have you written your estimates of brain scale up anywhere?
I’ve written up some of my preliminary thoughts and estimates here: https://www.beren.io/2022-08-06-The-scale-of-the-brain-vs-machine-learning/.
Jacob Cannell’s post on brain efficiency, https://www.lesswrong.com/posts/xwBuoE9p8GE7RAuhd/brain-efficiency-much-more-than-you-wanted-to-know, is also very good.
I’ll check your post out.
I’ve found Cannell’s post very dense/hard to read the times I’ve attempted it. I guess there’s a large inferential distance in some aspects, so lots of it goes over my head.