Some quick comments based purely on the poster (which is probably the most important part of your funnel):
“Biological Anchors” is probably not a meaningful term for your audience.
We have a 50% chance of recreating that amount of relevant computation by 2060
This seems wrong, in that we already have roughly brain-training levels of computation, or will soon, far before 2060. The remaining uncertainty is over software/algorithms, not hardware. We already have the hardware, or are about to.
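To make this concrete, here is a rough back-of-envelope sketch; the specific figures (roughly 1e15 FLOP/s for the brain and roughly 3e23 FLOP for GPT-3's training run) are commonly cited order-of-magnitude estimates, not exact numbers:

```python
# Back-of-envelope: lifetime "brain training" compute vs. a large ML training run.
# All figures are rough, commonly cited estimates; treat as order-of-magnitude only.

BRAIN_FLOP_PER_SEC = 1e15                   # assumed brain compute (order of magnitude)
SECONDS_TO_AGE_30 = 30 * 365 * 24 * 3600    # ~9.5e8 seconds of lived "training" experience

lifetime_anchor = BRAIN_FLOP_PER_SEC * SECONDS_TO_AGE_30   # ~1e24 FLOP
gpt3_training = 3e23                                       # reported estimate for GPT-3

print(f"Lifetime anchor: ~{lifetime_anchor:.1e} FLOP")
print(f"GPT-3 training:  ~{gpt3_training:.1e} FLOP")
print(f"Ratio:           ~{lifetime_anchor / gpt3_training:.1f}x")
```

On these very rough numbers, a single large 2020-era training run is already within an order of magnitude of a human lifetime's worth of brain compute, which is the sense in which the remaining bottleneck looks like algorithms rather than hardware.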
Once AI is capable of ML programming, it could improve its algorithms, making itself better at ML programming
This is overly specific: why only ML programming? What if the lowest-hanging fruit is actually in CUDA programming? Or just moving to different hardware? Or designing new hardware? Or better networking tech? Or one weird trick to make a trillion dollars and quickly scale to more hardware? And so on. The idea that there are enormous gains in further optimization of ML architecture alone, and that this unending cornucopia of low-hanging optimization fruit will still be bountiful and limitless by the time we actually get AGI, suggests a very naive view of ML and neuroscience.
Just replace “ML programming” with “science and engineering R&D” or similar.
Training AI requires us to select an objective function to be maximized, yet coming up with an unproblematic objective function is really hard.
Many smart people will bounce hard off this, because they have many examples where coming up with an unproblematic objective function isn't hard at all. It's trivial to write the correct objective function for Chess or Go. It was trivial to design the correct utility function for Atari, and even for Minecraft (which doesn't have a score!); it was also trivial for optimizing datacenter power usage, for generating high-quality images from text, and for every other modern example of DL.
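As a purely illustrative sketch of what such a "trivial" objective looks like for a two-player board game (the function and names below are hypothetical, not taken from any particular library):

```python
# A complete, "unproblematic" objective for a two-player zero-sum game like
# Chess or Go: reward the agent only for the final outcome of the game.

def game_objective(outcome: str) -> float:
    """Terminal reward from the agent's perspective.

    outcome: "win", "loss", or "draw" at the end of the game.
    """
    return {"win": 1.0, "loss": -1.0, "draw": 0.0}[outcome]

# That's the whole specification. Everything hard about Chess or Go lives in
# the optimization (search, self-play, function approximation), not in
# writing down what counts as success.
```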
I would change this to something like:
“Training AI requires us to select an objective function to be maximized, yet coming up with an unproblematic objective function for AGI—agents with general intelligence beyond that of humans—seems really hard”.
Thanks, Jacob! This is helpful. I’ve made the relevant changes to my copy of the poster.
Regarding the ‘biological anchors’ point: by prefixing with the word ‘relevant’, I intended to capture the notion that it is not just the amount of computation that matters. When expanding on that point in conversation, I am careful to point out that generating high levels of computation isn’t sufficient for creating human-level intelligence. I agree with what you say. I also think you’re right that the term “biological anchors” is not very meaningful to my audience. Given that, in my experience, many academics see the poster but don’t ask questions, it’s probably a good idea for me to substitute another term for this one. Thanks!