It’s clear, however, that a bee’s brain can perform a wide range of tasks besides few-shot image classification, while the machine learning model developed in (Lee et al., 2019) cannot.
The abstract objection here is “if you choose an ML model that has been trained just for a specific task (few-shot learning), on priors you should expect it to be more efficient than an evolution-designed organism that has been trained for a whole bunch of stuff, of which the task you’re considering is just one example”. This can cash out in several ways:
1. Bees presumably use their eyesight for all sorts of things (e.g. detecting movement, depth estimation, etc) whereas the ML model only needs to use features useful for classification.
2. Even restricting to classification, bees are probably way more general than ML models. I suspect the few-shot ML models you look at would probably not generalize to very different types of images (e.g. stylized images), whereas bees presumably would.
3. I don’t know how well bee training works; it seems plausible to me that sometimes bees just want to do something else and that’s why they don’t land on the right target. So maybe they can classify with 90% accuracy, but only actually go to the correct lever 60% of the time.
You could say that this means that a transformative model will also be way more efficient than a human at a transformative task—I think this is somewhat true, but mostly depends on whether transformative tasks will require a fairly broad and general understanding of the world + doing various tasks within it. My guess is that while they don’t need as much generality as a human has, the human-TAI ratio will be much smaller than the bee-image classifier ratio.
My rough guess is that I’d change the conclusion by a factor of 50 or so, suggesting that a transformative model would require 20x less compute than the human brain.
I mostly agree with your comment, but I’m actually very unsure about 2 here: I think I recall bees seeming surprisingly narrow and bad at abstract shapes. Guille would know more here.
I think Rohin’s second point makes sense. Bees are actually pretty good at classifying abstract shapes (I mention a couple of studies that refer to this in the appendix about my choice of benchmark, such as Giurfa (1996)), so they might plausibly be able to generalize to stylized images.