There are a lot of different ways you can talk about “efficiency” here. The main thing I am thinking about, with regard to the key question “how much FLOP would we expect transformative AI to require?”, is whether to add, when using a neural net anchor (not evolution), a 1-3 OOM penalty to FLOP needs because 2022-AI systems are less sample efficient than humans (they require more data to produce the same capabilities), with this penalty decreasing over time given expected algorithmic progress. The next question is how much more efficient potential AI (e.g., 2100-AI, not 2022-AI) could be given the fundamentals of silicon vs. neurons, so we would know how much algorithmic progress could affect this.
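To make the penalty concrete, here is a minimal back-of-envelope sketch of how it would feed into a FLOP estimate. Every number in it (the base anchor estimate, the starting penalty, the halving time) is an illustrative assumption of mine, not a figure from the report:

```python
# Sketch: a sample-efficiency penalty on a neural-net-anchor FLOP estimate,
# decaying over time with algorithmic progress. All inputs are assumptions.

base_flop = 1e30            # assumed neural-net-anchor training FLOP estimate
penalty_ooms_2022 = 2.0     # assumed 2022 penalty: middle of the 1-3 OOM range
halving_time_years = 4.0    # assumed years for progress to halve the penalty

def flop_needed(year: int) -> float:
    """FLOP estimate with a penalty (in OOMs) that halves every few years."""
    ooms = penalty_ooms_2022 * 0.5 ** ((year - 2022) / halving_time_years)
    return base_flop * 10 ** ooms

for year in (2022, 2030, 2040):
    print(f"{year}: ~{flop_needed(year):.2e} FLOP")
# 2022: ~1e32 (full 2-OOM penalty); by 2040 the penalty has mostly decayed.
```

The point of the sketch is just that the penalty and its decay rate are separate knobs, and both move the bottom line by orders of magnitude.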
I think it is pretty clear right now that 2022-AI is less sample efficient than humans. I think other forms of efficiency (e.g., power efficiency, efficiency of SGD vs. evolution) are less relevant to this.
To me this isn’t clear. Yes, we’re better one-shot learners, but I’d say the most likely explanation is that the human training set is larger and that much of that training set is hidden away in our evolutionary past.
It’s one thing to estimate evolution FLOP (and as Nuño points out, even that is questionable). It strikes me as much more difficult (and even more dubious) to estimate the “number of samples” or “total training signal (bytes)” over one’s lifetime / evolution.
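A quick sketch of why that lifetime estimate is so dubious: the answer swings by many orders of magnitude depending on what you count as “training signal.” The rates below are illustrative assumptions, not measurements:

```python
# Sketch: lifetime "training signal" totals under different assumed rates.
# Every bytes/second figure is an illustrative assumption.

seconds_30_years = 30 * 365.25 * 24 * 3600   # ~9.5e8 seconds

# Assumed rates of "effective training signal", from a sparse reward-like
# signal up to something like raw sensory bandwidth:
for label, bytes_per_sec in [("sparse signal", 1e1),
                             ("compressed percepts", 1e3),
                             ("raw sensory stream", 1e6)]:
    total = seconds_30_years * bytes_per_sec
    print(f"{label}: ~{total:.1e} bytes over 30 years")
# Totals span ~1e10 to ~1e15 bytes: a 5-OOM spread driven entirely by the
# definition of "signal", before evolution even enters the picture.
```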