Strong agree, WBE seems far out to me too, like 100 years, although really who knows. By contrast “understanding the brain and building AGI using similar algorithms” does not seem far out to me—well, it won’t happen in 5 years, but I certainly wouldn’t rule it out in 20 or 30 years.
Yes, brains are big (and complicated), but I don’t know how much of that can be avoided.
I think the bigness and complicatedness of brains consists in large part of things that will not be necessary to include in the source code of a future AGI algorithm. See the first half of my post here, for example, for why I think that.
I’d have thought that the main reason WBE would come up would be ‘understandability’ or ‘alignment’ rather than speed, though I can see why at first glance people would say ‘reverse engineering the brain (which exists) seems easier than making something new’ (even if that is wrong).
There’s a normative question of whether it’s (A) good or (B) bad to have WBE before AGI, and there’s a forecasting question of whether WBE-before-AGI is (1) likely by default, vs (2) possible with advocacy/targeted funding/etc., vs (3) very unlikely even with advocacy/targeted funding/etc. For my part, I vote for (3), and therefore I’m not thinking very hard about whether (A) or (B) is right.
I’m not sure whether this part of your comment is referring to the normative question or the forecasting question.
I’m also not sure if when you say “reverse engineering the brain” you’re referring to WBE. For my part, I would say that “reverse-engineering the brain” = “understanding the brain at the algorithm level” = brain-inspired AGI, not WBE. I think the only way to get WBE before brain-inspired AGI is to not understand the brain at the algorithm level.
I’m not sure whether this part of your comment is referring to the normative question or the forecasting question.
Normative. Say that aligning something much smarter/more complicated than you (or your processing**) is difficult. The obvious fix would be: can we make people smarter?* Digital enhancement seems like it could be the easier route, at least in crude ways (more computers, more processing), though that may be harder than it sounds: serial work has to be finished before it can be used, parallel work has to be communicated between machines, and so on.
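One way to read that parenthetical is as Amdahl's-law-style arithmetic: whatever fraction of the work is serial (or spent on communication) caps the speedup no matter how many machines you add. A minimal sketch, with made-up numbers purely for illustration:

```python
# Toy Amdahl's-law illustration (all numbers are hypothetical).
def speedup(serial_fraction: float, n_workers: int) -> float:
    """Ideal speedup when serial_fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_workers)

# Even with 1000 workers, 10% serial work caps the speedup near 10x.
for n in (10, 100, 1000):
    print(n, round(speedup(0.10, n), 1))
```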
Digital enhancement might help with analysis, or with something like bandwidth. (Being able to process more information might make it easier to analyze the output of a process: if we wanted to evaluate a GPT-like system’s ability to generate poetry, and it can generate ‘poetry’ faster than we can read or rate it, then we’re the bottleneck.)
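To make the bottleneck point concrete, here is a rough sketch (again, every number is made up for illustration): once generation is much faster than human rating, total evaluation time is dominated by the human side, and speeding the model up further barely matters.

```python
# Toy illustration of the evaluation bottleneck (all numbers are hypothetical).
poems_to_evaluate = 10_000

model_poems_per_hour = 50_000   # assumed generation speed
human_ratings_per_hour = 30     # assumed rating speed for one careful reader

generation_hours = poems_to_evaluate / model_poems_per_hour   # 0.2 hours
rating_hours = poems_to_evaluate / human_ratings_per_hour     # ~333 hours

# The slower stage dominates the pipeline.
total_hours = max(generation_hours, rating_hours)
print(f"generation: {generation_hours:.1f} h, rating: {rating_hours:.1f} h, total ~ {total_hours:.1f} h")
```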
*Or make algorithms simpler/easier to understand.
**Better ways of analyzing chess might help someone understand a (current) position in a chess game better, or just allow phones to compete with humans (though I don’t know how much of that is better algorithms versus more powerful phones).
Thanks!