I’m just going to try to lay out my thoughts on this. Forgive me if it’s a bit of an aimless ramble.
If you want to calculate how much computation power we need for TAI, we need to know two things: a method for creating TAI, and how much computation power such a method needs to create TAI. It seems like biological timelines are an attempt to dodge this, and I don’t see how or why that would work. Maybe I’m mistaken here, but the report seems to just assert, flatly, “If we get ML methods to function at human brain levels, that will result in TAI.” But why should ML methods create TAI if you give them the same computation power as the human brain? Where did that knowledge come from? We don’t know what the human brain is doing to create intelligence, so how do we know that ML can perform as well at similar levels of computation power? We have this open question, “Can modern ML methods result in TAI, and if so, how much computation power do they need to do so?”, but the answer “Less than or equal to how much the human brain uses” doesn’t obviously connect to anything. Where does it come from? Why is it the case? Why couldn’t it be that modern ML only works to create TAI if you give it a million times more computation power than the human brain? What fact makes that impossible? I don’t see it; maybe I’m missing something.
Edit: This is really bothering me, so this is an addendum to try and communicate where exactly my confusion lies. Any help would be appreciated.
I understand why we’re tempted to use the human brain as a comparison: it’s a concrete example of general intelligence. So, assuming our methods can use the computation available to the brain at least as efficiently as whatever evolution cooked up, that much computation is enough for at least general intelligence, which in the worst case is enough to be transformative on its own, and is almost certainly beaten by something easier to build.
So the key question is about our efficiency versus evolution’s, and I don’t understand how anything could be said regarding this. What if we went back to the point where programming was invented and gave the people there computers capable of exceeding the computation of the human brain? It would take them some amount of time to produce TAI; obviously the computing power alone isn’t enough. What if you gave them a basic course on machine learning? If you plug the most basic ML techniques into such a computer, I don’t naively expect something rivaling general intelligence. So some amount of development is needed, and we’re left with the question, “How much do we need to advance the field of machine learning before it can result in transformative AI?” The problem is, that’s what we’re trying to answer in the first place! If you need to figure out how much research time is needed to develop AI in order to work out how long it’ll take to develop AI, I don’t see how that’s useful information.
I don’t know how to phrase this next bit in a way that doesn’t come across as insulting; please understand that I’m genuinely trying to understand this and would appreciate more detail.
As far as I can tell, the paper’s way of dealing with the question of the difference in efficiency is this: there is a race from the dawn of the neuron to the development of the human being, versus the dawn of computation to the point where we have human-brain levels of computation available to us, and the author thinks those should come out about the same based on their gut instinct, maybe an order of magnitude off.
This question is the question. The factor by which our design for an intelligent agent is more or less efficient than the brain’s is, literally, the most important term in determining how much computation power is needed to run it. If we have a design that needs ten times more computation power to function similarly, we need ten times more computation power; the factor carries one-to-one into the answer. And it’s answered on a gut feeling. I don’t get this at all, I’m sorry. I feel I must be wrong on this somewhere, but I don’t see where. I really would appreciate it if someone could explain it to me.
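To make it concrete why this one number dominates, here is a minimal toy sketch of my own (not anything from the report; the 1e15 FLOP/s brain figure and the names are placeholders I’ve assumed for illustration) showing how the guessed efficiency factor multiplies straight through into the compute requirement:

```python
# Toy illustration (my assumption, not the report's model): whatever factor you
# guess for "our design's inefficiency relative to the brain" passes one-to-one
# into the compute you conclude is required.

BRAIN_FLOPS = 1e15  # assumed rough order-of-magnitude figure for brain compute


def required_compute(inefficiency_factor: float) -> float:
    """Compute needed to run a TAI-capable design, given how many times less
    efficient that design is than the brain (1.0 = exactly as efficient)."""
    return BRAIN_FLOPS * inefficiency_factor


# Shifting the gut-feeling factor by an order of magnitude shifts the answer
# by exactly an order of magnitude.
for factor in (0.1, 1.0, 10.0, 1e6):
    print(f"{factor:>9}x less efficient than the brain "
          f"-> {required_compute(factor):.0e} FLOP/s needed")
```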