This made me think about… not sure if I can express it coherently… what is the relation between AI capabilities and the total performance of humankind (machines and tools included)?
Basically, the idea is that if we can build a machine with IQ 1000000, it will be able to solve aging, because from its perspective, aging will be simple. Or it may kill us instead. (This is a metaphor, don’t take it literally.) Building one machine with IQ 300 will probably not be sufficient to solve aging, even if it is literally smarter than any existing human. I mean, there are currently smart people working on that problem; they solve something here and there, but it’s all very complicated, so they are still at the beginning. The machine with IQ 300 might solve a bit more, but the problem may be so complicated that it wouldn’t solve everything in 100 years anyway. What about a thousand machines with IQ 300? Well, maybe yes, maybe no. (Also, there will be other urgent problems to solve, such as how to prevent wars, how to feed people, how to prevent bad people from building their own IQ 300 machines, etc.)
Now from the opposite angle: even without AIs, just talking about normal people, organizations sometimes lack people, lack funding, lack talent. These are somewhat related: if you have no money, you can’t hire people… sometimes you can run an organization with volunteers only, but that usually sucks, because everyone is unreliable (their actual job comes first). But, generally speaking, an organization may be bottlenecked on money (they know a few competent people who would like to work on the problem, but those people need to pay their bills, and the organization cannot afford to pay them), or bottlenecked on talent (they actually have enough funding to hire a dozen people, but they can’t find the right person: either no one has the specific talent, or the people who have it are not interested in working for this organization, or the organization is unable to recognize who has the talent and who does not). And… where exactly do the current AIs (GPT-4) fit in this picture, and where will the AIs of tomorrow fit, which will be smarter than today’s but maybe not superhumanly smart yet?
For example, imagine that there is an organization doing advanced medical research, and you tell them all to take a one-week break, during which you teach them about GPT-4: what it can do, what it can’t do, and how it works, so that they have a realistic idea of what to expect. Then they return to their original work, but you give each of them a paid GPT-4 account, plus maybe a paid personal assistant (someone with decent knowledge of medicine, though no superstar, who also happens to be good with computers), basically to provide a human user interface for the researcher, so that the researcher does not waste time on prompt engineering and can just tell the assistant to “figure this out, do some sanity check, and report the results to me”. Let’s assume the GPT-4 is also trained on Sci-Hub, or somehow connected to it. Would something like this help a lot… or not at all?
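Concretely, the assistant’s job could be as mundane as the loop below. This is a minimal sketch, not a real system: it assumes the official `openai` Python client, `paper_text` stands in for however the literature access (Sci-Hub or otherwise) would actually be wired up, and the “sanity check” is just a second pass asking the model to flag unsupported claims.

```python
# Sketch of the "figure this out, sanity-check it, report back" loop
# that the assistant would run on the researcher's behalf.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_gpt4(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def figure_out_and_sanity_check(question: str, paper_text: str) -> str:
    # Step 1: "figure this out"
    answer = ask_gpt4(
        f"Using only this paper, answer the question.\n\n"
        f"Paper:\n{paper_text}\n\nQuestion: {question}"
    )
    # Step 2: "do some sanity check" -- a second pass that hunts for
    # claims in the answer that the paper does not actually support
    check = ask_gpt4(
        f"Here is an answer derived from a paper:\n{answer}\n\n"
        f"Paper:\n{paper_text}\n\n"
        f"List any claims in the answer that the paper does not support."
    )
    # Step 3: "report the results to me"
    return f"ANSWER:\n{answer}\n\nSANITY CHECK:\n{check}"
```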
I think I am asking whether the actual bottleneck for current medical research is the… part where extra intelligence could help… or rather something in the real world (such as preparing the samples, injecting them into mice, and waiting a few weeks to see what happens), where having a lot of cheap intelligence that is still below the level of the current researchers would actually not make a big difference.
Or I guess the question is: at which moment exactly can AI start being significantly helpful with aging research? Do we need to wait for the Singularity, or is there something we could do right now, if we notice the opportunity?