Two problems:
(1) Frankly, when I look at current LLM results, I am seeing more resolution in many of the replies than humans are cognitively capable of producing.
See below. Very few living humans, if any at all, could generate this answer without help:
(remember, ChatGPT is not actually accessing an internal Linux terminal)
This tells me that each of ChatGPT’s weights is probably worth more than a human synapse. The weights likely contain more usable bits.
(2) AGI may easily be worth enough money to justify spending ~3,000 times as much as we currently spend. GPT-3 cost ~$450K to train.
If we have to spend $1.5 billion to train a human-equivalent AGI, and $15 billion for a superintelligence... where’s my checkbook. Any billionaire today would take that trade if they knew the chance of success were high.
Ditto running costs. Supposedly it costs a “few pennies” to generate a reply. Let’s assume it’s one nickel, or $0.05. If we spend 3,000 times as much per reply, that’s $150.
That’s only a little more than what a top-tier coder in the Bay Area is paid for the time it takes to produce a few ‘ChatGPT’-grade replies right now. If we could just reduce the cost to run the model by a factor of ~10, it would become cheaper than most humans, maybe cheaper than all humans capable of generating answers of this quality.
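For concreteness, here is the back-of-the-envelope arithmetic behind those figures (every number is just a rough assumption carried over from the paragraphs above):

```python
# Back-of-the-envelope scaling of the cost figures assumed above.
gpt3_training_cost = 450_000        # ~$450K, the training-cost estimate used here
scale_factor = 3_000                # "spend ~3,000 times as much"

agi_training_cost = gpt3_training_cost * scale_factor
print(f"Scaled training cost: ${agi_training_cost:,}")      # $1,350,000,000, roughly the ~$1.5B above

cost_per_reply = 0.05               # assume one nickel per ChatGPT reply
scaled_reply_cost = cost_per_reply * scale_factor
print(f"Scaled cost per reply: ${scaled_reply_cost:,.2f}")  # $150.00
```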
I just checked, and while the other answers are perfect, math.log(2)**math.exp(2) is 0.06665771193088375. ChatGPT is off by almost an order of magnitude when given a quantitative question it can’t look up in its training data.
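For anyone who wants to reproduce the check, this is all it takes in a stock Python interpreter:

```python
import math

# ln(2) raised to the power e^2, the expression ChatGPT was asked to evaluate
print(math.log(2) ** math.exp(2))  # 0.06665771193088375
```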
Yep. Two out of three is still beyond most human savants, but it is a failure that the machine won’t try any “mental math” to notice that its answer is off by a lot.
Obviously, future versions of the product will just have isolated/containerized Linux terminals and Python interpreters they can query, so this is a temporary problem.
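As a rough sketch of what that could look like (the run_python_sandboxed helper is hypothetical, and a real deployment would add genuine containerization on top of the bare subprocess shown here):

```python
import subprocess

def run_python_sandboxed(code: str, timeout_s: float = 5.0) -> str:
    """Run model-generated code in a separate Python process.

    This gives only process-level isolation; a production setup would
    wrap it in a container or VM with no network and a locked-down
    filesystem before executing untrusted code.
    """
    result = subprocess.run(
        ["python3", "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout_s,
    )
    return result.stdout if result.returncode == 0 else result.stderr

# Instead of guessing at arithmetic, the model emits code and reads the output:
print(run_python_sandboxed("import math; print(math.log(2) ** math.exp(2))"))
# -> 0.06665771193088375
```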
Are you aware of any work on quantifying this? I’ve been wondering about it for years. It seems extremely important.
I only know of efforts to analyze how much noise and signal degradation the brain suffers from. Probably a lot: apparently neurons operate just above the noise floor below which they wouldn’t work at all.
This is good for power savings in an animal that has to find its own calories, but bad if outright intelligence is pretty much all you care about.