If there are people who say “current AIs think many orders of magnitude faster than humans”, then I agree that those people are saying something kinda confused and incoherent, and I am happy that you are correcting them.
I don’t immediately recall hearing people say that, but sure, I believe you, people say all kinds of stupid things.
It’s just not a good comparison because current AIs are not doing the same thing that humans do when humans “think”. We don’t have real human-level AI, obviously. E.g. I can’t ask an LLM to go found a new company, give it some seed capital and an email account, and expect it to succeed. So basically, LLMs are doing Thing X at Speed A, and human brains are doing Thing Y at Speed B. I don’t know what it means to compare A and B.
There’s a different claim, “we will sooner or later have AIs that can think and act at least 1-2 orders of magnitude faster than a human”. I see that claim as probably true, although I obviously can’t prove it.
I agree that the calculation “1 GHz clock speed / 100 Hz neuron firing rate = 1e7” is not the right calculation (although it’s not entirely irrelevant). But I am pretty confident about the weaker claim of 1-2 OOM, given some time to optimize the (future) algorithms.
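For concreteness, here is the naive ratio being disputed, as a quick sketch (the 1 GHz and 100 Hz figures are the round numbers from the quote, not measurements):

```python
# Naive clock-speed comparison (the calculation being called "not right" above):
clock_speed_hz = 1e9   # ~1 GHz CPU clock
neuron_rate_hz = 100   # ~100 Hz neuron firing rate (rough upper end)

naive_ratio = clock_speed_hz / neuron_rate_hz
print(f"naive ratio: {naive_ratio:.0e}")  # 1e+07, i.e. 7 orders of magnitude

# The weaker claim defended here is only 1-2 OOM, i.e. a 10-100x speedup:
weaker_claim_range = (10, 100)
```

The point is just that 7 OOM falls out of the clock-speed arithmetic, while the claim actually being defended is 5-6 OOM smaller.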
If there are people who say “current AIs think many orders of magnitude faster than humans”, then I agree that those people are saying something kinda confused and incoherent, and I am happy that you are correcting them.
Eliezer himself has said (e.g. in his 2010 debate with Robin Hanson) that one of the big reasons he thinks CPUs can beat brains is because CPUs run at 1 GHz while brains run at 1-100 Hz, and the only barrier is that the CPUs are currently running “spreadsheet algorithms” and not the algorithm used by the human brain. I can find the exact timestamp from the video of the debate if you’re interested, but I’m surprised you’ve never heard this argument from anyone before.
There’s a different claim, “we will sooner or later have AIs that can think and act at least 1-2 orders of magnitude faster than a human”. I see that claim as probably true, although I obviously can’t prove it.
I think this claim is too ill-defined to be true, unfortunately, but insofar as it has the shape of something I think will be true, it will be because of throughput or software progress, not because of latency.
I agree that the calculation “1 GHz clock speed / 100 Hz neuron firing rate = 1e7” is not the right calculation (although it’s not entirely irrelevant). But I am pretty confident about the weaker claim of 1-2 OOM, given some time to optimize the (future) algorithms.
If the claim here is that “for any task, there will be some AI system using some unspecified amount of inference compute that does the task 1-2 OOM faster than humans”, I would probably agree with that claim. My point is that if this is true, it won’t be because of the calculation “1 GHz clock speed / 100 Hz neuron firing rate = 1e7”, which as far as I can tell you seem to agree with.
Thanks. I’m not Eliezer so I’m not interested in litigating whether his precise words were justified or not. ¯\_(ツ)_/¯
I’m not sure we’re disagreeing about anything substantive here.
If the claim here is that “for any task, there will be some AI system using some unspecified amount of inference compute that does the task 1-2 OOM faster than humans”, I would probably agree with that claim.
That’s probably not what I meant, but I guess it depends on what you mean by “task”.
For example, when a human is tasked with founding a startup company, they have to figure out, and do, a ton of different things, from figuring out what to sell and how, to deciding what subordinates to hire and when, to setting up an LLC and optimizing it for tax efficiency, to setting strategy, etc. etc.
One good human startup founder can do all those things. I am claiming that one AI can do all those things too, but at least 1-2 OOM faster, wherever those things are unconstrained by waiting-for-other-people etc.
For example:

- If the AI decides that it ought to understand something about corporate tax law, it can search through online resources and find the answer at least 10-100× faster than a human could (or maybe it would figure out that the answer is not online and that it needs to ask an expert for help, in which case it would find such an expert and email them, also 10-100× faster).
- If the AI decides that it ought to post a job ad, it can figure out where best to post it, and how to draft it to attract the right type of candidate, and then actually write it and post it, all 10-100× faster.
- If the AI decides that it ought to look through real estate listings to make a shortlist of potential office spaces, it can do it 10-100× faster.
- If the AI decides that it ought to redesign the software prototype in response to early feedback, it can do so 10-100× faster.
- If the AI isn’t sure what to do next, it figures it out, 10-100× faster. Etc. etc.

Of course, the AI might use or create tools like calculators or spreadsheets or LLMs, just as a human might, when it’s useful to do those things. And the AI would do all those things really well, at least as well as the best remote-only human startup founder.
That’s what I have in mind, and that’s what I expect someday (definitely not yet! maybe not for decades!).
E.g. I can’t ask an LLM to go found a new company, give it some seed capital and an email account, and expect it to succeed.
In general, would you expect a human to succeed under those conditions? I wouldn’t, but then most of the humans I associate with on a regular basis aren’t entrepreneurs.
There’s a different claim, “we will sooner or later have AIs that can think and act at least 1-2 orders of magnitude faster than a human”. I see that claim as probably true, although I obviously can’t prove it.
Without bounding what tasks we want the computer to perform faster than the person, one could argue that we’ve met that criterion for decades. It’s definitely a matter of “most people” and “most computers” on both sides, but there’s a lot of math that a majority of humans can’t do quickly or can’t do at all, whereas it’s trivial for most computers.
It is indeed tricky to measure this stuff.