This is incorrect, and you’re a world class expert in this domain.
This is a rather rude response. Can you rephrase that?
**All tasks involved in building computer chips, robotic parts (and all lower level feeder tasks, power generation, mining, and logistics) have objective and measurable feedback.** Bolded because I think this is a key point and a key crux that you may not have realized. Many of your “expert” domain tasks do not get such feedback, or the feedback is unreliable. For example, an attorney who can argue 1 case in front of a jury every 6 months cannot reliably refine their policy based on win/loss because the feedback is so rare and depends on so many uncontrolled variables.
I don’t like this point. Many expert domain tasks have vast quantities of historical data we can train evaluators on. Even if the evaluation isn’t as simple to quantify, deep learning intuitively seems capable of tackling it. Humans also manage to work around the difficulty of evaluation and gain competitive advantages as experts in those fields. Good and bad lawyers exist. (I don’t think it’s a great example, as going to trial isn’t a huge part of most lawyers’ jobs.)
Having a more objective and immediate evaluation function, if that’s what you’re saying, doesn’t seem like an obvious massive benefit. The output of this evaluation function with respect to labor output over time can still be pretty discontinuous, so it may not effectively be that different from waiting 6 months between attempts to know whether success happened.
An example of this is how long it takes to build and verify whether a new chip architecture actually improves speed, or how often ideas have to be backtracked and scrapped.
This is a rather rude response. Can you rephrase that?
If I were to rephrase, I might say something like “Just like historical experts Einstein and Hinton, it’s possible to be a world class expert but still incorrect. I think that focusing on the human experts at the top of the pyramid is neglecting what would cause AI to be transformative, as automating 90% of humans matters a lot more than automating 0.1%. We are much closer to automating the 90% case because...”
I don’t like this point. Many expert domain tasks have vast quantities of historical data we can train evaluators on. Even if the evaluation isn’t as simple to quantify, deep learning intuitively seems capable of tackling it. Humans also manage to work around the difficulty of evaluation and gain competitive advantages as experts in those fields. Good and bad lawyers exist. (I don’t think it’s a great example, as going to trial isn’t a huge part of most lawyers’ jobs.)
Having a more objective and immediate evaluation function, if that’s what you’re saying, doesn’t seem like an obvious massive benefit. The output of this evaluation function with respect to labor output over time can still be pretty discontinuous, so it may not effectively be that different from waiting 6 months between attempts to know whether success happened.
For lawyers: the confounding variables mean a robust, optimal policy is likely not possible. A court outcome depends on variables like [facts of the case, age and gender and race of the plaintiff/defendant, age and gender and race of the attorneys, age and gender and race of each juror, who ends up the foreman, news articles on the case, meme climate at the time the case is argued, the judge, the law’s current interpretation, scheduling of the case, location the trial is held...]
It would be difficult to develop a robust and optimal policy with this many confounding variables. It would likely take more cases than any attorney could argue or review in a lifetime.
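As a rough illustration of why, here is a toy simulation. The win rates are assumptions chosen purely for illustration; only the one-trial-every-6-months cadence comes from the discussion above. The point is that a genuinely better trial policy is nearly impossible to distinguish from a baseline using outcome feedback alone at the rate real verdicts arrive.

```python
import random

# Toy model with assumed numbers: a "better" policy wins 55% of cases, a baseline
# wins 50%, and everything else (jury, judge, facts, news cycle) is noise.
random.seed(0)

def observed_gap(n_cases: int) -> float:
    """Observed win-rate difference between the two policies over n_cases each."""
    better = sum(random.random() < 0.55 for _ in range(n_cases))
    baseline = sum(random.random() < 0.50 for _ in range(n_cases))
    return (better - baseline) / n_cases

for n in (10, 100, 1000):
    print(f"{n:5d} cases per policy -> observed gap {observed_gap(n):+.3f}")

# At one jury trial every 6 months, 100 cases is a 50-year career, so a 5-point
# edge is effectively invisible in the feedback an individual attorney receives.
```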
Contrast this to chip design. Chip A, using a prior design, works. Design modification A’ is being tested. The universe objectively evaluates design A’, and measurable parameters (max frequency, power, error rate, voltage stability) can be obtained.
The problem can also be subdivided. You can test parts of the chip, carefully exposing them to the same conditions they would see in the fully assembled chip, and you can subdivide all the way down to the transistor level. It is mostly path independent: it doesn’t matter what conditions the submodule saw yesterday or an hour ago, only right now (with a few exceptions).
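A minimal sketch of what this objective, immediate, subdividable feedback looks like; the field names and thresholds below are hypothetical placeholders rather than real chip specifications.

```python
def submodule_passes(measured: dict, spec: dict) -> bool:
    """Pass/fail is a direct comparison of physical measurements against targets."""
    return (measured["max_freq_ghz"] >= spec["min_freq_ghz"]
            and measured["power_w"] <= spec["max_power_w"]
            and measured["error_rate"] <= spec["max_error_rate"])

# Example: bench-test one submodule of design A' in isolation.
spec = {"min_freq_ghz": 3.0, "max_power_w": 5.0, "max_error_rate": 1e-9}
measured = {"max_freq_ghz": 3.2, "power_w": 4.1, "error_rate": 2e-10}
print(submodule_passes(measured, spec))  # True -> keep the modification, else backtrack
```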
Delayed feedback slows convergence to an optimal policy, yes.
You cannot stop time, argue a single point to a jury, try a different approach, and repeat until you discover the method that works. {note this does give you a hint as to how an ASI could theoretically solve this problem}
I say this generalizes to many expert tasks like [economics, law, government, psychology, social sciences, and others]. Feedback is delayed and contains many confounding variables independent of the [expert’s actions].
By contrast, all tasks involved with building [robots, compute], with the exception of tasks that fit into the above (arguing for the land and mineral permits to be granted for the AI-driven gigafactories and gigamines), offer objective feedback.
the confounding variables mean a robust, optimal policy is likely not possible. A court outcome depends on variables like [facts of the case, age and gender and race of the plaintiff/defendant, age and gender and race of the attorneys, age and gender and race of each juror, who ends up the foreman, news articles on the case, meme climate at the time the case is argued, the judge, the law’s current interpretation, scheduling of the case, location the trial is held...]
I don’t see why there is no robust optimal policy. A robust optimal policy doesn’t have to always win. The optimal chess policy can’t win with just a king on the board. It just has to be better than any alternative, per the definition of optimal. I agree it’s unlikely any human lawyer has an optimal policy, but this isn’t unique to legal experts.
There are confounding variables, but you could also just restate evaluation as trial win-rate (or more succinctly, trial Elo) instead of as a function of those variables. Likewise, you can restate chip evaluation’s confounding variables as being all the atoms and forces that contribute to the chip. The evaluation function for lawyers, and for many of your examples, is objective. The case gets won, lost, settled, dismissed, etc.
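For concreteness, the “trial Elo” framing is just the standard Elo update applied to case outcomes; how opponents and the K-factor would be defined for litigation is an assumption left open here.

```python
def elo_update(rating: float, opponent: float, score: float, k: float = 32.0) -> float:
    """Standard Elo update: score is 1 for a win, 0 for a loss, 0.5 for a draw/settlement."""
    expected = 1 / (1 + 10 ** ((opponent - rating) / 400))
    return rating + k * (score - expected)

print(elo_update(1500, 1500, 1.0))  # win against an equal opponent -> 1516.0
```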
The only difference is that it takes longer to verify generalizations are correct if we go out of distribution with a certain case. In the case of a legal-expert-AI, we can’t test hypotheses as easily. But this still may not take as long as you think. Since we will likely have jury-AI when we approach legal-expert-AI, we can probably just simulate the evaluations relatively easily (as legal-expert-AI is probably capable of predicting jury-AI). In the real world, a combination of historical data and mock trials helps lawyers verify their generalizations are correct, so it wouldn’t even be that different from how it is today (just much better). In addition, process-based evaluation probably does decently well here, and it wouldn’t need any of these more complicated simulations.
You cannot stop time, argue a single point to a jury, try a different approach, and repeat until you discover the method that works. {note this does give you a hint as to how an ASI could theoretically solve this problem}
Maybe not, but you can conduct mock trials and look at billions of historical legal cases and draw conclusions from that (human lawyers already read a lot). You can also simulate a jury and judge directly instead of running a mock trial. I don’t see why this won’t be good enough for both humans and an ASI. The problem has high dimensionality, as you stated, with many variables mattering, but a near-optimal policy can still be had by capturing a subset of features. As for chip-expert-AI, I don’t see why it will definitely converge to a globally optimal policy.
All I can see is that initially legal-expert-AI will have to put more work into creating an evaluation function and simulations. However, chip-expert-AI has its own problem in that it’s almost always working out of distribution, unlike many of these other experts. I think experts in other fields won’t be that much slower than chip-expert-AI. The real difference I see here is that the theoretical limits of output of chip-expert-AI are much higher, and legal-expert-AI or therapist-expert-AI will reach the end of the sigmoid much sooner.
I say this generalizes to many expert tasks like [economics, law, government, psychology, social sciences, and others]. Feedback is delayed and contains many confounding variables independent of the [expert’s actions].
Is there something significantly different between a confounding variable that can’t be controlled, like scheduling, and an unknown governing theoretical framework that is only found experimentally? Both of these can still be dealt with. For the former, you may develop different policies for different schedules. For the latter, you may also intuit the governing theoretical framework.
So in this context, I was referring to criticality. AGI criticality is a self-amplifying process where the amount of physical materials and capabilities increases exponentially with each doubling time. Note it is perfectly fine if humans continue to supply the inputs that the network of isolated AGI instances is unable to produce. (Versus others who imagine a singleton AGI on its own. Obviously the system will eventually be rate limited by available human labor if it’s limited this way, but it will see exponential growth until then.)
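A worked-arithmetic sketch of that claim, with made-up numbers (the doubling time and the human-labor ceiling are assumptions chosen purely for illustration):

```python
# Assumed numbers for illustration only.
initial_capacity = 1.0          # arbitrary units of robots/compute
doubling_time_years = 1.0       # assumed doubling time
human_labor_ceiling = 1000.0    # assumed cap set by human-supplied inputs

capacity, year = initial_capacity, 0.0
while capacity < human_labor_ceiling:
    year += doubling_time_years
    capacity = min(capacity * 2, human_labor_ceiling)
    print(f"year {year:.0f}: capacity {capacity:g}")
# Exponential growth for roughly ten doublings, then rate-limited by human labor.
```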
I think the crux here is that all that is required is for AGI to create and manufacture variants of existing technology. At no point does it need to design a chip outside of current feature sizes, and at no point does any robot it designs look like anything but a variation of robots humans have already designed.
This is also the crux with Paul. He says the AGI needs to be as good as the top 0.1 percent of human experts at the far right side of the distribution. I am saying that doesn’t matter; it is only necessary to be as good as the left 90 percent of humans, approximately. I go over how the AGI doesn’t even need to be that good, merely good enough that there is a net gain.
This means you need more modalities on existing models but not necessarily more intelligence.
It is possible because the tree of millions of distinct manufacturing tasks that humans do now has regularities: many tasks share common strategies. It is possible because each step and substep has a testable and usually immediately measurable objective. For example: overall goal: deploy a solar panel; overall measurable value: power flows when sunlight is available. Overall goal: assemble a new robot of design A5; overall measurable objective: the new machinery is completing tasks with similar Psuccess. Each of these problems is neatly divisible into subtasks, and most subtasks inherit the same favorable properties.
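Here is a minimal sketch of that decomposition; the task names, measurements, and thresholds are hypothetical stand-ins. The point is that every node in the tree has its own immediately checkable, ground-truth objective, and success composes upward.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Task:
    name: str
    check: Callable[[], bool]                 # objective, measurable success test
    subtasks: List["Task"] = field(default_factory=list)

    def succeeded(self) -> bool:
        # A task succeeds only if its own measurement and every subtask's pass.
        return self.check() and all(t.succeeded() for t in self.subtasks)

# Stand-in "measurements"; in reality these would be sensor readings.
measurements = {"power_watts": 212.0, "frame_torque_nm": 18.0, "continuity_ohms": 0.3}

deploy_panel = Task(
    name="deploy solar panel",
    check=lambda: measurements["power_watts"] > 0,
    subtasks=[
        Task("mount frame", check=lambda: 15.0 <= measurements["frame_torque_nm"] <= 25.0),
        Task("connect wiring", check=lambda: measurements["continuity_ohms"] < 1.0),
    ],
)
print(deploy_panel.succeeded())  # True: ground truth is measurable at every level
```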
I am claiming more than 99 percent of the subproblems of “build a robot, build a working computer capable of hosting more AGI” work like this.
What robust and optimal means is that little human supervision is needed: the robots can succeed again and again, and we will have high confidence they are doing a good job because it’s so easy to measure the ground truth in ways that can’t be faked. I didn’t mean the global optimum; I know that is an NP-complete problem.
I was then talking about how the problems the expert humans “solve” are nasty, and it’s unlikely humans are even solving many of them at the numerical success levels humans reach in manufacturing, mining, and logistics, which show extremely good policy convergence. Even the most difficult thing humans do, manufacturing silicon ICs, eventually converges on yields above 90 percent.
How often do lawyers unjustly lose, economists make erroneous predictions, government officials make a bad call, psychologists fail and the patient has a bad outcome, or social scientists use a theory that fails to replicate years later?
Early AGI can fail here in many ways, and the delay until feedback slows down innovation. How many times do you need to wait for a jury verdict before you can replace lawyers with AI? For AI oncologists, how long does it take to get a patient outcome of long-term survival? You’re not innovating fast when you wait weeks to months and the problem is high stakes like this. Robots deploying solar panels are low stakes, with a lot more freedom to innovate.
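Back-of-the-envelope arithmetic on that latency point; the delay figures are rough assumptions of mine, except the one-trial-per-6-months cadence taken from earlier in the thread.

```python
# Assumed feedback delays in days; only the jury figure comes from the discussion above.
feedback_delay_days = {
    "solar panel deployment": 0.2,        # assumed: power output measurable within hours
    "chip submodule bench test": 1,       # assumed
    "jury trial": 180,                    # one case every 6 months, per the discussion
    "oncology long-term survival": 1825,  # assumed ~5-year endpoint
}

for task, delay in feedback_delay_days.items():
    print(f"{task}: ~{365 / delay:.0f} feedback cycles per year")
```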