Agency is advancing pretty fast. It's hard to tell how hard this problem is, but there is a lot of overhang: we are not seeing GPT-4 at its maximum potential.
Agreed, human-in-the-loop systems are very valuable and probably temporary. HITL systems provide valuable training data, enabling the next step. AI alone is indeed much faster and cheaper.
How will they feed us?
One difference, I suspect, could be generality over specialization in the cognitive domain. It is often assumed that specialization is better, but this might only be true for the physical domain. In the cognitive domain, general reasoning skills might be more important. E.g. from the perspective of an ASI, the specialized knowledge of a lawyer might be a small thing.
Good point. As I understood it, humans have an OOM more parameters than chimps. But chimps also have an OOM more than dogs. So not all OOMs are created equal (so I agree with your point).
I am very curious about the qualitative differences between humans and superintelligence. Are there qualitative emergent capabilities above human-level intelligence that we cannot imagine or predict at the moment?
According to DeepMind, we should aim at that little spot at the top (I added the yellow arrow). This spot is still dangerous, btw. Seems tricky to me.
Image from the DeepMind paper on extreme risks (https://arxiv.org/pdf/2305.15324.pdf).
The biggest issue, I think, is agency. In 2024, large improvements will be made to memory (a lot is happening in this regard). I agree that GPT-4 already has a lot of capability; especially with fine-tuning, it should do well on many individual tasks relevant to AI development.
But the executive function will probably still be lacking in 2024. Combining the tasks into a whole job will be challenging. Improving data is agency-intensive (less intelligence-intensive): you need to contact organizations, scrape the web, sift through the data, etc. It would also need to order the training run, get the compute for inference, pay the bills, etc. These tasks require more agency than intelligence.
However, humans can help with the planning etc., and GPT-5 will probably boost the productivity of AI developers.
Note: depending on your definition of intelligence, agency or the executive function would/should be part of intelligence.
True, it depends on the ratio of mundane to high-stakes decisions. Although there are high-stakes decisions that are also time-dependent; see the example about high-frequency trading (no human in the loop, and the algorithm makes trades worth millions).
Furthermore, your conclusion that time-independent, high-stakes decisions will be the tasks where humans provide the most value seems true to me. AI will easily be superior when there are time constraints; absent such constraints, humans will have a better chance of competing with AI. And strategic economic decisions are often not extremely time-constrained (there are at least a couple of hours or days available).
In economic situations the number of high-stakes decisions will be limited (only a few people make decisions about large sums of money and strategy). In a multinational with 100,000 employees, only very few will take high-stakes decisions, but these decisions might have a significant impact on competitiveness. Thus a multinational with a human CEO might outcompete a fully AI-run company.
In a military-strategic situation, time might give more of an advantage (I am an economist, not a military expert, so I am really guessing here). My guess would be that a drone without a human in the loop could have a significant advantage, so pressure might build to hand high-stakes decisions (human lives) over to drones.
Time should also be a factor when comparing the strength of AI alone and an AI-human team. Humans might add something in correspondence chess, but it costs them a significant amount of time. Human-AI teams are very slow compared to AI alone.
For example, in low-latency algorithmic stock trading, reaction times are below 10 ms, while human reaction time is around 250 ms. A human-AI trading team would have a minimum reaction time of 250 ms (if the human immediately agrees when the AI suggests a trade). This is far too slow and means a serious competitive disadvantage.
Now take this to a strategically aware AI compared to a human working with a strategically aware AI, and suppose the human can improve the strategic decision if given enough time. The AI alone would be at least 100x faster than the AI-human team: a serious advantage for the AI alone.
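To make the latency argument concrete, here is a minimal back-of-the-envelope sketch in Python. The 10 ms and 250 ms figures are the illustrative numbers from the trading example above; the one-minute deliberation time is a purely hypothetical placeholder for a human who genuinely reviews the decision rather than rubber-stamping it.

```python
# Back-of-the-envelope comparison of decision latency: AI alone vs. an AI-human team.
# Figures are illustrative, not measured values.

AI_LATENCY_S = 0.010          # AI produces a decision in ~10 ms
HUMAN_RUBBER_STAMP_S = 0.250  # human agrees instantly: one reaction time, ~250 ms
HUMAN_DELIBERATION_S = 60.0   # human genuinely reviews the decision (hypothetical: 1 minute)

def slowdown(human_overhead_s: float) -> float:
    """How many times slower the AI-human team is than the AI alone."""
    return (AI_LATENCY_S + human_overhead_s) / AI_LATENCY_S

print(f"Rubber-stamping human: ~{slowdown(HUMAN_RUBBER_STAMP_S):.0f}x slower")
print(f"Deliberating human:    ~{slowdown(HUMAN_DELIBERATION_S):.0f}x slower")
```

Even a human who only rubber-stamps the AI's suggestion makes the team roughly 26x slower; any genuine deliberation pushes the gap well past 100x.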
For the more mundane human-in-the-loop applications, speed and cost will probably be the deciding factors. If chess were a job, then most of the time a Magnus Carlsen-level move in a few seconds for a few cents would be sufficient. In rare cases (e.g. cutting-edge science) it might be valuable to go for the absolute best decision at a higher cost in time and money.
So my guess is that human-in-the-loop solutions will be a short phase in the coming transition. The human-in-the-loop phase will provide valuable data for the AI, but soon monetary and time costs will push processes towards an AI-alone setup instead of humans in the loop.
Even if AI-human teams are better in correspondence chess, this probably does not transfer to many real-world applications.
Thanks for the addition. Vertical and indoor farming should improve on the current fragility of the agricultural industry (and thus add to its robustness). Feeding 8 billion people will still cost a lot of resources.
Mining, however, is different: mining costs will keep increasing due to the decreasing quality of ore and ores being mined in places that are harder to reach. This effect can only be offset by technological progress for a limited time (unless we go to the stars). Vast improvements in recycling could be a solution, but that requires a lot of energy.
Solving the energy problem via fusion energy would really help a lot for the more utopian scenarios.