Here’s a post I made on this to one of your colleagues:
https://www.greaterwrong.com/posts/ZRrYsZ626KSEgHv8s/self-driving-car-bets/comment/Koe2D7QLRCbH8erxM
To summarize: AGI may be easier or harder than robotaxis, but that’s not precisely the relevant parameter.
What matters is how much human investment goes into solving the problem, and whether the problem is solvable with the technical methods we know exist.
In the post above and following posts, I define a transformative AI as a machine that can produce more resources of various flavors than its own costs. It need not be as comprehensive as a remote worker or top expert, but merely produce more than it costs, and things get crazy once it can perform robotic tasks to supply most of its own requirements.
You might note robotaxis do not produce more than they cost; they need a very large minimum scale to be profitable. That’s a crucial difference. Others have pointed out that to equal OAI’s total annual spending, say it’s $1 billion, they would need about 4 million ChatGPT subscribers. The service only has to be useful enough to pay $20 a month for, which is a very low bar.
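As a rough sanity check on that figure, here’s a minimal back-of-envelope sketch in Python; the $1 billion annual spend is just the illustrative number from the paragraph above, and $20/month is the subscription price assumed there, not actual financials:

```python
# Back-of-envelope check of the subscriber figure above.
# The $1B annual spend is the illustrative number from the comment;
# $20/month is the assumed subscription price.
annual_spend_usd = 1_000_000_000            # assumed annual spending (illustrative)
price_per_month = 20                        # subscription price in USD
revenue_per_subscriber_per_year = price_per_month * 12   # $240/year

subscribers_needed = annual_spend_usd / revenue_per_subscriber_per_year
print(f"Subscribers needed to cover spend: {subscribers_needed:,.0f}")
# ~4.2 million, consistent with the "4 million subscribers" figure above
```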
Each robotaxi, by contrast, only earns the delta between its operating costs and what it can charge for a ride, which is slightly less than the cost of an Uber or Lyft at first and over time has to go much lower. At small scales that delta is negative: operating costs are high, partly from all the hardware and partly simply because of scale. Running robotaxis involves fixed costs for each service/deployment center, for the infrastructure and servers, for the crew of remote customer-service operators, and so on.
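To make the scale point concrete, here’s a toy sketch of robotaxi unit economics; every number in it (fares, utilization, hardware and depot costs) is a hypothetical placeholder chosen only to show how fixed per-deployment costs swamp the per-ride margin until the fleet gets very large:

```python
# Toy robotaxi unit-economics sketch. Every number here is a made-up
# placeholder to illustrate the structure of the argument, not real data.
def annual_profit(fleet_size,
                  rides_per_car_per_day=20,               # hypothetical utilization
                  fare_per_ride=15.0,                     # priced slightly under Uber/Lyft
                  variable_cost_per_ride=10.0,            # energy, maintenance, remote-ops share
                  hardware_cost_per_car_per_year=30_000,  # sensors/compute, amortized
                  fixed_costs_per_year=20_000_000):       # depots, servers, remote-operator crew
    per_car_margin = (fare_per_ride - variable_cost_per_ride) * rides_per_car_per_day * 365
    return fleet_size * (per_car_margin - hardware_cost_per_car_per_year) - fixed_costs_per_year

for fleet in (100, 1_000, 10_000):
    print(fleet, f"{annual_profit(fleet):,.0f}")
# At 100 or 1,000 cars the fixed costs dominate and the result is deeply negative;
# only at a much larger fleet does the operation break even.
```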
As for whether the AGI problem is solvable: I think yes, for certain subtasks, but not necessarily all of them. Many jobs humans do are not good RL problems and are not solvable with current techniques. Some of those jobs are done by remote workers (your definition) or top human experts (Paul’s definition). These aren’t the parameters that matter for transformative AI.