1. The problem with theories in the vein of AIXI is that they assume exploration is simple (as it is in RL), but exploration is very expensive IRL.
So if you want to think within that framework, then AGI is as far away as it takes to build a robust simulation of the world in which we want it to operate (i.e. very far away); see the toy cost sketch after this comment.
2. In the world of mortals, I would say AGI is basically already here, but it’s not obvious because its impact is not that great.
We have ML-based systems that could in theory do almost any job; the real problem is that they are much more expensive than humans to “get right”, and in some cases (e.g. self-driving) there are regulatory hurdles to clear.
The main problem with a physical human-like platform running an AGI is not that designing the algorithms for it to perform useful tasks is hard; the problem is that building a human-like platform is impossible with current technology, and the closest alternatives we have are still more expensive to build and maintain than simply hiring a human.
Hence companies are buying checkout machines to replace employees rather than buying checkout robots.
3. If you’re referring to “superintelligence”-style AGI, i.e. something much more intelligent than a human, I’d argue we can’t tell how far away this is, or whether it can even exist (i.e. I think it’s non-obvious that the current bottleneck is intelligence rather than physical limitations; see 1, plus corrupt incentive structures, a.k.a. why smart humans are still not always used to their full potential).
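To make the cost asymmetry in point 1 concrete, here is a minimal Python sketch. Everything in it (the toy environment, the per-trial cost, the trial count) is an assumption made up for illustration, not anything from the discussion itself: the point is only that an exploration schedule that is routine in simulation becomes an enormous bill when every step is a physical experiment.

```python
# Toy illustration of point 1 (all names and numbers are made-up assumptions):
# in a simulator an exploratory step is a near-free function call, while in
# the real world every trial consumes time, hardware wear, and money.
import random

N_TRIALS = 1_000_000          # routine for a simulated RL run
COST_PER_REAL_TRIAL = 50.0    # assumed dollars per physical experiment

def step(action: int) -> float:
    """Stand-in environment: noisy reward, slightly better for higher actions."""
    return random.gauss(action * 0.1, 1.0)

# Naive uniform exploration, as an RL algorithm might happily do in simulation.
total_reward = 0.0
for _ in range(N_TRIALS):
    total_reward += step(random.randrange(10))

# In simulation the only bill is compute; in the real world the same schedule
# would cost N_TRIALS * COST_PER_REAL_TRIAL before the agent has learned anything.
print(f"average reward after exploring: {total_reward / N_TRIALS:.3f}")
print("simulated exploration bill: ~$0 (just compute)")
print(f"real-world exploration bill: ${N_TRIALS * COST_PER_REAL_TRIAL:,.0f}")
```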
1. The problem with theories in the vein of AIXI is that they assume exploration is simple (as it is in RL), but exploration is very expensive IRL.
I’m not sure what you mean by this. Does RL mean reinforcement learning, and IRL “in real life”? AIXI would be very efficient at using the minimum possible exploration. (And a lot of exploration can be done cheaply. There is a lot of data online that can be downloaded for the cost of bandwidth, and sending a network packet to see what you get back is exploration.)
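As one concrete reading of the “sending a network packet is exploration” point, here is a minimal sketch using only Python’s standard urllib; the URL and byte limit are arbitrary placeholders of mine, not anything specified above.

```python
# Minimal sketch (my own illustration): each HTTP request is an action whose
# response is an observation, and the only cost is bandwidth and latency.
from urllib import request

def explore_url(url: str) -> tuple[int, bytes]:
    """One 'exploration step': act (send a request), observe (read the reply)."""
    with request.urlopen(url, timeout=10) as resp:
        return resp.status, resp.read(2048)  # first 2 KB of the observation

status, body = explore_url("https://example.com")  # placeholder URL
print(f"observed status {status} and {len(body)} bytes for the cost of one request")
```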