Since I’m fine with saying things that are wildly inefficient, almost any input/output channel that rewards modeling the real world (rather than, e.g., just playing the abstract game of chess) is sufficient. A present-day example might be self-driving car planning algorithms (though I don’t think any major companies actually use end-to-end NN planning).
Right, but what inputs and outputs would be sufficient to reward modeling of the real world? I think answering that might take some exploration and experimentation, and my 60% forecast reflects the odds of such inquiries succeeding by 2043.
Even with infinite compute, I think it’s quite difficult to build something that generalizes well without overfitting.
what inputs and outputs would be sufficient to reward modeling of the real world?
This is an interesting question, but I think it’s not actually relevant. Like, it’s really interesting to think about a thermostat—something whose only inputs are a thermometer and a clock, and whose only output is a switch hooked to a heater. Given arbitrarily large computing power and arbitrary amounts of on-distribution training data, will RL ever learn all about the outside world just from temperature patterns? Will it ever learn to deliberately affect the humans around it by turning the heater on and off? Or is it stuck being a dumb thermostat, a local optimum enforced not by the limits of computation but by the structure of the problem it faces?
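To make the thought experiment concrete, here’s a minimal sketch of the thermostat as an RL environment. The Gymnasium interface is just one assumed framing, and the dynamics and every parameter are invented for illustration:

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class ThermostatEnv(gym.Env):
    """The agent's entire interface to the world: a thermometer reading,
    a clock, and a binary heater switch. Everything else (weather,
    occupancy, the humans nearby) is hidden state reachable only
    through this narrow channel."""

    def __init__(self, target_temp: float = 20.0):
        # Observation: [temperature in deg C, time of day in hours]
        self.observation_space = spaces.Box(
            low=np.array([-40.0, 0.0], dtype=np.float32),
            high=np.array([60.0, 24.0], dtype=np.float32),
        )
        # Action: 0 = heater off, 1 = heater on
        self.action_space = spaces.Discrete(2)
        self.target_temp = target_temp

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.temp = 15.0 + self.np_random.normal(0.0, 2.0)
        self.hour = 0.0
        return self._obs(), {}

    def step(self, action):
        # Toy dynamics (invented): heater input plus Newtonian cooling
        # toward an outdoor temperature that varies over the day.
        outdoor = 10.0 + 5.0 * np.sin(2.0 * np.pi * self.hour / 24.0)
        self.temp += 0.5 * float(action) - 0.1 * (self.temp - outdoor)
        self.hour = (self.hour + 0.25) % 24.0
        # Pure temperature control: nothing in the objective mentions
        # the outside world, let alone the humans in the room.
        reward = -abs(self.temp - self.target_temp)
        return self._obs(), reward, False, False, {}

    def _obs(self):
        return np.array([self.temp, self.hour], dtype=np.float32)
```

The question above, restated: can any amount of compute and on-distribution data pushed through this two-floats-in, one-bit-out channel ever yield a model of the world behind the temperature patterns, or does the channel itself cap what’s learnable?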
But people are just going to build AIs attached to video cameras, or screens read by humans, or robot cars, or the internet, which provide orders of magnitude more information flow than needed, so it’s not super important where the precise boundary is.
Right, I’m not interested in minimum sufficiency. I’m just interested in the straightforward question of what data pipes would we even plug into the algorithm that would result in AGI. Sounds like you think a bunch of cameras and computers would work? To me, it feels like an empirical problem that will take years of research.