I was impressed by this post. I don’t have the mathematical chops to evaluate it as math—probably it’s fairly trivial—but I think it’s rare for math to tell us something so interesting and important about the world, as this seems to do. See this comment where I summarize my takeaways; is it not quite amazing that these conclusions about artificial neural nets are provable (or provable-given-plausible-conditions) rather than just conjectures-which-seem-to-be-borne-out-by-ANN-behavior-so-far? (E.g. conclusions like “Neural nets trained on very complex open-ended real-world tasks/environments will build, remember, and use internal models of their environments… for something which resembles expected utility maximization!”) Anyhow, I guess I shouldn’t focus on the provability, because even that’s not super important. What matters is that this seems to be a fairly rigorous argument for a conclusion which many people doubt and which is pretty relevant to this whole AGI safety thing.
It’s possible that I’m making mountains out of molehills here, so I’d be interested to hear pushback. But as it stands I feel like the ideas in this post deserve to be turned into a paper and more widely publicized.
‘this comment where I summarize my takeaways’ appears to link to a high-lumen lightbulb on Amazon. I’d be interested in the actual comment! Is it this?
lol oops thank you!
Haha I was 99% sure, but I couldn’t tell if it was some elaborate troll or a joke I didn’t get (‘very bright idea’...?)