This seems like a pretty cool perspective, especially since it might make analysis a little simpler compared to a paradigm where you need to know specifically what to look out for. Are there any toy mathematical models, or simulated worlds/stories, etc., to make this more concrete? I briefly looked at some of the slides you shared but it doesn’t seem to be there (though maybe I missed something, since I didn’t watch the entire video(s)).
Honestly, I’m not sure exactly what this would look like, since I don’t fully understand much here beyond two things: the notion that concentration of intelligence/cognition can lead to larger-magnitude outcomes (which we probably already knew), and the idea that maybe we could measure this or use it to reason in some way (which maybe we aren’t doing so much). Maybe we could have some sort of simulated game where different agents control civilizations (like Civ 5), and among the things they can invest their resources in is some measure of “cognition” (e.g. it lets them plan further ahead, gives them the ability to take more variables into consideration when making decisions, or lets them see more of the map). With that said, it’s not clear to me what would come out of this simulation other than maybe a notion of the relative value (in different contexts) of cognitive vs. physical investments (e.g. having smarter strategists vs. building a better castle). No clear question or hypothesis comes to mind right now.
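To make the cognitive-vs-physical-investment comparison slightly less hand-wavy, here is a deliberately minimal sketch (everything here is a made-up toy model, not from the talk): “cognition” is modeled as the number of candidate strategies an agent can evaluate before picking the best one, while “physical” investment just adds a flat bonus to the payoff.

```python
import random

def play_round(cognition, physical, rng):
    # cognition = number of candidate strategies the agent can evaluate;
    # the agent picks the best one it sees. physical investment adds a
    # flat bonus (the coefficient 0.1 is arbitrary).
    options = [rng.random() for _ in range(max(1, cognition))]
    return max(options) + 0.1 * physical

def expected_payoff(cognition, physical, trials=10_000, seed=0):
    rng = random.Random(seed)
    return sum(play_round(cognition, physical, rng) for _ in range(trials)) / trials

# With a fixed budget, compare pure-cognition vs pure-physical investment.
# (The expected best of k uniform draws is k/(k+1), so cognition has
# diminishing returns in this toy model; the physical bonus is linear.)
budget = 5
smart = expected_payoff(cognition=budget, physical=0)
strong = expected_payoff(cognition=1, physical=budget)
```

Even a toy like this at least pins down one crisp question: under what payoff structures does the “evaluate more options” investment dominate the “flat bonus” investment, and where does it saturate?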
It looks like, from some other comments, that the literature on agent foundations might be relevant, but I’m not familiar with it. If I get time I might look into it in the future. Are these sorts of frameworks usable for actual decision-making right now (and if so, how can we tell?), or are they still exploratory?
Generally, I’m just curious whether there’s a way to make this more concrete, i.e. to understand it better.
That simulation sounds cool. The talk certainly doesn’t contain any details and I don’t have a mathematical model to share at this point. One way to make this more concrete is to think through Maxwell’s demon as an LLM, for example in the context of Feynman’s lectures on computation. The literature on thermodynamics of computation (various experts, like Adam Shai and Paul Riechers, are around here and know more than me) implicitly or explicitly touches on relevant issues.
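One crude way to see the demon as a bounded agent is a toy sorting simulation (everything below is my own simplification, not from Feynman or the thermodynamics-of-computation literature): the demon can only afford a limited number of measurements, each measurement is one bit of memory, and erasing that memory costs at least k_B·T·ln(2) of work per bit (Landauer’s principle).

```python
import math
import random

K_B = 1.0  # Boltzmann constant in natural units (T = 1)

def demon_run(n_particles=1000, bits=200, seed=0):
    """Toy Maxwell's demon: sorts particles into fast/slow chambers,
    but can only afford `bits` measurements; each stored bit costs at
    least K_B * ln(2) of work to erase (Landauer bound)."""
    rng = random.Random(seed)
    speeds = [rng.expovariate(1.0) for _ in range(n_particles)]
    threshold = sorted(speeds)[n_particles // 2]  # empirical median speed
    fast, slow = [], []
    for i, s in enumerate(speeds):
        if i < bits:
            # measured: route by speed
            (fast if s > threshold else slow).append(s)
        else:
            # unmeasured: route at random, no sorting achieved
            (fast if rng.random() < 0.5 else slow).append(s)
    # crude "temperature" proxy: mean speed per chamber
    t_fast = sum(fast) / len(fast)
    t_slow = sum(slow) / len(slow)
    erasure_cost = bits * K_B * math.log(2)  # minimum work to reset memory
    return t_fast - t_slow, erasure_cost
```

The point of the toy is just that the temperature gap the demon can create scales with the bits it spends, and those bits carry an irreducible thermodynamic cost, which is the flavor of trade-off the thermodynamics-of-computation literature makes precise.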