Addressing this objection is why I emphasized the relatively low information content that architecture and optimizers contribute to minds, compared to training data. We’ve gotten very far in instantiating human-like behaviors by training networks on human-like data. My claim is that the primacy of data in determining minds means you can get surprisingly close in mindspace, compared to what you’d expect if you thought architecture, optimizer, etc. were the most important factors.
Obviously, there are still huge gaps between the sorts of data an LLM is trained on and the implicit loss functions human brains actually minimize, so it’s somewhat surprising we’ve gotten even this far. The implication I’m pointing to is that it’s feasible to get very close to human minds along the important dimensions related to values and behaviors, even without replicating all the quirks of human mental architecture.