I haven’t read the Shard Theory work in comprehensive detail. But, fwiw I’ve read at least a fair amount of your arguments here and not seen anything that bridged the gap between “motivations are made of shards that are contextually activated” and “we don’t need to worry about Goodhart and misgeneralization of human values at extreme levels of optimization.”
I’ve heard you make this basic argument several times, and my sense is you’re pretty frustrated that people still don’t seem to have “heard” it properly, or something. I currently feel like I have heard it, and don’t find it compelling.
I did feel compelled by your argument that we should look to humans as an example of how “human values” got aligned. And it seems at least plausible that we are approaching a regime where the concrete nitty-gritty of prosaic ML can inform our overall alignment models in a way that makes the thought experiments of 2010 outdated.
But, like, a) I don’t actually think most humans are automatically aligned if naively scaled up (though it does seem safer than naive AI scaling), b) while human-value-formation might be simpler than the Yudkowskian model predicts, the gist of “look to humans” still doesn’t seem to get us to a plan that is simple in absolute terms, and c) there still seem to be concrete reasons to expect training superhuman models to be meaningfully different from training current LLMs, which aren’t at a stage where I’d expect them to exhibit any of the properties I’d be worried about.
(Also, in your shard theory post, you skip over the example of ‘embarrassment’ because you can’t explain it yet, and switch to sugar, and I’m like ‘but, the embarrassment one was much more cruxy and important!’)
I don’t expect to get to agreement in the comments here today, but it feels like the current way you’re arguing this point just isn’t landing or having the effect you want, and… I dunno what would resolve things for you or anyone else, but I think it’d be better if you tried some different approaches to arguing this point.
If you feel like you’ve explained the details of those things better in the second half of one of your posts, I will try giving it a more thorough read. (It’s been a while since I read your Diamond Maximizer post, which I don’t remember in detail but don’t remember finding compelling at the time.)