If you read my solution and think it wouldn’t work, let me know.
I don’t know about “wouldn’t work”, but on an initial skim it doesn’t seem to me to be a particularly promising approach for prosaic AI safety. I certainly don’t agree with the first two bullets in the above list.
Apologies, but I’ve only skimmed your post, so it’s possible I’m missing some important details.
I left a comment on your post explaining some aspects of why I think this.
Thanks, will reply there!