I’ve gathered some information over the years that sometimes relates to this. I link it here: https://www.lesswrong.com/posts/KQfYieur2DFRZDamd/why-not-just-build-weak-ai-tools-for-ai-alignment-research?commentId=TDnKBaKRGb9TD6zJ3

EDIT: this may also be of interest: https://archive.org/details/eassayonthepsych006281mbp