It’s hard for me to write well for an audience I don’t know well. I went through a number of iterations just trying to clarify the conceptual contours of this research direction in a single post that’s clear and coherent. I have about five follow-up posts planned, and hopefully I’ll keep going. The premise is: here’s a stack of roughly ten things we want the AI to do, and if it does those things it will be aligned. Further, this is all rooted in language use rather than biology, which seems useful because AI is not biological. Actually getting an AI to conform to those things is a nightmarish challenge, but it seems useful to have a coherent conceptual framework that defines exactly what alignment is and can explain why those ten things and not some others. My essential thesis, in other words, is that at a high level, reframing the alignment problem in Habermasian terms makes it appear tractable.
I’m trying to be helpful by guessing at the gap between what you’re saying and this particular audience’s interests and concerns. You said this is your first post, it’s a new account, and the post didn’t get much interest, so I’m trying to help you figure out what needs to be addressed in future posts or edits.
I apologize if I’m coming off as combative; I genuinely appreciate the help.