I generally agree with your commentary about the dire lack of research in this area right now, and I want to be hopeful about the solvability of alignment.
I want to propose that AI alignment is not only a problem for ML professionals. It is a problem for society as a whole, and we need to get as many people involved as possible, soon, from lawyers and law-makers to teachers and cooks. This is so for several reasons:
They may have wonderful ideas that people with an ML background might not. (These may translate into technical solutions, or into societal solutions.)
It affects everyone, so everyone should be invited to address the problem.
We need millions of people working on this problem right now.
I want to show what we are doing at my company: https://conjointly.com/blog/ai-alignment-research-grant/ . The aim is to make social science PhDs aware of the alignment problem and get them involved in whatever way they can. Is this the right way to do it? I do not know.
I, for one, am not an LLM specialist, so I intend to make noise everywhere I can with the resources I have. This weekend I will be writing to every member of the Australian parliament. Next weekend, I will be writing to every university in the country.
It looks like you haven’t yet replied to the comments on your post. The thing you are proposing is not obviously good, and in fact might be quite bad. I think you probably should not be doing this outreach just yet, with your current plan and current level of understanding. I dislike telling people what to do, but I don’t want you to make things worse. Maybe start by engaging with the comments on your post.