I think this is an important discussion to have, but I suspect this post might not convince people who don’t already share similar beliefs.
1. I think the title is going to throw people off.
I think what you’re actually saying is “stop the current strain of research focused on improving and understanding contemporary systems, which has become synonymous with the term AI safety”, but many readers might interpret this as if you’re saying “stop research that is aimed at reducing existential risks from AI”. It might be best to reword it as “stopping prosaic AI safety research”.
In fairness, the first, narrower definition of AI Safety does describe a majority of work under the banner of AI Safety. It seems to be where most of the funding is going, it describes the work done at industrial labs, and it is what educational resources (like the AI Safety Fundamentals course) focus on.
2. I’ve had a few informal discussions with researchers about similar ideas (though not necessarily arguing for stopping AI safety research entirely). My experience is that people either agree immediately or don’t really appreciate the significance of the concern that AI safety research is largely on the wrong track. Convincing people in the second category seems to be rather difficult.
To summarize what I’m trying to convey: I think this is a crucial discussion to have, and it would be beneficial to the community to write this up into a longer post if you have the time.
Convincing people in the second category seems to be rather difficult.
I expect that it will prove much easier to convince people before they invest thousands of hours in preparing for and accumulating work experience in AI safety. As things stand now, most young people contemplating a career in AI aren’t aware that many observers believe there is no AI-safety research program whose expected helpfulness exceeds its expected harmfulness (or, more precisely, that if there is one, we cannot pick it out from the crowd).