Safety engineering for artificial general intelligence says:

Similarly, we argue that certain types of artificial intelligence research fall under the category of
dangerous technologies, and should be restricted. Narrow AI research, for example in the automation of
human behavior in a specific domain such as mail sorting or spellchecking, is certainly ethical, and does
not present an existential risk to humanity. On the other hand, research into artificial general intelligence,
without careful safety design in advance, is unethical.
Uh huh. So: who is proposed to be put in charge of regulating this field? The paper's answer is that “AI research review boards” will be there to quash the research. Imposing regulatory barriers on researchers seems like a good way to ensure that others get to the technology first. Since that outcome could be bad, has this recommendation been properly thought through? The burdens of regulation impose a cost, one that could easily lead to a worse overall outcome. The regulatory body would also acquire a lot of power; who ensures that it is trustworthy? In short, is regulation really justified or needed?