I’m disappointed that my group’s proposal to work on AI containment wasn’t funded, and no other AI containment work was funded, either. Still, some of the things that were funded do look promising. I wrote a bit about what we proposed and the experience of the process here.
When considering possible failure modes for this proposal, one possibility I didn’t consider was that the original research portions would look too much like summaries of existing work.
I am not an expert (not even an amateur) in the area, but I wonder whether AI containment work would be futile without corrigibility figured out, and superfluous once it is. What is the window of AI intelligence where the AI is not yet super-human (at which point it would be too late to contain) but is already too smart to be contained by standard means?
I feel for you. I agree with salvatier’s point on the linked page. Why don’t you try talking to FHI directly? They should be able to get some funding your way.
Oh man, that sucks. :(