I like this direction of research, and it ties in with my own work on progressively impairing models by injecting increasing amounts of noise into the activations or parameters.
I think these impairment techniques present a strong argument that even quite powerful AI can be safely studied under controlled lab conditions.
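To give a concrete sense of what I mean by noise-based impairment, here is a minimal sketch of the parameter-noise variant, assuming a PyTorch model; `model` and `evaluate` are placeholders, and this is an illustration of the general idea rather than the exact setup I use:

```python
# Minimal sketch of parameter-noise impairment.
# Assumptions: `model` is a torch.nn.Module and `evaluate(model)` is a
# user-supplied capability benchmark; both are placeholders here.
import copy
import torch

def impair_with_noise(model, sigma):
    """Return a copy of `model` with i.i.d. Gaussian noise of std `sigma`
    added to every parameter."""
    noisy = copy.deepcopy(model)
    with torch.no_grad():
        for p in noisy.parameters():
            p.add_(torch.randn_like(p) * sigma)
    return noisy

# Sweep over increasing noise levels to trace a capability-vs-impairment curve.
for sigma in [0.0, 0.01, 0.03, 0.1, 0.3]:
    noisy_model = impair_with_noise(model, sigma)
    score = evaluate(noisy_model)  # placeholder evaluation harness
    print(f"sigma={sigma:.2f}  score={score:.3f}")
```

The same loop works for activation noise by registering forward hooks instead of perturbing the weights directly.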
Thanks for the feedback. It would be great to learn more about your agenda and see if there are any areas where we may be able to help each other.