Your response seems rather defensive. Perhaps it would help to know that I just recently started thinking that some seemingly innocuous advances in FAI research might turn out to be dangerous, and that Eliezer’s secretive approach might be a good idea after all. So I’m surprised to see SIAI seeming to turn away from that approach, and would like to know whether that is actually the case, and if so what is the reason for it. With that in mind, perhaps you could consider giving more informative answers to my questions?
Sorry for my brevity, it’s just that I don’t have more substantive answers right now. These are discussions that need to be ongoing. I’m not aware of any strategy change.
I hope not to publish problem definitions that increase existential risk. I plan to publish problem definitions that decrease existential risk.