Right. My issue is with framing the topic as “one of the most fundamental questions of Human-Robot Interaction: How to ensure safety in human-robot coexistence?”, then naming the whole thesis for Asimov’s first law, of all things. Then implying you made great strides in solving the “safety question”. Asimov wasn’t concerned with trajectory planning. Tying those together in the very title seems like grandiose equivocating to me.
From a cursory glance, I’d also object to the “solid piece of research”, but let’s not go down that particular rabbit hole.
I’m assuming the Asimov thing is a joke by an author who doesn’t think of friendly-AGI research as an actual field separate from mainstream safety engineering, or of robots as anything more than narrow-AI industrial machines whose interactions with people conclude in either an unfortunate mangling or a satisfactory non-mangling of the person.