Update: DeepMind will work under Jeff Dean at Google’s search team.
And, predictably:
“Things like the ethics board smack of the kind of self-aggrandizement that we are so worried about,” one machine learning researcher told Re/code. “We’re a hell of a long way from needing to worry about the ethics of AI.”
...despite the fact that AI systems already fly planes, drive trains, and pilot Hellfire-carrying aerial drones.
It would be quite a reach to insist that we need to worry about the ethics of the control boards that calculate how to move elevons, or how far to open a throttle, in order to maintain a certain course or speed. Autonomous UAVs able to open fire without a human in the loop are much more worrying.
I imagine that some of the issues the ethics board might eventually have to deal with would be related to self-agentizing tools, in Karnofsky-style terminology. For example, if a future search engine receives queries whose answers depend on other simultaneous queries, it may have to solve game-theoretic problems, like optimizing traffic flows. These may someday include life-critical decisions, like whether to direct drivers to a more congested route in order to let emergency vehicles pass unimpeded.
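To make the flavor of that trade-off concrete, here is a minimal illustrative sketch in Python (the function name, cost model, and numbers are all invented for illustration, not taken from any real system). The point is that once a router weighs many drivers' small delays against an emergency vehicle's time saved, the weight assigned to an emergency minute is an ethical parameter, not a purely technical one.

```python
# Hypothetical sketch: weighing rerouted drivers' delays against an
# emergency vehicle's time saved. All names and numbers are invented.

def route_choice(n_drivers: int,
                 extra_delay_per_driver: float,
                 emergency_time_saved: float,
                 emergency_weight: float) -> str:
    """Compare weighted aggregate costs and pick a routing policy.

    extra_delay_per_driver: minutes each driver loses if ordinary
        traffic is diverted onto the more congested route.
    emergency_time_saved: minutes the emergency vehicle gains from
        the cleared corridor.
    emergency_weight: how many driver-minutes one emergency-minute
        is deemed worth, i.e. the ethically loaded parameter.
    """
    civilian_cost = n_drivers * extra_delay_per_driver
    emergency_benefit = emergency_weight * emergency_time_saved
    return "divert" if emergency_benefit > civilian_cost else "keep"

# 500 drivers each lose 2 minutes; the ambulance gains 5 minutes.
# The decision flips purely on the chosen weight:
print(route_choice(500, 2.0, 5.0, emergency_weight=300.0))  # -> divert
print(route_choice(500, 2.0, 5.0, emergency_weight=150.0))  # -> keep
```

No engineering fact determines whether an emergency minute is worth 150 or 300 driver-minutes; that is exactly the sort of question an ethics board, rather than an engineer, would have to settle.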
NYTimes also links to LessWrong.
They actually link to LessWrong in the article, namely to my post here.