I’ve only read the LW post, not the original (which tells you something about how concerned I am), but I’ll briefly remark that adding humans to something does not make it safe.
Indeed it doesn’t, but constraining a system by human capability makes it less powerful, and hence potentially less unsafe. Though that’s probably not what Spector wants to do.
Just because humans are involved doesn’t mean that the whole system is constrained by the human element.
Voted back up to zero because this seems true as far as it goes. The problem is that if he succeeds in building something with a useful AGI component at all, that makes it a lot more likely (at least according to how my brain models things) that something which doesn’t need a human in the loop will appear soon after: as a modification of the original system, as a new system designed by the original system, or simply as a new system inspired by the insights behind the original system.
I think so too—the comment on safety was a non sequitur, confusing human-in-the-loop in the Department of Defense sense with human-in-the-loop as a sensor/actuator for the Google AI.
But adding a billion humans as intelligent trainers is a powerful way to train an AI. Google seems to consistently look for ways to turn customer usage into value; other companies don’t seem to get that as much.