It’s relevant to note that Legg is also doing a bunch of safety research, much of it listed here. I don’t see why it should be obvious that he’s making a less respectable attempt to solve the problem than other alignment researchers. (He’s working on the causal incentives framework and on stuff related to avoiding wireheading.)
I’m glad, but if Hermann Göring had retired from public leadership in 1936 and then spent the rest of his life making world peace posters, I still wouldn’t consider him a good person.
Also, wasn’t DeepMind an early attempt at gathering researchers in one place so they could coordinate against arms races?
Sounds like a great rationalization for AI researchers who are intellectually concerned about their actions, but really want to make a boatload of money doing exactly what they were going to do in the first place. I don’t understand at all how that would work, and it sure doesn’t seem like it did.
Without necessarily disagreeing, I’m curious exactly how far back you want to push this. The natural outcome of technological development has been clear to sufficiently penetrating thinkers since the nineteenth century. Samuel Butler saw it. George Eliot saw it. Following Butler, should “every machine of every sort [...] be destroyed by the well-wisher of his species,” and should we “at once go back to the primeval condition of the race”?
In 1951, Turing wrote that “it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers [...] At some stage therefore we should have to expect the machines to take control”.
Turing knew. He knew, and he went and founded the field of computer science anyway. What a terrible person, right?
I don’t know. At least to Shane Legg.
According to Eliezer, free will is an illusion, so Shane doesn’t really have a choice.