Fair enough. Have you already told Rohin Shah to go fuck himself?
Don’t split hairs. He’s an alignment researcher.
But he’s not a doomer like you. Aren’t you pissed at everyone who’s not a doomer?
I’m not pissed at the Indian rice farmer who doesn’t understand alignment and will be as much of a victim as me when DeepMind researchers accidentally kill me and my relatives.
I’m very not pissed at Rohin Shah, who, whatever his beliefs, is making a highly respectable attempt to solve the problem and not contributing to it.
I am appropriately angry at the DeepMind researchers who push the capabilities frontier and for some reason err in their anticipation of the consequences.
I am utterly infuriated at the people who agree with me about the consequences and decide to help push that capabilities frontier anyways, either out of greed or some “science fiction protagonist” syndrome.
Who are those latter people? Do you have any examples?
It’s subtle because few people explicitly believe that’s what they’re doing in their heads; they just agree on doomerism and then perform greed- or prestige-induced rationalizations for why what they’re doing isn’t really contributing. Take Shane Legg: he’ll admit that the chance of human extinction from AGI is somewhere “between 5-50%” and then go and found DeepMind anyway. Many people at OpenAI also fit the bill, for varying reasons.
It’s relevant to note that Legg is also doing a bunch of safety research, much of it listed here; I don’t see why it should be obvious that he’s making a less respectable attempt to solve the problem than other alignment researchers. (He’s working on the causal incentives framework, and on stuff related to avoiding wireheading.)
Also, wasn’t DeepMind an early attempt at gathering researchers together so that they could coordinate against arms races?
I’m glad, but if Hermann Göring had retired from public leadership in 1936 and then spent the rest of his life making world-peace posters, I still wouldn’t consider him a good person.
Sounds like a great rationalization for AI researchers who are intellectually concerned about their actions, but really want to make a boatload of money doing exactly what they were going to do in the first place. I don’t understand at all how that coordination was supposed to work, and it sure doesn’t seem like it did.
Without necessarily disagreeing, I’m curious exactly how far back you want to push this. The natural outcome of technological development has been clear to sufficiently penetrating thinkers since the nineteenth century. Samuel Butler saw it. George Eliot saw it. Following Butler, should “every machine of every sort [...] be destroyed by the well-wisher of his species,” and should we “at once go back to the primeval condition of the race”?
In 1951, Turing wrote that “it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers [...] At some stage therefore we should have to expect the machines to take control”.
Turing knew. He knew, and he went and founded the field of computer science anyway. What a terrible person, right?
I don’t know. At least to Shane Legg.
According to Eliezer, free will is an illusion, so Shane doesn’t really have a choice.