If you wanna talk about the humanity(ies), well, I looked up Adam Russell, Chief Vision Officer of AISI, and he has an interesting profile:

Russell came from industry, where he was a senior scientist and principal investigator on a wide range of human performance and social science research projects and provided strategic assessments for a number of different government organizations.[2][4] He joined the Intelligence Advanced Research Projects Activity (IARPA) as a program manager,[2][4] where he developed and managed a number of high-risk, high-payoff research projects for the Office of the Director of National Intelligence.[2] Russell joined DARPA as a program manager in July 2015.[2][4] His work there focused on new experimental platforms and tools to facilitate discovery, quantification and “big validation” of fundamental measures in social science, behavioral science and human performance.[2]
Hmm, he’s done a lot of macho, human-enhancement-adjacent stuff. I wonder if there were some centaurists involved here.
I previously noted a lot of neurotech research projects in DoD funding awards. I’m making a connection between this and a joke I heard recently on a Navy SEALs podcast: “The guys often ask what they can do to deal with drones. So you start showing them how to work the jammer devices, or net guns, and their eyes glaze over; it’s not what they wanted, they’re disappointed. They’re thinking like, ‘no… how can I deal with it. Myself.’ ”
So alignment-by-merger is kinda obviously not going to work: you’d have to reverse-engineer two vats of inscrutable matrices instead of one, and the fleshy pink one wasn’t designed to be read from; it can only be read on a neuron-by-neuron level after being plastinated, which also kills it. AGI alignment is not something a Neuralink can solve. Even so, it’s conceivable that this is an especially popular line of thought among military/sports types.
Otherwise, this kinda lines up with my confessions on Manhattan projects for AGI. You arguably need an anthropologist to make decisions about what ‘aligned’ means. I don’t know if you really need one (a philosophically inclined decision theorist, likely to be involved already, would be enough for me), but I wouldn’t be surprised to see an anthropologist appointed in the most serious projects.