Basically, December of 2018 seems like a bad time to “go abstract” in favor of transhumanism, when the implementation details of transhumanism are finally being seriously discussed.
Wouldn’t this then be the best time to go abstract, since it would necessarily distinguish bad things done in the name of transhumanism from the actual values of the philosophy?
I can see two senses for what you might be saying…
I agree with one of them (see the end of my response), but I suspect you intend the other:
First, it seems clear to me that the value of a philosophy early on is a speculative thing, highly abstract, oriented towards the future, and latent in the literal expected value of the actions and results the philosophy suggests and envisions.
However, eventually, the actual results of actual people whose hands were moved by brains that contain the philosophy can be valued directly.
Basically, the value of the results of a plan or philosophy screens off the early expected value of the plan or philosophy… not entirely (because it might have been “the right play, given the visible cards”, with the deal revealing low-probability outcomes). However, bad results provide at least some Bayesian evidence of bad ideas, without bringing more of a model into play.
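To make that Bayesian point concrete, here is a minimal sketch in Python with entirely made-up priors and likelihoods (none of these numbers come from anywhere; they just show the direction and rough size of the update):

```python
# A minimal numerical sketch of the "screening off, but not entirely" point.
# All numbers are illustrative assumptions; nothing here is a claim about
# transhumanism (or medicine) specifically.

prior_good = 0.7        # prior credence that the philosophy is sound
p_bad_if_good = 0.3     # sound plans still fail sometimes ("right play, bad deal")
p_bad_if_bad = 0.8      # unsound plans fail more often

# Bayes' rule: P(sound | observed bad outcome)
p_bad = p_bad_if_good * prior_good + p_bad_if_bad * (1 - prior_good)
posterior_good = (p_bad_if_good * prior_good) / p_bad

print(f"P(sound | bad outcome) = {posterior_good:.2f}")  # ~0.47, down from 0.70
```

Under these assumed numbers, one bad result doesn’t settle the question (the “right play, bad deal” escape hatch is exactly the `p_bad_if_good` term), but the posterior still drops. That’s all “some Bayesian evidence” means here.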
So when you say that “the actual values of transhumanism” might be distinguished from less abstract “things done in the name of transhumanism”, that feels to me like it could be a sort of category error related to expected value? If the abstraction doesn’t address and prevent the highly plausible failure modes of those who attempt to implement it, then the abstraction was bad.
(Worth pointing out: The LW/OB subculture has plenty to say here, though mostly via Hanson, who has been pointing out for over a decade that much of medicine is actively harmful, existing as a costly signal of fitness as an alliance partner, aimed at non-perspicacious third parties through ostensible proofs of “caring” that have low actual utility with respect to desirable health outcomes. Like… it is arguably PART OF OUR CULTURE that “standard non-efficacious bullshit medicine” isn’t “real transhumanism”. However, that part of our culture maybe deserves to be pushed forward a bit more right now?)
A second argument that could be unpacked from your statement, and which I would agree with, is that well-formulated abstractions might contain within them a lot of valuable latent potential, and in the press of action it could be useful to refer back to these abstractions as a sort of True North that might otherwise fall from the mind and leave one’s hands doing confused things.
When the fog of war descends, and a given plan seemed good before the fog descended, and no new evidence has arisen to the contrary, and the fog itself was expected, then sticking to the plan (however abstract or philosophical it may be) has much to commend it :-)
If this latter thing is all you meant, then… cool? :-)