The source is Bostrom’s 2004 paper A history of transhumanist thought, page 4. I’ll paraphrase the differences he lists:

Transhumanism uses tech to change bodies and minds; Nietzsche uses the old pathways.
Yeah, that’s his mistake. He points at the right goal, but can’t say how to get there. As I said, no real work.
Transhumanism wants to boost everyone; Nietzsche, only a select few.
I think that’s unfair to Freddy. His Zarathustra puppet goes around telling everyone to do it, but they aren’t interested. Obviously he was envisioning individual progress as opposed to inventing tech then distributing it to Muggles, so he thinks that if few people want to put in the effort then few people will get boosted.
Transhumanism likes individual liberties.
I don’t understand what Bostrom means by that. AFAICT, Fred is huge on individual liberties.
Transhumanism comes from the Enlightenment.
I fail to see the relevance.
What I got from reading Nietzsche (before I got any exposure to transhumanism) was an extremely pretty way of saying “Striving to improve yourself a lot is awesome”. No argument why, no proposed methods, some very sucky assumptions about what it’d be like. Just a cheer, and an invitation for people who share this goal to band together and work on it. Which is what transhumanists have done.
Nietzsche seems to always see the project of self-improvement in opposition to the project of building a functional society out of multiple people who don’t kill each other, and the second one always seemed more important to me.
It’s hard for me to understand what he’s saying because he doesn’t engage (much? at all?) with Actually True Morality, that is, the utilitarian/“group is just a sum of individuals” paradigm. The question of whether it’s OK for the strong to bully the weak almost doesn’t seem to interest him.
One man is not a whole lot better than one ape, but a group of men is infinitely superior to a group of apes.
ETA: I often like to think of FAI as not the ultimate transhuman, but the ultimate institution/legal system/moral code.
You might say that Nietzsche takes opposition to the Repugnant Conclusion to an extreme: his philosophy values humanity by the $L^\infty$ norm rather than the $L^1$ norm.

(Assuming that individual value is nonnegative.)
That’s an emendation, not the original; in most of his mid-to-late works, he really does mean that the absolute magnitude of a character, without reference to its direction, is of value.

But certainly the people who believe in the $L^1$ norm don’t take the absolute value...

What? The $L^1$ norm is the integral of the absolute value of the function.
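For reference, writing a population as a welfare vector $x = (x_1, \dots, x_n)$, the three aggregations in play are

$$\|x\|_1 = \sum_i |x_i|, \qquad \|x\|_\infty = \max_i |x_i|, \qquad U(x) = \sum_i x_i,$$

where $U$ is the utilitarian total. $U$ drops the absolute value, so it coincides with $\|x\|_1$ only when every $x_i \ge 0$; it is not a norm at all.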
In this thread: people using mathematics where it doesn’t belong.

I should say:

No one believes in the $L^1$ norm. There is only Nietzsche, who believes in $L^\infty$, and utilitarians, who believe in the integral.

I suppose. It’s a more efficient and fun form of communication than writing it out in English, but it loses big on the number of people who can understand it.

Yes, that’s what I should have written.
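To pin down what the norms actually compute, here is a minimal sketch (the welfare numbers are made up for illustration) of the three aggregations being contrasted:

```python
# Made-up welfare levels for a tiny population; a negative entry
# stands for a life that is going badly.
welfare = [3.0, -1.0, 5.0, 0.5]

l1_norm = sum(abs(w) for w in welfare)  # L^1 norm: total magnitude, sign ignored
total = sum(welfare)                    # plain integral: the utilitarian total
l_inf = max(abs(w) for w in welfare)    # L^infinity norm: the single largest magnitude

print(l1_norm, total, l_inf)  # → 9.5 7.5 5.0
```

Note that `l1_norm` and `total` agree only when every entry is nonnegative, which is exactly the gap the analogy keeps tripping over.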
I know how it looked when you jumped in (presumably from the Recent Comments page), but both of us did know the proper math; it’s the analogy that we were ironing out.
I read from the start of the $L^p$ talk to now, and I can’t think why both of you bothered to speak in that language. The major point of contention occurs in a lacuna in the $L^p$ semantic space, so continuing in that vein is… hmmm.
It’s like arguing whether the moon is pale-green or pale-blue, and deciding that since plain English just doesn’t cut it, why not discuss the issue in Japanese?
deciding that since plain English just doesn’t cut it, why not discuss the issue in Japanese?
Why not, if you know Japanese, and it has more suitable means of expressing the topic? (I see your point, but don’t think the analogy stands as stated.)

If we extend the analogy to the above conversation, it’s an argument between non-Japanese otaku.
No offense to Fred, but he’s a bitter loner. Idealistic nerd wants to make the world awesome, runs out and tells everyone, everyone laughs at him, idealistic nerd gives up in disgust and walks away muttering “I’ll show them! I’ll show them all!”.
Also, he thinks this project is really, really important, worth declaring war against the rest of the world and killing whoever stands in the way of becoming cooler. (As you say, whether he thinks we can also kill people who don’t actively oppose it is unclear.) This is a dangerous idea (see the zillion glorious revolutions that executed critics and plunged happily into dictatorship), though it is less dangerous when your movement is made of complete individualists. As it happens, becoming superhuman will not require offing any Luddites (though it does require offending them and coercing them by legal means), but I can’t confidently say it wouldn’t be worth it if it were the only way—even after correcting for historical failures.
By the same token, group rationality is in fact the way to go, but individual rationality does require telling society to take a hike every now and then.
FAI as not the ultimate transhuman, but the ultimate institution/legal system/moral code
It certainly shouldn’t be a transhuman. Eliezer’s preferred metaphor is more like “the ultimate laws of physics”, which says quite a bit about how individualistic you and he are.
Nietzsche can’t know what the Superman will look like—nobody can. But he provides a great deal of assistance: he is extremely insightful about what people are doing today (well, late 1800s, but still applicable), how that tricks us into behaving and believing in certain ways, and what that means.
But he wrote these insights as poetry. If you wanted an argument spelled out logically or a methodology of scientific inquiry, you picked the wrong philosopher.
I didn’t see much transhumanism in Nietzsche; I just like reading him because he has a lot of interesting ideas, written from a quite distant intellectual context.