One suggestion I’d make is to minimise your reliance on philosophy (you can’t eliminate it entirely). One example of problematic usage of philosophical concepts is definitely “consciousness”.
http://citizensearth.wordpress.com/2014/08/23/is-placing-consciousness-at-the-heart-of-futurist-ethics-a-terrible-mistake-are-there-alternatives/
I’d much prefer we replace it with far more definable goals like “preservation of homo sapiens / genetic species” and the like.
In terms of inside-outside, I think you may have a point, but it’s important to consider the options available to us. Permanently preventing an intelligence explosion, if an explosion is possible, might be extremely difficult. So the level of safety would have to be considered relative to other developments.
I’d much prefer we replace it with far more definable goals like “preservation of homo sapiens / genetic species” and the like.
Then how about going and defining it? How many genes do you have to exchange via gene therapy before someone stops being part of the genetic species homo sapiens?
I didn’t say I’D define it lol. Merely that it seems quite reasonable to say it’s more definable. I’m not sure I’m capable of formalising it in code at anywhere near my current level of knowledge. However, it does occur to me that we categorise species in biology quite often, and my current inclination is to go with a similar definition (a rough sketch of the idea is below). Genetic classification of species is a relatively well-explored scientific topic, and while I’m sure there are methodological disagreements, it’s NOTHING compared to philosophy. So it’s a very feasible improvement.
EDIT> Philosophically speaking, I think I might take a crack at defining it at some point, but not yet.
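To give a flavour of what I mean (emphatically not a real formalisation): a toy sketch in Python, where the distance measure and the 0.002 cutoff are invented purely for illustration.

```python
# Toy sketch of a threshold-based species membership test, assuming we
# can compute a pairwise genetic distance against a reference panel.
# The 0.002 cutoff is purely illustrative; choosing a defensible cutoff
# is exactly the hard part under discussion.

def genetic_distance(genome_a: str, genome_b: str) -> float:
    """Fraction of aligned sites that differ (assumes equal-length, pre-aligned sequences)."""
    assert len(genome_a) == len(genome_b)
    diffs = sum(1 for a, b in zip(genome_a, genome_b) if a != b)
    return diffs / len(genome_a)

def is_homo_sapiens(genome: str, reference_panel: list[str], cutoff: float = 0.002) -> bool:
    """Classify as homo sapiens iff mean distance to the reference panel is below the cutoff."""
    mean_dist = sum(genetic_distance(genome, ref) for ref in reference_panel) / len(reference_panel)
    return mean_dist < cutoff
```

The point is only that every moving part here (distance measure, reference panel, cutoff) is an empirical question rather than a philosophical one.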
Transhumanism is a thing. With increasing technology we do have the ability to exchange a bunch of genes. Our descendants in 10,000 years might share less DNA with us than Neanderthals do.
In general transhumanist thought, genetic change isn’t something worth fighting. If we exchange a bunch of genes and raise our IQs to 200 while restoring native Vitamin C production, that’s a good thing. We might move away from homo sapiens, but there’s no reason that an FAI has to ensure we stay at a certain suboptimal DNA composition.
Our descendants in 10,000 years might share less DNA with us than Neanderthals do.
Well, for that to occur we’ll almost certainly need an FAI long before then. So I’d suggest optimising for that first, and thinking about the fun stuff once survival is ensured.
certain suboptimal DNA composition.
Suboptimal relative to what goal? There’s no such thing as an “optimal” DNA composition that I’m aware of. Genetic survival is totally contextual.
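To make “contextual” concrete, a toy model (the genotypes and fitness numbers are made up): the same genotype ranks as better or worse depending on the environment, so “optimal DNA” is ill-defined without one.

```python
# Toy model: fitness of a genotype depends on the environment, so no
# genotype is "optimal" outright. All values are invented for illustration.

FITNESS = {
    ("thick_fur", "arctic"):  0.9,
    ("thick_fur", "tropics"): 0.3,
    ("thin_fur",  "arctic"):  0.2,
    ("thin_fur",  "tropics"): 0.8,
}

def fitter(genotype_a: str, genotype_b: str, environment: str) -> str:
    """Return whichever genotype has the higher fitness in this environment."""
    return max(genotype_a, genotype_b, key=lambda g: FITNESS[(g, environment)])

print(fitter("thick_fur", "thin_fur", "arctic"))   # thick_fur
print(fitter("thick_fur", "thin_fur", "tropics"))  # thin_fur
```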
Well, for that to occur we’ll almost certainly need an FAI long before then.
No.
I have met multiple people face to face who today have implants that let them perceive magnetic fields.
We have the technology to make adult monkeys perceive an additional color via gene therapy.
Cloning isn’t completely trivial, but it’s possible to clone mammals today. I’m pretty confident that in the next decades we’ll solve the technological issues that make cloning harder than growing a normal human being.
At that point, adding a Vitamin C gene is trivial. For a lot of enzymes we can search for the best version in neighboring species and replace the human version with it.
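As a toy sketch of that kind of search (GULO really is the enzyme humans lost for Vitamin C synthesis, but the activity numbers and the idea of a simple lookup are invented for illustration; real work would mean sequence databases and wet-lab assays):

```python
# Toy sketch: score each species' version of an enzyme by some measured
# proxy (here, invented catalytic activity numbers) and pick the best one.

# Hypothetical activity scores for GULO, the enzyme humans lost for
# Vitamin C synthesis. Values are invented for illustration.
GULO_ACTIVITY = {
    "rat":        1.00,
    "goat":       1.15,
    "chimpanzee": 0.0,   # broken, like the human pseudogene
}

def best_orthologue(activity_by_species: dict[str, float]) -> str:
    """Pick the species whose enzyme version scores highest on the proxy metric."""
    return max(activity_by_species, key=activity_by_species.get)

print(best_orthologue(GULO_ACTIVITY))  # goat
```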
In the West we might not legally allow such products, but I can very well imagine a few scientists in a freer legal environment going along with such a plan.
So I’d suggest optimising for that first, and thinking about the fun stuff once survival is ensured.
If you write into the FAI that it prevents the fun stuff because the fun stuff is bad, we might not have that option.
OK, fair comment, although I’d note that the genetic approach doesn’t (and imo shouldn’t) only consider the welfare of humans, but also that of other species. Human genetics would probably have to be the starting point for prioritising them, though, otherwise we might end up with an FAI governing a planet of plankton or something.
While I’m quite interested in the potential of things like wearable tech and cyborgism, I feel we ought to be fairly cautious with the gene side of things, because of the unintended potential for fashion eugenics, branching off into competing species, etc. I feel existential risk questions have to come first, even if that’s not always the fun option. I see what you’re saying though, and I hope we find a way to have our cake and eat it too if possible.
Human genetics would probably have to be the starting point for prioritising them, though, otherwise we might end up with an FAI governing a planet of plankton or something.
Plankton doesn’t have what we consider consciousness. That’s why that goal is in the mission statement.
I feel we ought to be fairly cautious with the gene side of things, because of the unintended potential for fashion eugenics, branching off into competing species
Given that you aren’t the only person who thinks that way, Western countries might indeed be cautious, but that doesn’t mean that the same goes for East Asia or entrepreneurs in Africa.