Human brain enhancement must happen extremely slowly in order to be harmless. Due to fundamental chaos theory, as in the n-body problem (or even just the three-body problem, illustrated in the sketch below), it is impossible to predict the results of changing one variable in the human brain, because the change will simultaneously alter at least two other variables, and all three variables will influence each other at once. The cost of predicting results (including risk estimates of something horrible happening to the person) skyrockets exponentially for every additional second into the future.
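As a rough illustration of that divergence claim, here is a minimal Python sketch (assuming NumPy): two copies of the classic Pythagorean three-body configuration, differing by 1e-9 in a single coordinate, drift apart roughly exponentially. The initial conditions, step size, and fixed-step integrator are illustrative choices only, not anything from the comment; a fixed step is not accurate through close encounters, but the qualitative divergence is the point.

```python
# Minimal sketch: sensitive dependence on initial conditions in the
# planar three-body problem. Two runs differ by 1e-9 in one coordinate;
# their separation grows roughly exponentially. Illustrative only -- a
# fixed-step integrator is not accurate through close encounters.
import numpy as np

G = 1.0  # gravitational constant, arbitrary units
masses = np.array([3.0, 4.0, 5.0])  # classic Pythagorean (Burrau) problem

def accelerations(pos):
    """Pairwise Newtonian gravitational acceleration on each body."""
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return acc

def step(pos, vel, dt):
    """One velocity-Verlet (leapfrog) integration step."""
    half_vel = vel + 0.5 * dt * accelerations(pos)
    new_pos = pos + dt * half_vel
    new_vel = half_vel + 0.5 * dt * accelerations(new_pos)
    return new_pos, new_vel

# Bodies at the vertices of a 3-4-5 triangle, initially at rest.
pos_a = np.array([[1.0, 3.0], [-2.0, -1.0], [1.0, -1.0]])
vel_a = np.zeros_like(pos_a)
pos_b, vel_b = pos_a.copy(), vel_a.copy()
pos_b[0, 0] += 1e-9  # perturb a single variable

dt, steps = 2e-4, 50_000  # integrate to t = 10
for n in range(1, steps + 1):
    pos_a, vel_a = step(pos_a, vel_a, dt)
    pos_b, vel_b = step(pos_b, vel_b, dt)
    if n % 10_000 == 0:
        sep = np.linalg.norm(pos_a - pos_b)
        print(f"t = {n * dt:4.1f}   separation = {sep:.3e}")
```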
Rather than “upgrading” the first people who volunteer (which would be a race to the bottom), the only safe, sane way to augment a person is to do simple, obviously safe, and desirable things to a volunteer, such as giving most people the ability to get a full night’s rest in only 6 hours of sleep. After meticulous double-blind research to see what the side effects are, and after waiting a number of years or even decades, people can try gradually layering a second treatment on top of that. The most recommended option is treating aging-related issues, since that extends the time limit for observing more interactions and locating any complex problems.
The transhumanist movement is built on a cultural obsession with rapid self-improvement, but that desire for rapid gains is neither safe nor sane, and it will result in people racing to the bottom, stirring up all kinds of things in their brains and ending up in bizarre configurations. Rationalists with words should be the first to upgrade people, not transhumanists with medical treatments, as medical treatments will yield unpredictable results, and people are right to be suspicious of “theoretical medical upgrades” as unsafe.
I’m curious as to why you think this since I mostly believe the opposite.
Do you mean general “induce an organism to gain a function” research (which I agree shouldn’t be opposed), or specifically “cause a virus to become more pathogenic or lethal” (probably what most people here mean by it)?
Edit:
Your comment originally said you thought GoF research should go ahead. You’ve since edited it to make a different point (shifting from viral GoF to transhumanist cognitive enhancement).
I think they’re talking about AI gain of function. They are very similar, though, and will soon become exactly the same thing, as AI and biology merge into one field; that merger is already most of the way to happening.
This introduces some interesting topics, but the part about “AI gain of function research” is false; I was saying nothing like that. I’ve never heard “gain of function research” used to refer to AI before. I was referring to biology, and I have no opinion whatsoever on any use of AI for weapon systems or warfare.
ah, okay.
whoa wut. This is a completely different comment than it was before. Is it intended to be an equivalent meaning, from your perspective?
[Quote removed at Trevor1’s request; he has substantially changed his comment since this one.]
I expect that the opposite of this is closer to the truth. In particular, I expect that the more often power bends to reason, the easier it will become to make it do so in the future.
I agree with this strongly, with some complicated caveats that I’m not sure how to specify precisely.