I’m also biased in favor of human-centric (as opposed to trans/posthuman) solutions to current global problems. This is mostly because, well, I’m human and I like it that way. I feel like if humanity can’t solve its own problems at its current level of intelligence, it’s because we’re too lazy, not because we aren’t smart enough.
I’m curious to unpack this a bit. I have a couple of conflicting interpretations of what you might be getting at here; could you clarify?
At first, it sounded to me as if you were saying that you consider intelligence increase to be “transhuman”, but laziness reduction (diligence increase?) not to be. Which made me wonder: why the distinction?
Then, I thought you might be saying that laziness/diligence is morally significant to you, while intelligence increase is not morally significant. In other words, if humanity fails because we are lazy, we deserved to fail.
Am I totally misreading you? I suspect I am, at least in one of the above interpretations.
I haven’t unpacked the value/bias for myself yet, and I’m pretty sure at least part of it is inconsistent with my other values.
I’m not necessarily morally opposed to artificial (e.g. pharmaceutical or cybernetic) intelligence OR diligence enhancements. But I would be disappointed if it turned out that humanity NEEDED such enhancements in order to fix its own problems.
I believe that diligence is something that can be taught, without changing anything fundamental about human nature.