Since many LW readers responded negatively to my apparently controversial idea about AI (see my previous post) without commenting on why, I've learned that it may be better to build up an audience willing to engage with you by leading with 'safer' posts.
The cost was debuting in this community with negative karma.
The other post wasn’t downvoted because it was controversial but because it was badly argued. It was also written in your own idiosyncratic style instead of trying to match the typical style of LessWrong posts.
I didn't vote on either, but now that I've seen them, my reaction is negative, and here is why:
1) In the “neuromorphic AI” post, I already disagree with point 1. First objection: what if the “goal” includes something like diversity or individuality? In that case, merging into one mind would go strictly against the goal. Second, we would probably want to colonize the universe, and the speed of light is limited, so at the very least we would need individual minds in different solar systems; otherwise it would take literally years for a signal to travel from one part of the brain to another. Or maybe one brain in the middle of the galaxy and tons of unthinking machines collecting resources and energy? I don't know; that sounds too fragile.
Reading further, you seem to want to throw all human value away in the hope that a sufficiently smart AI would rediscover it anyway. I think the space of possible values is very large, and the AI would just be very different: either focused only on survival, or pursuing idiosyncratic goals that would most likely be incomprehensible (and morally/emotionally worthless) to us.
To summarize: you present a list of statements without arguing for them well, and I see counter-arguments, half of which should be obvious to anyone who regularly reads LessWrong.
2) This article… just didn't seem well written or well explained. For example, it might help to provide a specific example for each of the general conclusions.
Thanks for taking the time to explain.
It's not controversial if nobody but you likes it. At the current 5 votes and −12 karma, that looks like what has happened so far.