Gaining knowledge at a price
1. In our lives we often pay a price for knowledge (a different price for each circumstance)
(Whether it be something negative happening to us or missing a perceived valuable opportunity)
Sometimes we don’t recoup the cost of that knowledge during our lifetime; other times we gain it back many times over
(Sometimes it’s an unconscious purchase; sometimes it’s even made against one’s will)
2. Sometimes it’s a fully thought-out transaction, although later on we often forget the reasoning behind our choices (focusing only on the circumstance and the outcome, and forgetting how valuable the experience it provided was)
For instance, those experiences may have guided us away from certain bad paths in our lives, but we never account for the value of the absence of those paths, because they are no longer part of our calculations about where to go; we discard them right away
3. For example, as we grow older, we might think, ‘I haven’t encountered anything like that since, so I didn’t gain much from that experience; therefore it wasn’t worth it’
But you haven’t encountered it since precisely because you have knowledge about it; you may have learned something that allows you to instinctively prevent it from showing up
And so we take for granted all the times we make the correct choice (whether big or small), forgetting that we learned it at some point, and possibly at a certain price
Since many LW readers responded negatively to my apparently controversial idea about AI (see my previous post) without commenting on why, I learned that it may be better to build up an audience that is willing to engage with you by leading with ‘safer’ posts
And the cost was debuting in this community with negative karma
The other post wasn’t downvoted because it was controversial but because it was badly argued. It was also written in your own idiosyncratic style instead of trying to match the typical style of LessWrong posts.
I didn’t vote on either, but now that I see them, my reaction is negative, and here is why:
1) In the “neuromorphic AI” post I already disagree with point 1. First objection: what if the “goal” includes something like diversity or individuality, in which case merging into one mind would go strictly against the goal? Second, we would probably want to colonize the universe, and the speed of light is limited, so at the very least we would need individual minds in different solar systems; otherwise it would take literally years for a signal to go from one part of the brain to the other. Or maybe one brain in the middle of the galaxy, and tons of unthinking machines collecting resources and energy? Dunno, that sounds too fragile.
Reading further, you seem to want to throw all human value away in the hope that a sufficiently smart AI would rediscover it anyway. I think the space of possible values is very large, and the AI would just be very different: either focused only on survival, or having its own idiosyncratic goals that would most likely be incomprehensible (and morally/emotionally worthless) to us.
To summarize, you present a list of statements without arguing for them well, and I see counter-arguments, half of which should be obvious to a person who regularly reads LessWrong.
2) This article… just didn’t seem well written/explained. For example, it might help to provide specific examples for each of the general conclusions.
Thanks for taking the time to explain.
It’s not controversial if nobody but you likes it. With the post currently standing at 5 votes and −12 karma, that looks like what has happened so far.