Thanks, I often make that mistake. I just write without thinking much, not recognizing the potential importance the issues at hand might bear. I guess the reason is that I mainly write out of an urge for feedback and to alleviate mental load.
It’s not really meant as an excuse but rather an exposure of how one can use the same arguments to support a different purpose while criticizing others for those arguments rather than for their differing purpose. The bottom line is that there will have to be a tradeoff between protecting values and violating those same values to guarantee their preservation. You just have to decide where to draw the line. But I still think that, given extreme possibilities which cause extreme emotions, the question of potential extreme measures is a valid one.
I also note that when discussing potentially contentious subjects you need to be triply careful to be clear and impossible to credibly misunderstand.
That is true, thanks. Although I like to assume that this is a forum where people will inquire about meaning before condemning. I guess my own comments disprove this. But I take it lightly, as I’m not writing a dissertation here but merely a comment on a comment in an open thread.
Paraphrase of the middle: I value the freedom of ideas above the unintentional suffering of the few people reading said ideas. Some people on LW value the potential suffering of people at the hands of rogue AI above the freedom of ideas that might cause said suffering.
Maybe I do too; I’ll tell you once I’ve made up my mind. My intention in starting this whole discussion was, as stated several times, the potential danger posed by people trying to avoid the potential danger of unfriendly AI (which might be a rationalization).
P.S.
Your comment absolutely hits the nail on the head. I’m an idiot sometimes. And really tired today too... argh, excuses again!