I’m sorry, but it really looks like you’ve very much misunderstood the technology, the situation, the risks, and the various arguments that have been made, across the board. Sorry that I couldn’t be of help.
Thanks so much for the feedback :) Could you (or someone else) go further into where I misunderstood something? Because at least right now, it seems like I’m genuinely unaware of something which all of you others know.
I currently believe that all the AGI “researchers” are delusional just for thinking that safe AI (or AGI) can even exist. And even if it could exist in a “perfect” world, the intermediate steps would be far more “dangerous” than the end result of AGI, namely publicly available uncensored LLMs. At the same time, if we keep censoring LLMs, humanity will remain stuck in all of its current crises.
Where am I going wrong?