Could you do me a BIG FAVOR and every time you write “Yvain says...” or “Yvain believes...” in the future, follow it with ”...according to my interpretation of him, which has been consistently wrong every time I’ve tried to use it before”? I am getting really tired of having to clean up after your constant malicious misinterpretations of me.
So everyone should be aware that whenever Dmytry/private_messaging claims Yvain said something, that’s almost always wrong according to Yvain’s own view of what Yvain said.
“I suppose the difference is whether you’re doing the Intel attack now, or in a hypothetical future in which Intel is making brain simulators that seem likely to become AGI. As someone else mentioned, if we’re talking about literally THEY ARE BUILDING SKYNET RIGHT NOW, then violence seems like the right idea.”
Emphasis mine. In this original quote, in the hypothetical future where Intel is building brain simulators that seem likely to become artificial general intelligence, he supports violence. It is as clear as it can be.
His subsequent re-formulation to make himself look less bad was:
“Even Yvain supports violence if AI seems imminent”. No, I might support violence if an obviously hostile unstoppable SKYNET-style AI seemed clearly imminent.
Now, the caveat here is that he would count brain simulators built by Intel in that hypothetical future as an example of “an obviously hostile unstoppable SKYNET-style AI”, which is a clear contradiction: if the AI were so obviously hostile, Intel would not be building those brain emulations in the first place.