I wasn’t familiar with the idea of cyborgism before. Found your comment that explains the idea.
As far as I’m concerned, anyone being less than the hard superintelligence form of themselves is an illness; the ai safety question fundamentally is the question of how to cure it without making it worse
This (as well as the rest of the comment) resonates with me. I feel seen.
The question you ask doesn’t have an objectively correct answer. The entity you are interacting with doesn’t have any more intelligence than the regular version of me, only more “meta-intelligence”, if that idea makes sense.
There isn’t actually a way to squeeze more computational power out of human brains (and bodies). There is only a way to use what we already have better.
[the words I have written in response to your comment feel correct to me now, but I expect this to unravel on the scale of ~6 hours, as I update on and deeper process their implications]
sounds like you might be slightly high on self improvement, and you’re not talking to an ai at all?
that comment was a fun one but I just meant ai co-writing, with heavy retry and edit. beware attempts to self improve fast mentally, if that really is what you’re talking about; it’s possible to do well in ways that make you more effective at helping yourself and others, even at the same time, and it’s also possible to update too hard on a mistake.
I did not use AI assistance to write this post. (I am very curious what gave you that impression!)
Thank you, these are very reasonable things to say. I believe I am aware of risks (and possible self-deceptive outcomes) inherent to self-improvement. Nevertheless, I am updating on your words (and the fact that you are saying them).