Yes, so, on further reflection: what I am doing when describing my day inside a positive singularity is recalling my best peak experiences from the life I have had, trying to resurrect them and, moreover, to push them further, beyond the limits I have not been able to overcome.
Perhaps that is what one should do: ponder one’s peak experiences, think about how to expand their boundaries and limits, how to mix and match them together (when that might make sense), and so on...
With respect to what you write here and what you wrote earlier, in particular “and have solutions to some problems you wanted to solve, but could not solve them before, novel mental visualization of math novel to you, novel insights, and an entirely new set of unsolved problems for the next day, and all of your key achievements of the night surviving into subsequent days).”, it seems to me that you are describing a situation in which there is a machine that can seemingly overcome all computational, cognitive, and physical limits, and that will simultaneously empower you to overcome all of those limits.
The machine is completely different from all machines that humanity has invented: while a telescope, for example, enables us to see the surface of the moon, we do not depend on the goodwill of the telescope, and a telescope could not explore and understand the moon without us.
Maybe my imagination of such a new kind of post-singularity machine leaps too far, but I just don’t see a role for you in “solving problems” in this world. The machine may give you a set of problems or exercises to solve, and maybe you can be happy when you solve them, much as when you complete a level of a computer game.
The other experiences you describe seem like “science and philosophy at a rave/trance party”. Except that, if you are serious about the omnipotence of the AGI, it is probably more like reading a science book or playing with a toy lab set at a rave/trance party, because if you could come up with any new insights, the AGI would have had them a lot earlier.
So in a way, this confirms my intuition that people who are positive about AGI seem to expect a world similar to being on (certain) drugs all the time. But maybe I misunderstand that.
Maybe my imagination of such a new kind of post-singularity machine leaps too far, but I just don’t see a role for you in “solving problems” in this world. The machine may give you a set of problems or exercises to solve, and maybe you can be happy when you solve them, much as when you complete a level of a computer game.
When I currently co-create with GPT-4, we both have a role.
I do expect this to continue if tight coupling with electronic devices via non-invasive brain-computer interfaces becomes possible. (I do expect such tight coupling via non-invasive BCI to become possible; nothing prevents it even today except human inertia and the need to manage the associated risks, but in a positive singularity this option would be “obviously available”.)
But I don’t know whether “I” become superfluous at some point (that is certainly a possibility, at least when one looks further into the future; many people hope for an eventual merge with superintelligent systems precisely out of this desire not to become superfluous; the temporary tight coupling via non-invasive BCI I mention above is, in some sense, a first, relatively moderate step towards that).
The machine is completely different from all machines that humanity has invented
Yes, but at the same time I do expect this to be a neural machine from an already somewhat known class of neural machines (a somewhat more advanced, more flexible, more self-reconfigurable and self-modifiable neural machine compared to what we have in mainstream use right now; but the modern Transformer is more than halfway there already, I think).
I do, of course, expect the machine, and myself, to obey those “laws of nature” for which no loophole or workaround can be found, but I also think that there are plenty of undiscovered loopholes and workarounds for many of them.
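As an aside, the sense in which the modern Transformer is already partway toward a “self-modifiable” neural machine can be shown with a toy sketch (my own illustration, not from this conversation; all names and shapes here are made up for the example): a fixed linear layer applies the same weights to every input, while self-attention computes its mixing weights from the input itself, so the effective transformation changes with the data.

```python
import numpy as np

rng = np.random.default_rng(0)

def fixed_layer(x, W):
    # Static computation: the same weight matrix W, whatever the input.
    return x @ W

def attention_mixing_matrix(x):
    # Input-dependent computation: the row-stochastic mixing matrix A
    # is built from x itself (single head, identity Q/K projections,
    # for clarity).
    scores = x @ x.T / np.sqrt(x.shape[1])
    a = np.exp(scores - scores.max(axis=1, keepdims=True))
    return a / a.sum(axis=1, keepdims=True)  # row-wise softmax

def self_attention(x):
    return attention_mixing_matrix(x) @ x

x1 = rng.normal(size=(4, 8))
x2 = rng.normal(size=(4, 8))
W = rng.normal(size=(8, 8))

# The fixed layer reuses W verbatim for both inputs; attention's effective
# mixing weights differ between inputs because they are derived from them.
A1, A2 = attention_mixing_matrix(x1), attention_mixing_matrix(x2)
assert not np.allclose(A1, A2)           # data-dependent "weights"
assert np.allclose(A1.sum(axis=1), 1.0)  # each row is a proper softmax
```

In this reading, attention is a small step toward self-reconfiguration: part of the computation’s “weights” are generated on the fly from the data, rather than being frozen at training time.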
if you could come up with any new insights, the AGI would have had them a lot earlier.
It could have had them a lot earlier. But one always has to make some choices (I can imagine it/them making sure to make all our choices before we make them, but I can also imagine it/them choosing not to do so).
Even the whole ASI ecosystem (the whole ASI+humans ecosystem) can only explore a subspace of what’s possible at any given time.
So in a way, this confirms my intuition that people who are positive about AGI seem to expect a world similar to being on (certain) drugs all the time.
More precisely, that would be similar to having the option of turning this or that drug on and off at any time. (But given that I expect this to be mostly managed via strong mastery of tight coupling via non-invasive BCI, there are some specifics: the risk profiles are different, the temporal dynamics are different, and so on.)
I imagine that at least occasional nostalgic recreation in an “unmodified reality” will also be popular. Or, perhaps, more than occasional.