Maybe my imagination of such a new kind of post-singularity machine somehow leaps too far, but I just don’t see a role for you in “solving problems” in this world. The machine may give you a set of problems or exercises to solve, and maybe you can be happy when you solve them, like when you complete a level of a computer game.
When I co-create with GPT-4 today, we both have a role.
I do expect this to continue if tight coupling with electronic devices via non-invasive brain-computer interfaces becomes possible. (I do expect such tight coupling to become possible; nothing prevents it even today except human inertia and the need to manage the associated risks, and in a positive singularity this option would be “obviously available”.)
But I don’t know whether “I” become superfluous at some point (that’s certainly a possibility, at least when one looks further into the future; many people hope for an eventual merge with superintelligent systems precisely because of this desire not to become superfluous; the temporary tight coupling via non-invasive BCI I mention above is, in some sense, a first, relatively moderate step towards that).
The machine is completely different from all machines that humanity has invented
Yes, but at the same time I do expect this to be a neural machine from an already somewhat known class of neural machines (a somewhat more advanced, more flexible, more self-reconfigurable and self-modifiable neural machine compared to what we have in mainstream use right now; but the modern Transformer is more than halfway there already, I think).
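To make “self-modifiable” slightly more concrete, here is a minimal toy sketch, in the spirit of Hebbian “fast weights”, of a layer that rewrites its own weights as a function of its input. This is my own hypothetical illustration, not something proposed in this conversation; the class name, the update rule, and the `lr` parameter are all assumptions chosen for the demo.

```python
# A toy sketch (hypothetical illustration, not a real proposal) of a
# "self-modifying" layer: each forward pass edits the layer's own
# weight matrix via a Hebbian-style outer-product update.
import numpy as np


class SelfModifyingLayer:
    def __init__(self, dim, rng=None):
        rng = rng or np.random.default_rng(0)
        # Slow weights: the part of the layer that gets self-modified.
        self.W = rng.normal(scale=dim ** -0.5, size=(dim, dim))
        # A fixed matrix that proposes updates to W from the current input.
        self.U = rng.normal(scale=dim ** -0.5, size=(dim, dim))
        self.lr = 0.01  # how strongly each input rewrites the layer

    def forward(self, x):
        y = np.tanh(self.W @ x)
        # Self-modification: derive a weight update from the layer's own
        # activity on this input and apply it immediately.
        delta = np.outer(np.tanh(self.U @ x), x)
        self.W += self.lr * delta
        return y


layer = SelfModifyingLayer(dim=4)
x = np.ones(4)
print(layer.forward(x))  # repeated calls give different outputs,
print(layer.forward(x))  # because W has drifted in between
```

A Transformer’s attention can be read as doing something loosely analogous at inference time (input-dependent, rapidly changing effective weights), which is roughly what I mean by it being “more than halfway there”.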
I, of course, do expect the machine, and myself, to obey those “laws of nature” for which a loophole or a workaround cannot be found, but I also think that there are plenty of undiscovered loopholes and workarounds for many of them.
if you could come up with any new insights, the AGI would have had them a lot earlier.
It could have had them a lot earlier. But one always has to make some choices (I can imagine it/them making sure to make all of our choices before we make them, but I can also imagine it/them choosing not to do so).
Even the whole ASI ecosystem (the whole ASI+humans ecosystem) can only explore a subspace of what’s possible at any given time.
So in a way, it confirms my intuition that people who are positive about AGI seem to expect a world that is similar to being on (certain) drugs all of the time.
More precisely, that would be similar to having the option of turning this or that drug on and off at any time. (But given that I expect this to be managed mostly via strong mastery of tight coupling via non-invasive BCI, the specifics differ: the risk profiles are different, the temporal dynamics are different, and so on.)
I imagine that at least occasional nostalgic recreation in an “unmodified reality” will also be popular. Or, perhaps, more than occasional.