This feels important.
The first portion seems particularly useful as a path toward cognitive enhancement with minimal AI (I’m thinking of the portion before “copies of his own mind...” slightly before “within a couple of years” jumps farther ahead). It seems like a roadmap to what we could accomplish in short order, given the chance.
I hadn’t gotten an intuitive feel for some of the low-hanging fruit in cognitive enhancements. Much of this could be accomplished very soon. Some of it can be accomplished now.
A few thoughts now, more later:
AI already has very good emotional intelligence; if we applied it to more of our decisions and struggles, it would probably be very helpful. Ease of use is one barrier to doing that. Having models “watch” and interpret what happens to us through a wearable would break down that barrier. Faster loops between helpful AI, particularly emotional/social intelligence AIs, might be extremely useful. The emulation of me wouldn’t have to be very good; it would just need some decent ideas about “what might you wish you’d done, later?” Of course, the better those ideas were (say, if they were produced by something smarter than me, or by me with a lot more time to think), the more useful they’d be. But even something about as smart as I am in System 1 terms (like Claude and GPT-4o) might be pretty useful if I got its ideas in a fast loop.
Part of the vision here is that humans might become far more psychologically healthy, relatively easily. I think this is true. I’ve studied psychology—mostly cognitive psychology but a good bit of clinical and emotional theories as well—for a long time. I believe there is low-hanging fruit yet to be plucked in this area.
Human psychology is complex, yes, but our efforts thus far have been clumsy graspings in the dark. We can give people the tools to steadily ease their traumas and to work toward their goals. AI could speed that up dramatically. I’m not sure it has to be that much more emotionally intelligent than a human; merely having unlimited patience and enthusiasm for the project of working on our emotional hangups might be adequate.
Of course, the elephant in the room is: how do we get this sort of tool AI, and even a little time to use it, without summoning the demon by turning it into general, agentic AGI? The tools described here would wreak havoc if someone greedy told them “just take that future-self predictive loop and feed it into these tools,” and then hired them out as labor. Our character wouldn’t have a job, because one person would now be doing the work of a hundred.
Yes, there is a very lucky possibility in which we get this world by accident: we could have some AI with excellent emotional intelligence, and others with excellent particular skills, and none that can do the planning and big-picture thinking that humans are doing. Even in that case, this person would be living through a traumatic period in history in which far fewer people are needed for work, so unemployment is rising rapidly.
So in most of the distribution of futures that produce a story like this, I think we must assume that it isn’t chance. AGI has been created, and both alignment problems have been solved—AI and human. Or else only AI alignment has been solved, and there’s been a soft takeover that the humans don’t even recognize.
Anyway, this is wonderful and inspiring!
Science fiction serves two particularly pragmatic purposes (as well as many purposes for pleasure and metaphor): providing visions of possible futures to steer toward, and visions of possible futures to steer away from.
We need far more sci-fi that maps routes to positive futures.
This is a great step in that direction. We need something to fight for. The character here could be all of us if we figure out enough of alignment in time.
More later. This is literally inspiring.
I still think that adequately aligning both AI/AGI and the society that creates it is the primary challenge. But this type of cognitive/emotional enhancement is one tool we might use to help us solve alignment.
And it’s part of the literally unimaginable payoff if we do collectively solve the problems facing us. This type of focused effort to imagine the payoffs will help us work toward those futures.
If you enjoy positive sci-fi I highly recommend the Bobiverse books by Dennis E. Taylor! Very optimistic and surprisingly grounded.