I’ve been trying my best to think of something that AGI could do which I really love and deeply endorse.
I can think of some nice things. New kinds of animals. Non-habit-forming heroin. A stupid-pointless-disagreements-between-family-members fixomatic, maybe. Turn everyone hot.
None of this makes me joyful and hopeful. Just sounds neat and good. Humans seem pretty damn good at inventing tech etc ourselves anyway.
I think I might have assumed or copy-pasted the “AI is truly wonderful if it goes truly well” pillar of my worldview. Or maybe I’ve forgotten the original reasons I believed it.
What exactly did that great AI future involve again?
I mean, like, immortality? Abundance? Perfect health? World peace via all the means mentioned, plus improved coordination? Human intelligence augmentation, where by intelligence I mean everything, including but not limited to creativity, wisdom, clarity of perception, and self-awareness? Space colonization, which is, first of all, the ability to opt out of our current civilization and try something else?
You can say “but humans can invent all of this eventually anyway”, but I’ll dare to remind you that 14,000 children are dying every day and, conditional on alignment, AGI is the fastest way to stop it.
Do you care that much about which way is fastest? Just “get the things you like a bit sooner” doesn’t feel super compelling to me.
14,000 children dying every day means that getting a solution even an hour earlier saves ~583 of them in expectation, which seems really worthwhile. “Children not dying” is a pretty compelling thing to want even a bit sooner.
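A quick back-of-the-envelope check on that ~583 figure, assuming the deaths are spread evenly over the day:

$$\frac{14{,}000\ \text{deaths/day}}{24\ \text{hours/day}} \approx 583\ \text{deaths/hour}$$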
Good point
Do you think there’s a pathway to immortality without AGI? We still haven’t made any more progress on aging than the Romans did.
[edit: pinned to profile]
I want to be able to calculate a plan that converts me from biology into a biology-like nanotech substrate that is made of sturdier materials all the way down, which can operate smoothly at 3 kelvin and an associated appropriate rate of energy use; more clockworklike—or would it be almost a superfluid? Both, probably: clockworklike but sliding through wide, shallow energy wells in a superfluid-like synchronized dance of molecules. Then I’d like to spend 10,000 years building an artful airless megastructure out of similarly strong materials as a series of rings in orbit of Pluto. I want to take a trip to Alpha Centauri every few millennia for a big get-together of space-native beings in the area. I want to replace information death with cryonic sleep, so that nothing that was part of a person is ever forgotten again. I want to end all forms of unwanted suffering. I want to variously join and leave low-latency hiveminds, retaining my selfhood and agency while participating in the dance of a high-trust, high-bandwidth organization that respects the selfhood of its members and balances their agency smoothly as we create enormous works of art in deep space. I want to invent new kinds of culinary arts for the 2 to 3 kelvin lifestyle. I want to go swimming in Jupiter.
I want all of Earth’s offspring to ascend.
Check out the Fun Theory sequence, if you haven’t already.
Thanks for the pointer. Haven’t read it.
If we can do that due to AGI, almost surely we can solve aging, which would be truly great.
We’ll solve it either way, right?
I’d guess so, but with AGI we’d go much much faster. Same for everything you’ve mentioned in the post.
Without AGI, no chance in our lifetimes or any lifetimes coming soon. Possibly never, given dysgenic effects and a declining world population.
AGI? Not just a few tricks with chemistry and proteins?
Current biomedical knowledge says no: aging is extremely complex, and simple tricks have unacceptable failure rates. Remember, you want to turn everyone hot, not kill 10-50 percent of them during the first treatment, with all the subjects then developing untreatable fatal cancers a few years after treatment.
These aren’t hypotheticals: cellular reprogramming, one of the few actual techniques that seems to reverse aging, has side effects like these.
If you want to make everyone hot and keep them alive for centuries, you need many thousands, maybe millions, of separate techniques, many specific to exactly one living patient. Or, essentially, a network of powerful AGI and ASI systems that model each patient with an accurate model of human bodies too complex for any human to learn; the system then chooses which drugs or genetic edits to make, maximizing the chance of success according to the model.
The simulation models are also updated for every patient treated, which is not something any study or any living doctor can do.
And all of this can happen in seconds, so the medical system can save patients in the process of dying from failures that current medicine isn’t even aware of.
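A minimal toy sketch of the choose-then-update loop being described (all names and numbers here are hypothetical placeholders; the real system would obviously be vastly more complex than a lookup table of success rates):

```python
# Toy illustration only: pick the intervention the model rates highest,
# then feed the observed outcome back into the model, so every treated
# patient improves the model -- the loop described in the comment above.
import random


class PatientModel:
    """Stand-in for the per-patient simulation model: here, just estimated
    success probabilities per candidate intervention, updated per outcome."""

    def __init__(self, interventions):
        # Neutral prior for each hypothetical candidate intervention.
        self.estimates = {name: {"successes": 1, "trials": 2} for name in interventions}

    def predicted_success(self, name):
        stats = self.estimates[name]
        return stats["successes"] / stats["trials"]

    def choose_intervention(self):
        # "maximizing the chance of success per the model"
        return max(self.estimates, key=self.predicted_success)

    def record_outcome(self, name, succeeded):
        # The model is updated for every patient treated.
        self.estimates[name]["trials"] += 1
        if succeeded:
            self.estimates[name]["successes"] += 1


if __name__ == "__main__":
    model = PatientModel(["drug_A", "gene_edit_B", "drug_C"])  # hypothetical options
    for patient in range(5):
        choice = model.choose_intervention()
        outcome = random.random() < 0.5  # placeholder for the real-world result
        model.record_outcome(choice, outcome)
        print(patient, choice, round(model.predicted_success(choice), 2))
```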
I would say value preservation and alignment of the human population. I think these are the hardest problems the human race faces, and the ones that would make the biggest difference if solved. You’re right, humanity is great at developing technology, but we’re very unaligned with respect to each other and are constantly losing value in some way or another.
If we could solve this problem without AGI, we wouldn’t need AGI. We could just develop whatever we want. But so far it seems like AGI is the only path to reliable alignment and to avoiding Molochian issues.
I agree deeply with the first paragraph. I was going to list coordination as the only great thing I know of where AI might be able to help us do something we really couldn’t do otherwise. But I removed it because it occurred to me that I have no plausible story for how that would actually happen. How do you imagine that going down? All I’ve got is “some rogue benevolent actor does CVE or pivotal act” which I don’t think is very likely.
Bah, nobody’s mentioned social applications of superhuman planning yet.
You could let an AI give everyone subtle nudges, and a month later everyone’s social life will be great. You’ll see your family the right amount, and you’ll have friends who really get you, whom you see often and do fun things with. Sex will occur. Parties and other large gatherings will be significantly better.
The people to make this possible are all around us, it’s just really hard to figure out how to make it happen.
Oh, I love this answer. Seems like pretty narrow AI would be adequate, though. Also, the same tech could probably be used to, e.g., start or stop revolutions. Inspiring anyway.
Human potential is the big one for me.
Personally, I feel that my imagination is limited—not just around the capabilities of AGI, but in common work scenarios.
There are lots of people out there who are a lot smarter than me, but AGI can help me realise more of my human potential.
This applies both at a personal level and, eventually, at a societal level and at a species level beyond that.
What this looks like and how it is made safe through safeguards, I don’t know. But I’m interested in how AGI can help us achieve our human potential in ways that I as an individual can’t imagine without the help of AGI / the sum of human knowledge.
How super/general the AI is is a knob you can set to whatever you want. With zero set to the present day, if you turn it up far enough you get godlike capability of which I find it impossible to say anything.
More modest accomplishments I would pay, say, the price of a car for would include a robot housekeeper that can cope with all of a human’s clutter, and clean everything, make beds, etc. as well as I can and better than in practice I will. Or a personal assistant that I can rely on for things like making complex travel arrangements, searching for stuff on the internet with the accuracy of Google and the conversational interface of ChatGPT, and having serious discussions with on any subject.
Beyond that my creativity falters. In 1980 I couldn’t even have foreseen Google, smartphones, or cat videos.
So, I don’t think we need AGI for this… but: digital humans. Uploads from preserved brains. Freedom from the carbon-based substrate, and the benefits that go with that, like immortality and speed-of-light travel.
I suggest that ideally we keep AI as weak as possible to get us quickly to digital humans, and then have the digital humans do the AI work. Never go down the path of non-human intelligence which is fully superior to human intelligence. Keep the future human! Empower Us, not Them!
(This response gives me a human-chauvinist vibe. I’m sympathetic to really carefully thinking things through before handing control to quite alien beings, but at some point, we’ll probably want control to be in the hands of beings which look very different from current humans. Also, the direct value might come from entities which aren’t well described as human.)
Yes, that’s correct. I’m a transhuman chauvinist. I want our present and future to belong as much as possible to humans and our enhanced descendants, not to alien minds who may not share my values. There absolutely are non-human minds I’d like to create, experience living with, and share the future with in an equitable way, but it is a highly restricted set based on compatibility with my own values. For instance, uplifted mammals or digital humans. Many people might not describe either of those groups as ‘human’, but I’d still consider them within the human-ish group. Of course, the human-ish group isn’t a team, it’s a category. Within it there are opposed factions actively killing each other. I’d prefer if that weren’t the case. I don’t have a plan for aligning human-ish creatures with each other sufficiently to achieve a peaceful society, but I do suspect that this would be easier than aligning a group of actors that includes both human-ish actors and very alien actors. Until we have such a plan, we probably shouldn’t hand much power over to potentially non-peaceful alien actors.
I really want a brain computer interface that is incredibly transformative and will allow me to write in my head and scaffold my thinking.
This one might actually be doable without super-powerful AIs. Current progress in non-invasive brain-computer interfaces is rather impressive...
I do think people should organize and make it go faster, but this should be achievable with the current level of AI.
(Unlike practical immortality and eventually becoming God-like, both of which do seem to require super-intelligence and which are what a lot of people really want. Being able to personally know, understand, and experience everything worth experiencing in the current world, and more beyond. This does require the power of super-intelligence.)