Let me summarize so I can see whether I got it: So you see “place AI” as a body of knowledge that can be used to make a good-enough simulation of arbitrary sections of spacetime, where all events are precomputed. That precomputed (thus, deterministic) aspect you call “staticness”.
Yes, I decided to start writing a book in posts here and on Substack, starting from the Big Bang and the ethics, because otherwise my explanations are confusing :) The ideas themselves are counterintuitive, too. I try to physicalize, work from first principles and use TRIZ to try to come up with ideal solutions. I also ran a 3-year-long thought experiment in which I modeled the ideal ultimate future: basically, how everything would work and look if we had infinite compute and no physical limitations. That’s why some of the things I mention will probably take some time to implement in their full glory.
Right now an agentic AI is a librarian who has almost all the output of humanity stolen and hidden in its library, which it doesn’t allow us to visit; it just spits short quotes at us instead. But the AI librarian visits (and even changes) our own human library (our physical world) and has already stolen copies of the whole output of humanity from it. Feels unfair. Why can’t we visit (like in a 3D open-world game) and change (direct-democratically) the AI librarian’s library?
I basically want to give people everything except the agentic AIs, because I think people should remain the most capable “agentic AIs”; otherwise we’ll pretty much guarantee uncomfortable and fast changes to our world.
There are ways to represent the whole simulated universe as a giant static geometric shape:
Each moment of time is a giant 3D geometric shape of the universe; if you stack these shapes on top of each other, you effectively get a 4D shape of spacetime that is static but contains all the information about the dynamics/movements. So the 4D shape is static, but you choose some smaller 3D shape inside it (probably that of a human agent) and “choose the passage” from one human-like-you shape to another, making the static 4D shape feel like the dynamic 3D world that you experience.
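The stacking idea above can be sketched as a toy model (a minimal sketch, assuming each “moment” is just a small voxel grid; the names and sizes here are illustrative, not anything from the post):

```python
import numpy as np

# Toy model: each "moment" is a tiny 3D voxel grid of the universe.
# Stacking the moments along a new time axis yields one static 4D block.
moments = [np.random.rand(4, 4, 4) for _ in range(10)]  # ten 3D snapshots
spacetime = np.stack(moments, axis=0)                   # shape (10, 4, 4, 4)

# "Experiencing time" is just choosing a passage through the static block:
# slicing out consecutive 3D shapes recovers the dynamics.
for t in range(spacetime.shape[0]):
    snapshot = spacetime[t]  # the 3D universe at moment t
```

Nothing in `spacetime` ever changes; only the choice of which 3D slice to look at next does.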
The whole 4D thing looks a lot like the long-exposure photos I shared somewhere in my comments on the current post.
It’s similar to the way a language model is a static geometric shape (a pile of words and vectors between them) but “the prompt/agent/GPU makes it dynamic” by computing the passage from word to word.
This approach is useful because it lets us keep “a blockchain” of moments of time and preserve our history for posterity. Instead of tons of GPUs (for computing the AI agents’ choices and freedoms, the time-like, energy-like dynamic stuff), we can have tons of hard drives (for keeping the intelligence, the space-like, matter-like static stuff); that’s much safer, about as safe as it gets.
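A minimal sketch of that “blockchain of moments” idea, assuming plain hash-chaining (the function and field names are illustrative, not a real protocol):

```python
import hashlib

# Each stored moment records the hash of the previous one, so the
# preserved history can't be silently altered: tampering with any
# earlier moment breaks every later hash.
def chain(moments):
    blocks, prev = [], "0" * 64
    for m in moments:
        h = hashlib.sha256((prev + m).encode()).hexdigest()
        blocks.append({"moment": m, "prev": prev, "hash": h})
        prev = h
    return blocks

blocks = chain(["moment-0", "moment-1", "moment-2"])
```

Storing such a chain needs only disks, not compute: verification is cheap, and nothing in it ever has to “run”.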
Or we can just go the familiar road and make it more like an open-world computer game, without any AI agents of course, just sophisticated algorithms like in modern games; in that case it’s not completely static.
And find ways to expose the whole multimodal LLM to casual gamers/Internet users as a 3D world, but with some familiar UI.
I think if we have:
the sum of freedoms of agentic AIs > the sum of freedoms of all humans,
we’ll get in trouble. And I’m afraid we’re already ~10-50% of the way there (a wild guess; I should probably count it). Some freedoms are more important than others, like the freedom to keep all the knowledge in your head: AI agents have it, we don’t.
We can get everything with tool AIs and place AIs. Agentic AIs don’t do any magic that non-agentic AIs can’t; they just replace us :)
The ideal ASI just delivers everything to you instantly: a car, a world, 100 years as a billionaire. We can get all of that in the multiversal static place ASI, with the additional benefit of being able to walk there and see all the consequences of our choices. The library is better than the strict librarian. The artificial heavens are better than the artificial “god”. In fact, you don’t need the strict librarians or the artificial “god” at all to get everything.
The ideal ASI will build the multiversal static place ASI for us anyway, but it will do it too quickly, without listening to and understanding us as much as we want (it’s an unnecessary middleman; we can do it all direct-democratically, somewhat like people build worlds in Minecraft), and with spooky mistakes.
Thanks for your questions and thoughts, they’re always helpful!
P.S. If you think it’s possible to deduce something about our ultimate future, you may find this tag interesting :) And I think the story is not bad:
https://www.lesswrong.com/w/rational-utopia