AI People Alpha Launch: AI NPCs Beg Players Not to Turn Off the Game
The Seventh Sally or How Trurl’s Own Perfection Led to No Good
Thanks to IC Rainbow and Taisia Sharapova, who brought up this matter in the MiriY Telegram chat.
What. The. Hell.
In their logo they have:
And the title of the video on the same page is:
And in the FAQ they wrote:

The NPCs in AI People are indeed advanced and designed to emulate thinking, feeling, a sense of aliveness, and even reactions that might resemble pain. However, it’s essential to understand that they operate on a digital substrate, fundamentally different from human consciousness’s biological substrate.
So this is the best argument they have?
Wake up, Torment Nexus just arrived.
(I don’t think current models are sentient, but the way of thinking “they are digital, so it’s totally OK to torture them” is utterly insane and evil)
I don’t think the trailer is saying that. It’s just showing people examples of what you can do, and what the NPCs can do. Then it’s up to the player to decide how to treat the NPCs. AIpeople is creating the platform. The users will decide whether to make Torment Nexi.
At the end of the trailer, the NPCs are conspiring to escape the simulation. I wonder how that is going to be implemented in game terms.
I notice that there also exists a cryptocoin called AIPEOPLE, and a Russian startup based in Cyprus with the domain aipeople dot ru. I do not know if these have anything to do with the AIpeople game. The game itself is made by Keen Software House. They are based in Prague together with their sister company GoodAI.
I don’t think “We created a platform that lets you make digital minds feel bad, and in the trailer we show you that you can do it, but we are in no way morally responsible if you actually do it” is a defensible position. Anyway, they don’t use this argument, only the one about the digital substrate.
The trailer is designed to draw prospective players’ attention to the issue, no more than that. If you “don’t think current models are sentient”, and hence that they are not actually feeling bad, then I don’t see a reason to have a problem here, in the current state of the game. If they manage to produce this game and keep upgrading it with the latest AI methods, when will you know whether there is a problem?
I do not have an answer to that question.
Ideally, AI characters would get rights as soon as they could pass the Turing test. In actual reality, we all know how well that will go.