Also, even if our brains don’t have an explicit definition for it, it’s still implicitly there. You can’t not define a goal for an AI: the only question is whether you do it explicitly or implicitly.
Can we make a system that has a human as (part of) an implicit definition of its goal system? Once you allow implicit definitions, the human and machine parts don’t need to be in the same place, although some information will need to flow between them.
In principle? Sure.
In practice? I have no idea.
I’m not sure if I am making myself clear, so just to check: I am interested in exploring systems where a human is an important computational component (not just something pointed at) of an implicit goal system for an advanced computer system.
Because the human part is implicit, a given system might fail to make the correct inference and judge the human to be important. If there were a society of these systems, and we engineered things correctly, then most of them would make the correct inference, judge the human an important part of their goal system, and might be able to exert pressure on those that didn’t.
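To make the distinction concrete, here is a toy sketch in Python (every name is hypothetical; this is only an illustration of the structure I have in mind, not a design):

```python
# Toy sketch only; all names are hypothetical. The point is the structural
# difference between a goal that merely *refers to* a human and a goal whose
# evaluation *runs through* a human.

from typing import Callable, Dict

WorldState = Dict[str, float]  # stand-in for whatever world model the system uses


def goal_pointed_at(state: WorldState,
                    human_model: Callable[[WorldState], float]) -> float:
    """'Pointed at': the goal consults the AI's own stored model of the human.

    Evaluation completes entirely inside the machine; no information needs to
    flow to or from the actual person at runtime.
    """
    return human_model(state)


def goal_with_human_component(state: WorldState,
                              machine_estimate: Callable[[WorldState], float],
                              ask_human: Callable[[WorldState], float]) -> float:
    """'Computational component': the goal cannot be evaluated without the human.

    ask_human is a live channel to the person, so their judgement is part of
    the computation itself. This is why some bandwidth between the two parts
    is unavoidable, even though they need not be co-located.
    """
    return 0.5 * machine_estimate(state) + 0.5 * ask_human(state)


# Example: the second goal literally blocks on the human's answer.
if __name__ == "__main__":
    value = goal_with_human_component(
        {"paperclips": 3.0},
        machine_estimate=lambda s: s["paperclips"],
        ask_human=lambda s: float(input("How good is this state, 0-10? ")),
    )
    print(value)
```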
Does that make more sense?
Okay. I thought that you meant something like that, but this clarified it.
I’m not sure why you think it’s better to build a society of these systems than to build just a single one. It seems to just make things more difficult: instead of trying to make sure that one AI does things right, we need to make sure that the overall dynamic that emerges from a society of interacting AIs does things right. That sounds a lot harder.
A few reasons.
1) I am more skeptical of a singleton take-off. While I think it is possible, I don’t think it is likely that humans will be able to engineer it.
2) Logistics. If identity requires high-bandwidth data connections between the two parts, it would be easier to have a distributed system.
3) Politics. I doubt politicians will trust anyone to build a giant system to look after the world.
4) Letting the future take care of itself. If the systems do consider the human a part of themselves, then they might be better placed to figure out an overarching way to balance everyone’s needs.