Well, I would argue that if the computer is running a perfect simulation of a person, then the simulation is sentient—it’s simulating the brain and is therefore simulating consciousness, and for the life of me I can’t imagine any way in which “simulated consciousness” is different from just “consciousness”.
I think it will be incredibly difficult to get it to do anything specific if it's recursively self-improving, and it becomes progressively harder the further you move away from defining friendly as NOT(unfriendly).
I disagree. Creating a not-friendly-but-harmless AGI shouldn’t be any easier than creating a full-blown FAI. You’ve already had to do all the hard work of making it consistent while self-improving, and you’ve also had to do the hard work of programming the AI to recognise humans and to not do harm to them, while also acting on other things in the world. Here’s Eliezer’s paper.
OK, give me time to digest the jargon.