Correct. I (unlike some others) don’t hold the position that a destructive upload followed by a simulated being is exactly the same being; therefore, in my mind, destructively scanning the porn actresses would be killing them. Non-destructively scanning them and then using the simulated versions for “evil purposes”, however, is not killing the originals. Whether using the copies for evil purposes, even against their simulated will, is actually evil is debatable. I know some will take the position that the simulations could theoretically be sentient; if they are sentient, then I am therefore de facto evil.
And I get the point that we want to get the AGI to do something; it’s just that I think it will be incredibly difficult to get it to do something if it’s recursively self-improving, and it becomes progressively more difficult the further you move away from defining friendly as NOT(unfriendly).
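To make the contrast concrete, here is a minimal toy sketch in Python (purely illustrative; the action names and both lists are made-up placeholders, not anything from the original discussion) of the difference between defining friendly as NOT(unfriendly), i.e. a blacklist, and a positive specification of friendliness:

    # Toy contrast between a NOT(unfriendly) blacklist and a positive
    # specification of friendliness. All names here are hypothetical.

    UNFRIENDLY = {"harm_humans", "seize_all_resources"}   # assumed blacklist

    def permitted_by_blacklist(action: str) -> bool:
        # NOT(unfriendly): anything not explicitly forbidden is allowed,
        # including doing nothing and any unforeseen action.
        return action not in UNFRIENDLY

    FRIENDLY = {"cure_disease", "answer_questions"}        # assumed whitelist

    def permitted_by_positive_spec(action: str) -> bool:
        # Positive specification: only actions explicitly judged friendly
        # are allowed, so every useful action has to be captured up front.
        return action in FRIENDLY

    for action in ["cure_disease", "do_nothing", "convert_earth_to_computronium"]:
        print(action,
              "| blacklist:", permitted_by_blacklist(action),
              "| positive spec:", permitted_by_positive_spec(action))

The blacklist is trivial to write but happily permits “do_nothing” and anything nobody thought to forbid; the positive specification rules those out, but only at the cost of defining everything we actually want done, which is the hard part being argued about here.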
Why is it recursively self-improving if it isn’t doing anything? If my end goal were not to do anything, I certainly wouldn’t need to modify myself in order to achieve that better than I can achieve it now.
Isn’t doing anything for us…
Well, I would argue that if the computer is running a perfect simulation of a person, then the simulation is sentient—it’s simulating the brain and is therefore simulating consciousness, and for the life of me I can’t imagine any way in which “simulated consciousness” is different from just “consciousness”.
I disagree. Creating a not-friendly-but-harmless AGI shouldn’t be any easier than creating a full-blown FAI. You’ve already had to do all the hard work of making it consistent while self-improving, and you’ve also had to do the hard work of programming the AI to recognise humans and to not do harm to them, while also acting on other things in the world. Here’s Eliezer’s paper.
OK, give me time to digest the jargon.