Virtual Worlds doesn’t buy you any safety, even if the AI can’t break out of the simulator.
If you manage to make AI, you’ve got a Really Powerful Optimization Process. If it has worked out the simulated physics and has access to its own source, it’s probably smart enough to ‘foom’, even inside the simulation. At which point you have a REALLY powerful optimizer and no idea how to prove anything about its goal system. An untrustworthy genie.
Also, spending all those cycles on that kind of simulated world would be hugely inefficient.
James, you can’t blame me for responding to the question. Stuart has said that advice on giving up will not be accepted. The question is how to minimise the fallout if a lucky stroke moves this guy’s AI forward and it fooms. Both of my suggestions were aimed at that.
You are quite right.