First of all, there is the general problem of ‘does this AI work?’ This includes the general intelligence/rationality-related problems, but possibly also other problems, such as whether it will wirehead itself (whether a box can test that really depends a lot on the implementation).
The morality-stuff is tricky and depends on a lot of stuff, especially on how the AI is implemented. It seems to dangerous to let it play a multiplayer game with humans, even with most restrictions I can think of. However, how to test the morality really depends on how its human-detection system has been implemented. If it just uses some ‘humans generally do these stupid things’ heuristics, you can just plop down a few NPCs. If it uses somewhat smarter heuristics, you might be able to make some animals play the game and let the AI care for them. If it picks something intelligent, you might be able to instantiate other copies of the AI with vastly different utility functions. Basically, there are a lot of approaches to testing morality, but it depends on how the AI is implemented.
It would actually tell us a lot of useful things.
First of all, there is the general problem of ‘does this AI work?’ This includes the general intelligence/rationality-related problems, but possibly also other problems, such as whether it will wirehead itself (whether a box can test that really depends a lot on the implementation).
The morality-stuff is tricky and depends on a lot of stuff, especially on how the AI is implemented. It seems to dangerous to let it play a multiplayer game with humans, even with most restrictions I can think of. However, how to test the morality really depends on how its human-detection system has been implemented. If it just uses some ‘humans generally do these stupid things’ heuristics, you can just plop down a few NPCs. If it uses somewhat smarter heuristics, you might be able to make some animals play the game and let the AI care for them. If it picks something intelligent, you might be able to instantiate other copies of the AI with vastly different utility functions. Basically, there are a lot of approaches to testing morality, but it depends on how the AI is implemented.