Time-accelerated research with forking seems like the only safe (sane) thing one could do with WBEs. The human brain breaks easily if anything too fancy is attempted. Not to mention that unlike UFAIs, which tile the universe with paperclips or something equally inane, insane neuromorphs might do something even worse, à la “I Have No Mouth And I Must Scream”. The problem of using fiction as evidence is obvious, but since most fictional UFAIs are basically human minds with a thin covering of what the author thinks AIs work like, I don’t think the failure mode is so overwhelming here; unlike a “true” AGI, a badly designed neuromorph might well feel resentment over low tribal status, for example. The risk of running a negative utilitarian or a deep ecologist as an emulation whose mind might be severely affected by unknown factors is something Captain Obvious would be very enthusiastic about.
Even simple time acceleration with reasonable sensory input might have unforeseen problems when the rest of the world runs in slow motion. Multiply those problems if there is only one mind instead of several that can still have some kind of social life with each other. Modify the brain to remove the need for human interaction and you’re back to the problems of the previous paragraph.
Now, considering the obvious advantages of copying the best AI researchers and running them multiple times faster than real-time, it would be quite reasonable to expect AGI to follow quickly after accelerated WBE. This could mean that WBE might never become very widespread; the organization that had the first fast ems would probably focus on stacking supercomputing power to take over the future light cone instead of just releasing the tech to take over the IT / human enhancement sector.
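To make the “AGI follows quickly” intuition concrete, here is a minimal back-of-the-envelope sketch in Python. The copy count and speedup are made-up illustrative numbers, not figures from the post; the only point is that effective research effort scales multiplicatively with both.

```python
# Back-of-the-envelope sketch of why accelerated ems compress AGI timelines.
# All concrete numbers below are illustrative assumptions, not claims from the post.

def effective_researcher_years(copies: int, speedup: float, calendar_years: float) -> float:
    """Subjective researcher-years produced by `copies` emulated researchers
    running at `speedup` times real-time for `calendar_years` of wall-clock time."""
    return copies * speedup * calendar_years

# Assumed: 100 copies of a top researcher, each at 1000x real-time, for one calendar year.
print(effective_researcher_years(copies=100, speedup=1000.0, calendar_years=1.0))
# -> 100000.0 subjective researcher-years packed into a single calendar year.
```

Even with far more modest assumptions, the first organization with fast ems gets research progress on a timescale the rest of the world simply cannot match.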
If the organization is sane, it could significantly increase the chance of FAI, as the responsible researchers would have both a speed and a numbers advantage over the non-WBE scenario. On the other hand, if the first emulators don’t focus on creating human-compatible AGI, it is the chance of failure that grows massively.