Some specific comments about uploads:
a. First, a more reasonable estimate of the speedup is on the order of 10^6 to 10^8, depending on a few factors. If we assume 5 GHz switching speeds and dedicate a discrete circuit to every single simulated neuron, that gives, in theory, a speedup of 25 million versus human switching speeds on the order of 200 Hz (sanity-checked in the sketch after this point). Some neurons are faster than this, however, and the simulation needs enough discrete timesteps to resolve the small differences in spike arrival times that are a major factor in signaling, which eats into that raw ratio. The upper end of the range is likely possible with much faster nano-scale components, again with dedicated hardware circuits for every component. These kinds of speeds are also only practical if our neuroscience understanding is good enough that fine cellular details need not be simulated, merely neural network nodes with a large number of parameters.
If general-purpose CPUs inside a supercomputer are used instead, even achieving simulation speeds close to realtime would be very difficult.
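A rough sanity check on that arithmetic, as a minimal Python sketch. The 5 GHz and 200 Hz figures are from the text above; the timestep penalty factor is my own illustrative assumption for the cost of resolving fine spike-arrival timing, not a claim about the actual requirement.

    # Back-of-the-envelope speedup for dedicated per-neuron hardware.
    CLOCK_HZ = 5e9          # assumed switching speed of dedicated circuits
    NEURON_HZ = 200         # human neuron switching speed, order of magnitude
    TIMESTEP_PENALTY = 25   # assumption: extra ticks needed per neural event
                            # to resolve fine spike-arrival timing

    raw_speedup = CLOCK_HZ / NEURON_HZ             # 2.5e7, i.e. 25 million
    conservative = raw_speedup / TIMESTEP_PENALTY  # ~1e6, the low end

    print(f"raw speedup:           {raw_speedup:.1e}")   # 2.5e+07
    print(f"with timestep penalty: {conservative:.1e}")  # 1.0e+06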
b. Working from the assumption that the speed advantage is 10^6, predicting superhuman capabilities for the uploads seems reasonable. A single uploaded entity wouldn't get just 12 PhDs; they could get every PhD. Presumably, they'd enroll in every PhD program in the world simultaneously and send some kind of robotic avatar to class. By rapidly switching between many robot avatars, queuing up commands for each one, they could do many slow tasks at once.
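To put numbers on that, a minimal sketch; the avatar fleet size here is a hypothetical figure of mine, not from the text.

    # Subjective time budget of a 10^6-speed mind spread across slow avatars.
    SPEEDUP = 1_000_000   # subjective seconds per wall-clock second
    N_AVATARS = 10_000    # assumed fleet size (illustrative only)

    # One wall-clock day is SPEEDUP subjective days:
    print(f"{SPEEDUP / 365.25:,.0f} subjective years per wall-clock day")
    # -> 2,738 subjective years per wall-clock day

    print(f"{SPEEDUP / N_AVATARS:.0f} subjective seconds of attention "
          f"per avatar, per wall-clock second")
    # -> 100 subjective seconds of attention per avatar, per wall-clock second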
c. These kinds of speedups allow levels of micromanagement that do not exist today. For example, suppose the uploaded entity has a factory that produces some kind of assembly robot. As the factory runs, the entity observes every last motion of the machines in the factory and can adjust settings and motions for optimal performance. So now the assembly robots are coming off the production line and going to work. The entity might notice a design flaw in the first batch and make dozens of changes to correct it, so 5 minutes later version 1.1 of the bot is the one coming off the line. But 1.1 has other drawbacks, so 5 minutes after that, it's version 1.2. (The sketch after this point puts those 5-minute cycles into subjective time.)
Or, in even more detail: the entity understands the current task at such fine granularity that EVERY robot coming off the assembly line is slightly customized to use the available resources as efficiently as possible.
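For scale, here is what one of those 5-minute revision cycles looks like from the inside, assuming the 10^6 speedup from point b:

    # Subjective design time inside each 5-minute wall-clock revision cycle.
    SPEEDUP = 1_000_000
    WALL_MINUTES = 5

    subjective_minutes = WALL_MINUTES * SPEEDUP
    subjective_years = subjective_minutes / (60 * 24 * 365.25)
    print(f"{subjective_years:.1f} subjective years per revision cycle")
    # -> 9.5 subjective years per revision cycle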
I do acknowledge the author's basic point. An entity that can think 1 million times faster would not be able to advance technology at 1 million times the current speed. R&D cannot be done purely in simulation: physical experiments must be performed, and physical prototypes must be built and tested against the real world. However, the speedups would still be large enough that a world with high-speed uploaded entities would soon look very different from a world without them.