Re: Intel and simulating microprocessors on further microprocessors.
“Simulating a hundred-million transistor chip design, using a smaller slower chip with a few gigabytes of RAM, or a clustered computer, would certainly be possible; and if stuck with 1998 hardware that’s exactly what Intel would do, and I doubt it would slow their rate of technological progress by very much.”
When you do microprocessor design, there’s a subtle distinction between simulating the VHDL/Verilog-level design, which is essentially a boolean-algebraic representation that later gets converted into the final circuits in terms of transistors and so on, and what I’ll call ‘functional testing’ for lack of a better name. Functional testing is closer to quality testing: you wire the chip up to test equipment, push bits into its I/O pins, and check what pops out, as a form of physical verification. For a chip with a 128-bit input interface, exhaustively verifying behavior means on the order of 2^128 test patterns; you are essentially walking through the ridiculously huge state table. In practice this is infeasible even in simulation (verification of all possible states), so the VHDL/Verilog/RTL-level analysis is what’s worth focusing on instead.
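To make the scale concrete, here is a toy sketch (my own illustration in Python, nothing resembling an actual verification flow): a bit-level ripple-carry model of a hypothetical n-bit adder, checked exhaustively against an arithmetic reference. The 8-bit sweep finishes in moments; the same approach at 128 bits is hopeless.

    # Toy illustration of why exhaustive functional testing blows up.
    # adder_dut is a hypothetical stand-in for the device under test:
    # a bit-level ripple-carry model of an n-bit adder.

    def adder_dut(a: int, b: int, n: int) -> int:
        result, carry = 0, 0
        for i in range(n):
            abit, bbit = (a >> i) & 1, (b >> i) & 1
            result |= (abit ^ bbit ^ carry) << i             # sum bit
            carry = (abit & bbit) | (carry & (abit ^ bbit))  # carry out
        return result

    def exhaustive_test(n: int) -> None:
        # Full input space: 2^n values per operand, 2^(2n) pairs total.
        for a in range(1 << n):
            for b in range(1 << n):
                assert adder_dut(a, b, n) == (a + b) % (1 << n)

    exhaustive_test(8)   # 65,536 operand pairs: finishes in moments
    # exhaustive_test(128) would mean 2^256 operand pairs. Even a mere
    # 2^128 single patterns at a billion per second is ~10^22 years.
    print(f"2**128 = {2**128:.3e} test patterns")

The point of the sketch is the loop structure, not the adder: enumeration scales as the size of the state table, which is why the RTL-level analysis, which reasons about the structure rather than enumerating its states, is where the effort goes.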
Bryan
Our ultimate fate may be one of doom, but it may also be exceedingly positive for us. Conceiving of bad possible outcomes does not by itself negate conceiving of good possible outcomes, nor the other way around. Doomsaying (steel-dystopianizing?) and steel-utopianizing are therefore not, by themselves, productive activities.
There has never been a guarantee of safety on our path, or any other lifeform’s path, through the evolutionary mists. Guaranteeing our path through the singularity to specific agreeable outcomes may not be possible even in a world where a positive singularity outcome is in fact later achieved; for all we know, that’s our world. And even if it’s always possible, in all possible worlds, to create such a guarantee, it’s not clear to me that working on theoretical (and practical) guarantees would have more utility than working on other positive technology developments instead. For example, even where such guarantees are possible in principle, it may not be possible to develop them in time to matter. And even if guarantees are universally possible before, you know, we actually need them implemented, focusing your work on them may still be the less optimal choice.
Some of those positive singularity outcomes may only be achievable in worlds where your followers and readers specifically neglect the very things you are advocating they spend their time on. Nobody really knows, not with any certainty.