That was about the mental image I had in mind, yeah.
And as Baughn said, I don’t think it’s possible, but I’m not entirely certain that it isn’t. More importantly, it doesn’t seem that implausible that at least one of the thousands of other ideas that sound about as overpowered and impossible, ideas we don’t have the faintest hope of thinking of ourselves, turns out to be possible. And one is all the AI needs.
Pointing out that a very low-probability scenario hasn’t been proved impossible, and is therefore worth considering, is roughly equivalent to pointing out that someone does win the lottery. Just as I wouldn’t listen to a financial adviser who at every meeting pointed out that I might soon win the lottery, it’s difficult to take seriously people warning me of the risks of things that I judge to be impossible. If you have significantly better advice than lottery tickets, I’m happy to hear it, but the argument that surely I could buy lots and lots of lottery tickets and one of them has to win is not particularly convincing.
You are confusing probability with difficulty; I’m pretty certain that, in some sense, random read-write patterns would eventually cause it to happen with some inconceivably low probability. The question is what’d be required to find the right pattern. Will quantum mechanics put a stop to it before it propagates more than a few atoms? Is there a pattern that’d do it, but finding that pattern would break thermodynamics? Is there a pattern that could be found, but finding it would require far more computing power than could ever be built in this universe, even utilized maximally? Maybe it’s technically within reach but would require a matrioshka brain. Or maybe, just maybe, almost certainly not, the kind of supercomputer the AI might get its hands on will be able to do it.
But yeah, the probability that this particular strategy would be practical is extremely small. That was not my point. My point is that the rough reference class of absurdly overpowered implausibilities, when you integrate over all the myriad different things in it, ends up with a decent chunk of probability put together.
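A toy calculation makes the aggregation point concrete. The numbers here are made up purely for illustration, not estimates of anything; the only point is that many individually negligible chances, taken together, need not be negligible:

```python
# Toy illustration of probability aggregating over a large reference class.
# n_ideas and p_each are hypothetical, illustrative values only.
n_ideas = 5000    # assumed number of "absurdly overpowered" ideas in the class
p_each = 1e-4     # assumed chance that any single idea turns out workable

# Probability that at least one idea works, assuming independence:
p_at_least_one = 1 - (1 - p_each) ** n_ideas
print(f"P(at least one works) = {p_at_least_one:.2f}")  # roughly 0.39
```

Under different (and equally made-up) inputs the total could of course come out tiny; the sketch just shows why integrating over the whole class matters more than any single member of it.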
The ‘femtotech from HDD’ thing is essentially shorthand for some kind of black swan. Considering how powerful mere human-level intelligence is, superintelligence seems certain to find something overpowered somewhere, even if it’s just hacking human minds.
FWIW, while my estimate of femtotech-through-weird-bit-patterns is effectively zero, my estimate of femtotech through rewriting the disk firmware and getting it to construct magnetic fields in no way corresponding to normal disk usage is... well, still near-zero, but several orders of magnitude higher.
If you didn’t consider overwriting the firmware as an immediately obvious option, you may be merely human. An AI certainly would, along with other options I wouldn’t think of. :-)
Once again, someone else has expressed my thoughts way better than I ever could.