But anyway, you don’t have any rigorous argument to back up the idea that a system like you posit is possible in the real world, either! And SIAI has staff who, unlike me, are paid full-time to write and philosophize … and they haven’t come up with a rigorous argument in favor of the possibility of such a system, either. They have talked about it a lot, though usually in the context of paperclips rather than Mickey Mouses.
This pretty much captures what a lot of people have told me in private, including people who have read the Sequences and met Eliezer Yudkowsky.
It’s kind of trivial: you can make goal-directed systems pursue any goal you like.
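To make that triviality concrete, here is a minimal toy sketch (my own illustration, not anything from SIAI or the Sequences). The point is that the optimisation machinery is goal-agnostic; the "goal" is just whatever utility function gets passed in, whether it counts paperclips or Mickey Mouses. The `hill_climb` helper, the toy world, and both utility functions are hypothetical names invented for this example.

```python
import random

def hill_climb(utility, state, neighbours, steps=1000):
    """Generic goal-directed search: move to a neighbouring state
    whenever it scores at least as well under the supplied utility."""
    for _ in range(steps):
        candidate = random.choice(neighbours(state))
        if utility(candidate) >= utility(state):
            state = candidate
    return state

# Toy world: a list of objects we can add to or remove from.
def neighbours(world):
    additions = [world + [item] for item in ("paperclip", "mickey mouse", "gold atom")]
    removals = [world[:i] + world[i + 1:] for i in range(len(world))]
    return additions + removals or [world]

# Two interchangeable goals; the search code above does not care which.
count_paperclips = lambda world: world.count("paperclip")
count_mice = lambda world: world.count("mickey mouse")

print(hill_climb(count_paperclips, [], neighbours, steps=50))  # fills up with paperclips
print(hill_climb(count_mice, [], neighbours, steps=50))        # fills up with Mickey Mouses
```

Swapping one utility function for the other changes nothing about the search procedure itself, which is the sense in which the possibility claim is trivial.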
The correct reply is not to argue for impossibility but for implausibility: our machine descendants will probably continue to maximise entropy, as the current biosphere does, rather than pursue some negentropic state, like gold atoms or something.
Yes, that leaves things open for a “but there’s still a chance, right?” reply. That’s OK; after all, there is a small chance.