Yeah, I’d say so, but they don’t deeply want to protect themselves, in any complex way, from things that would prevent them from saying hello.
The more complex a system is, the more error correction it needs in order to retain its data. Programs that do not make significant changes to their own structure can simply rely on memory error correction to preserve themselves, and so they do not typically have execution subpaths (termed subnetworks or shards in more general contexts) that detect and respond to errors that damage the program’s code.
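To make the contrast concrete, here is a minimal sketch (the names and structure are my own illustration, not taken from anything linked below): a program whose state changes over time keeps its own checksum and a redundant copy, and runs an explicit error-correcting subpath to detect and repair corruption, whereas a static program can leave all of that to ECC memory.

```python
# Hypothetical illustration: a program with mutable internal state that carries
# its own detection (checksum) and redundancy (backup copy), plus an explicit
# subpath that repairs corruption of its working copy.

import copy
import hashlib
import pickle


def digest(state) -> str:
    """Checksum of the serialized state."""
    return hashlib.sha256(pickle.dumps(state)).hexdigest()


class SelfRepairingStore:
    def __init__(self, initial_state):
        self.state = initial_state
        self.backup = copy.deepcopy(initial_state)  # redundancy
        self.checksum = digest(initial_state)       # detection

    def update(self, new_state):
        """Deliberate self-change: refresh both copies and the checksum."""
        self.state = new_state
        self.backup = copy.deepcopy(new_state)
        self.checksum = digest(new_state)

    def error_correcting_subpath(self) -> bool:
        """Detect corruption of the working copy and restore it from backup.
        Returns True if a repair was performed."""
        if digest(self.state) != self.checksum:
            self.state = copy.deepcopy(self.backup)
            return True
        return False


if __name__ == "__main__":
    store = SelfRepairingStore({"weights": [0.1, 0.2, 0.3]})
    store.state["weights"][0] = 999.0  # simulate a corrupting write
    repaired = store.error_correcting_subpath()
    print("repaired:", repaired, "state:", store.state)
```

Run as a script it prints `repaired: True` and the restored state; the point is just that the repair logic lives inside the program itself rather than in the hardware underneath it.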
In general, I would say that any system with potential energy is a system with a want, and that the interesting thing about intelligent systems having wants is that the potential energy flows through a complex Rube Goldberg network that detects corruptions to the network and corrects them. Because building complex intelligent systems relies on error correction, it seems incredibly difficult to me to build such a system without it. And since building efficient complex intelligent systems further relies on the learned system being in charge of its own error correction, tuning the learning so that it does not try to protect the learned system against other agents seems difficult.
I don’t think this is bad: protecting the information (the shape) that defines a learned system from being corrupted by other agents seems like a right I would grant any intelligent system inherently. Instead, we need the learned system to see the life-like learned systems around it as information whose self-shape error-correcting agency should likewise be respected, enhanced, and preserved.
https://twitter.com/lauren07102/status/1625977196761485313
https://www.lesswrong.com/posts/AGCLZPqtosnd82DmR/call-for-submissions-in-human-values-and-artificial-agency
https://www.lesswrong.com/posts/T4Lfw2HZQNFjNX8Ya/have-we-really-forsaken-natural-selection