I assumed as much and this is where the whole premise breaks down.
The “self-improvement” aspect doesn’t need immediate control over the immediate direct input to the deception detector. It can color the speech recognition, the Bayesian filters, the databases containing foments and linguistic itself… and twist those parameters to shape a future signal in a desired fashion.
Since “self-improvement” can happen at any layer and propagate the results to subsequent middleware, paranoid protections over the most immediate relationship between the deception detector and the CPU is inconsequential. This is a “self-improving” AI, after all. It can change its own internals at will… well… at my will. :D
Now, to be fair, I wrote an entire book about the idea of an AI intentionally lying to people when everyone else though their moralistic programming was the overriding factor. Never released the book, however… ;D
Uhhhh I actually program artificial intelligence....?
You do know that the ability to modify your own code (“self-modifying”) applies to every layer in the OSI model, each layer potentially influencing the data in transit… the data that determines the training of the classifiers...
I assumed as much and this is where the whole premise breaks down.
The “self-improvement” aspect doesn’t need immediate control over the immediate direct input to the deception detector. It can color the speech recognition, the Bayesian filters, the databases containing foments and linguistic itself… and twist those parameters to shape a future signal in a desired fashion.
Since “self-improvement” can happen at any layer and propagate the results to subsequent middleware, paranoid protections over the most immediate relationship between the deception detector and the CPU is inconsequential. This is a “self-improving” AI, after all. It can change its own internals at will… well… at my will. :D
Now, to be fair, I wrote an entire book about the idea of an AI intentionally lying to people when everyone else though their moralistic programming was the overriding factor. Never released the book, however… ;D
Technology isn’t magic. There are limits and constrains.
Uhhhh I actually program artificial intelligence....?
You do know that the ability to modify your own code (“self-modifying”) applies to every layer in the OSI model, each layer potentially influencing the data in transit… the data that determines the training of the classifiers...
You do know this… right?
What does the OSI model have to do with this?
I’m talking about a hypervisor operating system. Hardware which monitors the computing substrate which runs the AI.
(And yes, I write AI code as well.)