While I understand that the code/brain analogy is an analogy, I think you are significantly underplaying the dangers of doing this in a code base you do not understand.
Brain code doesn't crash, and the brain isn't capable of getting stuck in a tight loop for very long; there are plenty of hardware-level safeguards that are vastly better than anything we've got in computers. Remember, too, that brains have to be able to program themselves, so the system is inherently both simple and robust.
In fact, brains weren't designed for conscious programming as such. What "mind hacking" essentially consists of is deliberately directing the brain's attention to information that convinces it to make its own programming changes, the same way it normally updates its programming: e.g., by noticing that something is no longer true, or that a classification mistake has been made. (The key is that these changes have to be made at the "near" thinking level, which operates primarily on simple sensory/emotional patterns, rather than on verbal abstractions.)
In a sense, to make a change at all, you have to convince the brain that the new behavior you're asking for will produce better results than what it's already doing. (Again, in "near", sensory terms.) Otherwise, the change won't "take" in the first place, or it will revert to the old programming or generate new programming once you get it "in the field".
I don’t mean you have to convince the person, btw; I mean you have to convince the brain. Meaning, you need to give it options that lead to a prediction of improved results in the specific context you’re modifying. In a sense, it’d be like talking an AI into changing its source code; you have to convince it that the change is consistent with its existing high-level goals.
It isn't exactly like that, of course; all these things are just metaphors. There isn't really anything there to "convince"; it's just that whatever you add to your memory won't become the preferred response unless it meets certain criteria, relative to the existing options.
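To make that "certain criteria" idea concrete, here's a minimal sketch in Python. The context, actions, and payoff numbers are all made up for illustration; the only thing it's meant to show is the comparison itself, that a new response only displaces the current one in a given context if its predicted result is better.

```python
# Toy sketch of the "meets certain criteria" point (purely illustrative;
# the context, actions, and payoff numbers are invented for this example).
responses = {"public speaking": {"action": "avoid it", "predicted_payoff": 0.2}}

def consider(context, action, predicted_payoff):
    """Adopt the new response only if it predicts a better result
    than whatever is currently preferred in this context."""
    current = responses.get(context)
    if current is None or predicted_payoff > current["predicted_payoff"]:
        responses[context] = {"action": action, "predicted_payoff": predicted_payoff}

consider("public speaking", "prepare and give the talk", 0.1)
print(responses["public speaking"]["action"])  # still "avoid it": didn't beat it

consider("public speaking", "prepare and give the talk", 0.7)
print(responses["public speaking"]["action"])  # now "prepare and give the talk"
```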
Truth be told, though, most of my work tends to be deleting code, not adding it, anyway. Specifically, removing false predictions of danger, and thereby causing other response options to bump up in the priority queue for that context.
For example, suppose you have an expert system that has a rule like “give up because you’re no good at it”, and that rule has a higher priority than any of the rules for performing the actual task. If you go in and just delete that rule, you will have what looks like a miraculous cure: the system now starts working properly. Or, if it still has bugs, they get ironed out through the normal learning process, not by you hacking individual rules.
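To spell out that expert-system analogy, here's a toy sketch in Python. The rule names and priorities are invented for illustration and aren't meant to model anything real: while the high-priority "give up" rule is present it always wins, and simply deleting it lets the ordinary task rules fire.

```python
# Toy illustration of the expert-system analogy (not a model of a real brain).
# Each rule is (priority, action); the system fires the highest-priority rule.
rules = [
    (90, "give up because you're no good at it"),  # false prediction of danger
    (50, "break the task into steps"),
    (40, "start on the first step"),
]

def select_action(rules):
    """Pick the action of the highest-priority rule."""
    return max(rules, key=lambda r: r[0])[1]

print(select_action(rules))  # -> "give up because you're no good at it"

# "Deleting the rule": remove the overriding rule and the task rules take over.
rules = [r for r in rules if not r[1].startswith("give up")]
print(select_action(rules))  # -> "break the task into steps"
```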
I suppose what I'm trying to say is that there isn't anything I'm doing that brains can't or don't already do on their own, given the right input. The only danger in that is if you, say, motivated yourself to do something dangerous without actually knowing how to do that thing safely. And people do that all the time anyway.