No, of course not. I’m saying that if you have miraculous brain-fixing techniques and deploy them as effectively as you know how to on yourself for years, then after those years you should surely (1) be conspicuously much happier / better adjusted / more rational / more productive than everyone else, and (2) not still need fixing all the time.
Yes, of course, because we all know that if you have a text substitution tool like ‘sed’, you should be able to fix all the bugs in a legacy codebase written over a period of 30-some years by a large number of people, even though you have no ability to list the contents of that codebase, in just a couple of years working part-time, while you’re learning about the architecture and programming language used. Yeah, that should be a piece of cake.
Oh yeah, and there are lots of manuals available, but we can’t tell you which ones sound sensible but were actually written by idiots who don’t know what they’re talking about, and which ones sound like they were written by lunatic channelers but actually give good practical information.
Plus, since the code you’re working on is your own head, you get to deal with compiler bugs and bugs in the debugger. Glorious fun! I highly recommend it. Not.
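(To put a bit of concreteness on the analogy: a blind text-substitution “fix” amounts to roughly the following. This is a minimal sketch in Python rather than sed, and the directory and patterns are made up for illustration.)

```python
# A minimal sketch, in Python rather than sed, of what a blind textual "fix"
# amounts to. The directory and patterns are made up for illustration.
import re
from pathlib import Path

def crude_patch(root: str, pattern: str, replacement: str) -> int:
    """Apply a regex substitution to every source file we can actually reach."""
    changed = 0
    for path in Path(root).rglob("*.c"):
        text = path.read_text(errors="ignore")
        new_text, hits = re.subn(pattern, replacement, text)
        if hits:
            path.write_text(new_text)
            changed += hits
    return changed

# crude_patch("legacy/", r"\bcolour\b", "color")  -- a pure text swap: it knows
# nothing about the architecture, and it can't touch code it can't enumerate.
```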
It certainly doesn’t help that I started out from a more f’d up place than most of my clients. I’ve had a few clients who’ve gotten one session with me or attended one workshop and then considered themselves completely fixed, and others who spent only a few months with me before deciding they were good to go.
It also doesn’t help that you can’t see your own belief frames as easily as you can see the frames of others. It’s easy to be a coach or guru to someone else. Ridiculously so, compared to doing it to yourself.
See, the thing is that you don’t just say “I’ve got some ways of tweaking how my brain works. They aren’t very good, and I don’t really have any understanding of what I’m doing, but I find this interesting.” (Which would be the equivalent of “I’ve got a text-substitution tool, and maybe there might be some way of using it to fix this undocumented 30-year-old ball of mud whose code I can’t read”.)
Which is not all that surprising, given that you’re trying to make a living from helping people fix their brains, and you wouldn’t get many clients by saying “I don’t really have any more idea what I’m doing than some newbie wannabe hacker trying to wrangle the source code for Windows with no tools more powerful than sed”. But I really don’t think you should both claim that you understand brains, know how to fix them, and have “miracle” techniques and so on and so forth, and then protest, as soon as that’s questioned, “oh, but really it’s like trying to work on an insanely complicated pile of legacy software with only crappy tools”.
See, the thing is that you don’t just say “I’ve got some ways of tweaking how my brain works. They aren’t very good, and I don’t really have any understanding of what I’m doing, but I find this interesting.” (Which would be the equivalent of “I’ve got a text-substitution tool, and maybe there might be some way of using it to fix this undocumented 30-year-old ball of mud whose code I can’t read”.)
Actually, I do say that; few of my blog posts do much more than describe some bug I found, explain what I did to fix it, and throw in some tips about the pitfalls involved.
But I really don’t think you should both claim that you understand brains, know how to fix them, and have “miracle” techniques and so on and so forth, and then protest, as soon as that’s questioned, “oh, but really it’s like trying to work on an insanely complicated pile of legacy software with only crappy tools”.
If I told you I had a miracle tool called a “wrench” that made it much easier to turn things, but said you had to find which pipes or bolts to turn with it, and whether they needed to be tightened or loosened, would you say that was a contradiction? Would you expect that having a wrench would instantly make you into a plumber, or an expert on a thousand different custom-built steam engines? That makes no sense.
Computer programmers have the same problem: what their clients perceive as “simple” vs. “difficult/miracle” is different from what is actually simple or a miracle for the programmer. Sometimes they’re the same, and sometimes not.
In the same way, many things that people on this forum consider “simple” changes can in fact be mind-bogglingly complicated to implement, while other things that they consider to be high-end Culture-level transhumanism are fucking trivial.
Funny story: probably the only reason I’m here is because in Eliezer’s work I recognized a commonality: the effort to escape the mind-projection fallacy. In his case, it was such projections applied to AI, but in my case, it’s such projections applied to self. As long as you think of your mind in non-reductionistic terms, you’re not going to have a useful map for change purposes.
(Oh, and by the way, I never claimed to “fix brains”—that’s your nomenclature. I change the contents of brains to fix bugs in people’s behavior. Brains aren’t broken, or at least aren’t fixable. They just have some rather nasty design limitations on the hardware level that contribute to the creation of bugs on the software level.)
I think this discussion is getting too lengthy and off-topic, so I shall be very brief. (I’ll also remark: I’m not actually quite as cynical about your claims as I am probably appearing here.)
If I told you I had a miracle tool called a “wrench” [...]
If you told me you had a miracle tool called a wrench, and an immensely complicated machine with no supporting documentation, whose workings you didn’t understand, and that you were getting really good results by tweaking random things with the wrench (note: they’d better be random things, because otherwise your analogy with an inexperienced software developer attacking an unmanageable pile of code that s/he can’t even see doesn’t work) … why, then, I’d say “Put that thing down and back away slowly before you completely fuck something up with it”.
I never claimed to “fix brains”—that’s your nomenclature.
Yes, that’s my nomenclature (though you did say “the code you’re working on is your own head”...), and I’m sorry if it bothers you. Changes to the “contents of brains”, IIUC, are mostly made by changing the actual brain a bit; the software/hardware distinction is nowhere near as clean as it is with digital computers.
(note: they’d better be random things, because otherwise your analogy with an inexperienced software developer attacking an unmanageable pile of code that s/he can’t even see doesn’t work)
It’s not that you can’t see the code at all, it’s that you can’t list all the code, or even search it except by a very restricted set of criteria. But you can single-step it in a debugger, viewing the specific instructions being executed at a given point in time. To single-step all the code would take a ridiculous amount of time, but if you can step through a specific issue, then you can make a change at that point.
Such single changes sometimes generalize broadly, if you happen to hit a “function” that’s used by a lot of different things. But as with any legacy code base, it’s hard to predict in advance how many things will need changing in order to implement a particular bugfix or new feature.
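To make that a little more concrete, here’s a toy illustration in Python of tracing just one specific scenario instead of trying to enumerate everything; the function names are invented for the example, not anything from an actual session.

```python
# A toy illustration: you can't list the whole "codebase", but you can trace
# what actually runs when you reproduce one specific issue, and change things
# at that point. All names here are invented for the example.
import sys

def trace_calls(frame, event, arg):
    if event == "call":
        print(f"entering {frame.f_code.co_name} at line {frame.f_lineno}")
    return trace_calls

def helper(x):
    # a "function" that happens to be used by a lot of different things
    return x * 2

def specific_issue():
    return helper(3) + 1

sys.settrace(trace_calls)   # single-step just this one scenario...
specific_issue()
sys.settrace(None)
# ...and a fix made inside `helper` would then generalize to everything
# else that calls it, which is hard to predict in advance.
```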
I’d say “Put that thing down and back away slowly before you completely fuck something up with it”.
Well, when I started down this road, I was desperate enough that the risk of frying something was much less than the risk of not doing anything. Happily, I can now say that the brain is a lot more redundant—even at the software level—than we tend to think. It basically uses a “when in doubt, use brute force” approach to computation. It’s inelegant in one sense, but VERY robust -- massively robust compared to any human-built hardware OR software.
It’s not that you can’t see the code at all, it’s that you can’t list all the code, or even search it except by a very restricted set of criteria. But you can single-step it in a debugger, viewing the specific instructions being executed at a given point in time. To single-step all the code would take a ridiculous amount of time, but if you can step through a specific issue, then you can make a change at that point.
Such single changes sometimes generalize broadly, if you happen to hit a “function” that’s used by a lot of different things. But as with any legacy code base, it’s hard to predict in advance how many things will need changing in order to implement a particular bugfix or new feature.
While I understand that the code/brain analogy is an analogy, I think you are significantly underplaying the dangers of doing this in a code base you do not understand. Roughly half of my job is fixing other people’s “fixes” because they really had no concept of what was happening or how to use the tools in the box correctly.
While I understand that the code/brain analogy is an analogy, I think you are significantly underplaying the dangers of doing this in a code base you do not understand.
Brain code doesn’t crash, and the brain isn’t capable of locking in a tight loop for very long; there are plenty of hardware-level safeguards that are vastly better than anything we’ve got in computers. Remember, too, that brains have to be able to program themselves, so the system is inherently both simple and robust.
In fact, brains weren’t designed for conscious programming as such. What “mind hacking” essentially consists of is deliberately directing the brain to information that convinces it to make its own programming changes, in the same way that it normally updates its programming—e.g. by noticing that something is no longer true, that a mistake in classification has been made, etc. (The key being that these changes have to be accomplished at the “near” thinking level, which operates primarily on simple sensory/emotional patterns, rather than on verbal abstractions.)
In a sense, to make a change at all, you have to convince the brain that what you are asking it to change to will produce better results than what it’s already doing. (Again, in “near”, sensory terms.) Otherwise, it won’t “take” in the first place, or else it will revert to the old programming or generate new programming once you get it “in the field”.
I don’t mean you have to convince the person, btw; I mean you have to convince the brain. Meaning, you need to give it options that lead to a prediction of improved results in the specific context you’re modifying. In a sense, it’d be like talking an AI into changing its source code; you have to convince it that the change is consistent with its existing high-level goals.
It isn’t exactly like that, of course—all these things are just metaphors. There isn’t really anything there to “convince”; it’s just that what you add into your memory won’t become the preferred response unless it meets certain criteria, relative to the existing options.
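If it helps, here’s a deliberately crude toy model of that “only takes if it predicts better results” criterion, sketched in Python; the context, actions, and payoff numbers are invented for illustration, not measurements of anything.

```python
# A deliberately crude toy model: a candidate response only displaces the
# existing one for a context if it predicts a better outcome there. The
# context names and payoff numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class Response:
    action: str
    predicted_payoff: float   # the "near"-level prediction for this context

responses = {"public_speaking": Response("avoid it", 0.3)}

def offer_alternative(context: str, candidate: Response) -> bool:
    """The candidate only 'takes' if it out-predicts what's already there."""
    current = responses.get(context)
    if current is None or candidate.predicted_payoff > current.predicted_payoff:
        responses[context] = candidate
        return True
    return False   # otherwise it reverts to the old programming

offer_alternative("public_speaking", Response("speak anyway", 0.2))  # doesn't take
offer_alternative("public_speaking", Response("speak anyway", 0.8))  # takes
```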
Truth be told, though, most of my work tends to be deleting code, not adding it, anyway. Specifically, removing false predictions of danger, and thereby causing other response options to bump up in the priority queue for that context.
For example, suppose you have an expert system that has a rule like “give up because you’re no good at it”, and that rule has a higher priority than any of the rules for performing the actual task. If you go in and just delete that rule, you will have what looks like a miraculous cure: the system now starts working properly. Or, if it still has bugs, they get ironed out through the normal learning process, not by you hacking individual rules.
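A toy version of that rule-priority picture, again in Python, with invented rules and priorities:

```python
# A toy version of the expert-system example: the highest-priority applicable
# rule wins, so deleting the one "give up" rule is enough to let the ordinary
# task rules start firing. Rules and priorities are invented for illustration.
rules = [
    (10, "give up because you're no good at it"),
    (5,  "break the task into steps"),
    (4,  "start on the first step"),
]

def next_action(rules):
    return max(rules, key=lambda r: r[0])[1]

print(next_action(rules))   # -> "give up because you're no good at it"

rules = [r for r in rules if "give up" not in r[1]]   # delete just that rule

print(next_action(rules))   # -> "break the task into steps"
```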
I suppose what I’m trying to say is that there isn’t anything I’m doing that brains can’t or don’t already do on their own, given the right input. The only danger in that is if you, say, motivated yourself to do something dangerous without actually knowing how to do that thing safely. And people do that all the time anyway.