For what, specifically?
A brain emulation may want to modify itself so that when it multiplies numbers together, instead of its hardware emulating all the neurons involved, it performs the multiplication on a standard computer processor.
This would be far faster, more accurate, and less memory intensive.
Implementation would involve figuring out how the hardware would recognize the intention to perform a multiplication, represent the numbers digitally, and then present the answer back to the emulated neurons. This is outside the scope of any mechanism we might have for making changes within our own brains, since such a mechanism would not be able to modify the emulator.
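In software terms, the shortcut described here is just a hook in the emulation loop: detect the multiplication intent, decode the operands, compute natively, and encode the result back. A minimal toy sketch, with every name (the detector, the operand decoder, the result encoder) invented purely for illustration rather than taken from any real emulator interface:

    # Toy sketch of the "native multiplication" shortcut described above.
    # Every name here is hypothetical; none of this corresponds to a real
    # brain-emulator API.

    def step_with_multiplication_hook(brain_state, detect_multiplication_intent,
                                      read_operands, write_result, emulate_step):
        """Run one emulation step, shortcutting any recognized multiplication."""
        intent = detect_multiplication_intent(brain_state)
        if intent is not None:
            # Translate the neural representation into ordinary numbers...
            a, b = read_operands(brain_state, intent)
            # ...do the arithmetic on the host CPU instead of emulating neurons...
            product = a * b
            # ...and encode the answer back into the emulated neurons.
            write_result(brain_state, intent, product)
            return brain_state
        # Otherwise, fall back to faithfully emulating the neurons.
        return emulate_step(brain_state)

The host-side multiplication is the trivial part; as the comment says, the three callbacks (recognizing the intent, decoding the operands, re-encoding the answer) are where all the real work would be, and none of them can be implemented from inside the emulation by introspection alone.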
Cracking the protein folding problem, building nanotechnology, and reviving a cryonics patient at the highest possible fidelity. Redesigning the spaghetti code of the brain so as to permit it to live a flourishing and growing life rather than e.g. overloading with old memories at age 200.
I suppose you make a remarkable illustration of how people with no cosmic ambitions, brainwashed by the self-help industry, don’t even have any goals in life that require direct brain editing, and aren’t much willing to imagine them because it implies that their own brains are (gasp!) inadequate.
Is this your causal theory? Literally, that pjeby considered a goal that would have required direct brain editing, noticed that the goal would have implied that his brain was inadequate, felt negative self-image associations, and only then dropped the goal from consideration, and for no other reason? And further, that this is why he asked: “If you have a system that’s perfectly capable of making changes on its own, debugged by millions of years of evolution, why on earth would you want to bypass those safeties?”
I think that, where you are imagining direct brain editing done only with a formal, philosophically cross-validated theory of brain editing safety and only after a long enough delay to develop that theory, and where you imagine pjeby to be imagining direct brain editing done only with a formal, philosophically cross-validated theory of brain editing safety and only after a long enough delay to develop that theory, pjeby may be actually imagining someone who already has a brain-editing device and no safetiness theory, and who is faced with a short-range practical decision problem about whether to use the device when the option of introspective self-modification is available. pjeby probably has a lot of experience with people who have simple technical tools and are not reflective like you about whether they are safe to use. That is the kind of person he might be thinking of when he is deciding whether it would be better advice to tell the person to introspect or to use the brain editor.
(Also, someone other than me should have diagnosed this potential communication failure already! Do you guys prefer strife and ad-hominems and ill will or something?)
The x you get from
argmax_(x) U(x, y)
for fixed y is, in general, different from the x you get from
argmax_(x, y) U(x, y).
But this doesn’t mean you can conclude that the first argmax calculated U() wrong.
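For concreteness, here is a tiny numerical check of that point, using an arbitrary made-up utility function (nothing about it is specific to the brain-editing discussion):

    # Toy illustration: argmax over x with y held fixed vs. argmax over (x, y) jointly.
    # U is an invented utility function; the point is only that the two argmaxes
    # disagree even though U is evaluated correctly both times.
    import itertools

    def U(x, y):
        return -(x - y) ** 2 + y  # peaks where x == y, and prefers larger y

    xs = range(-5, 6)
    ys = range(-5, 6)

    y_fixed = 0
    x_star = max(xs, key=lambda x: U(x, y_fixed))                   # argmax_x U(x, y=0)
    xy_star = max(itertools.product(xs, ys), key=lambda p: U(*p))   # argmax_(x, y) U(x, y)

    print(x_star, xy_star)  # 0 (5, 5): different x, and neither evaluation of U was wrong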
Wow, somebody’s cranky today. (I could equally note that you’re an illustration of what happens when people try to build a technical solution to a human problem… while largely ignoring the human side of the problem.)
Solving cooler technical problems or having more brain horsepower sure would be nice. But as I already know from personal experience, just being smarter than other people doesn’t help, if it just means you execute your biases and misconceptions with greater speed and an increased illusion of certainty.
Hence, I consider the sort of self-modification that removes biases, misconceptions, and motivated reasoning to be both vastly more important and far more urgent than the sort that would let me think faster while retaining the exact same blind spots.
But if you insist on hacking brain hardware directly or in emulation, please do start with debugging support: the ability to see in real-time what belief structures are being engaged in reaching a decision or conclusion, with nice tracing readouts of all their backing assumptions. That would be really, really useful, even if you never made any modifications outside the ones that would take place by merely observing the debugger output.
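In ordinary software, the “tracing readout” being asked for is easy to mock up, which may be the clearest way to see the request. A toy sketch, with every name (Belief, consult, the example beliefs) invented for illustration; real brains of course expose no such interface:

    # Toy sketch of a "belief debugger": every belief consulted on the way to a
    # conclusion is logged along with the assumptions backing it.

    trace_log = []

    class Belief:
        def __init__(self, name, value, backed_by=()):
            self.name = name
            self.value = value
            self.backed_by = backed_by  # other Beliefs this one rests on

        def consult(self):
            # Record which belief was engaged and which assumptions back it.
            trace_log.append((self.name, [b.name for b in self.backed_by]))
            return self.value

    # A small decision that engages two beliefs, one backed by an unexamined assumption.
    follow_through = Belief("I always follow through", True,
                            backed_by=(Belief("My memory of past projects is accurate", True),))
    low_risk = Belief("This plan is low-risk", True)

    decision = follow_through.consult() and low_risk.consult()

    for name, assumptions in trace_log:
        print(f"engaged belief: {name!r}, backed by: {assumptions}")

Merely reading such a trace would already do some of the work, which is the comment’s point about modifications “that would take place by merely observing the debugger output.”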
If there were a motivator captioned “TECHNICAL SOLUTIONS TO HUMAN PROBLEMS”, I would be honored to have my picture appear on it, so thank you very much.
You left out the “ignoring the human part of the problem” part.
The best technical solutions to human problems are the ones that leverage the natural behaviors of humans, rather than trying to replace those behaviors with a perfect technical process or system, or trying to force the humans to conform to expectations.
(I’d draw an analogy with Nelson’s Xanadu vs. the web-as-we-know-it, but that could be mistaken for a pure Worse Is Better argument, and I certainly don’t want any motivated superintelligences being built on a worse-is-better basis.)
Wow, what hubris: the “brain is inadequate spaghetti code.” Tell me, have you ever actually studied neuroscience? Where do you think modern science came from? This inadequate spaghetti code has given us the computer, modern physics, and plenty of other things. For inadequate spaghetti code (really a misnomer, since we don’t actually understand the brain well enough to make that judgement), it does pretty well.
If the brain is as bad as you make it out to be, then I challenge you to make a better one. In fact, I challenge you to make a computer capable of as many operations as the brain while running on as little power as the brain does. If you can’t do better, then you are no better than the people who go around bashing General Relativity without being able to propose something better.
I accept your challenge. See you in a while.
Awesome.
I look forward to it. (Though I doubt I will ever see it, considering how long you’ve been saying you were going to make an FAI and how little progress you have actually made.) But maybe you’re pulling a Wolfram: you’ll go off and work alone for 10 years and then dazzle everyone with your theory.
I don’t think there’s actually any substantive disagreement here. “Good,” “bad,” “adequate,” “inadequate”—these are all just words. The empirical facts are what they are, and we can only call them good or bad relative to some specific standard. Part of Eliezer’s endearing writing style is holding things to ridiculously impossibly high standards, and so he has a tendency to mouth off about how the human brain is poorly designed, human lifespans are ridiculously short and poor, evolutions are stupid, and so forth. But it’s just a cute way of talking about things; we can easily imagine someone with the same anticipations of experience but less ambition (or less hubris, if you prefer to say that) who says, “The human brain is amazing; human lives are long and rich; evolution is a wonder!” It’s not a disagreement in the rationalist’s sense, because it’s not about the facts. It’s not about neuroscience; it’s about attitude.
While my sample size is limited, I have noticed a distinct correlation between engaging in hubris and levelling the charge at others. Curious.
For calibration, see The Power of Intelligence.
“The Power of Intelligence”
Derivative drivel...
The post shows the exact same lack of familiarity with neuroscience as the comment I responded to. Examine closely how a single neuron functions and the operations that it can perform. Examine closely the abilities of savants (things like memory, counting in primes, calendar math...), and after a few years of reading the current neuroscience research, come back and we might have something to discuss.