If such a person were to write a similar post and actually write the way they feel, rather than being incredibly polite, things would look very different.
I’m assuming you think they’d come in, scoff at our arrogance for a few pages, and then waltz off. Disregarding how many employed machine learning engineers also do side work on general intelligence projects, you’d probably get the same response from an automobile engineer, someone with a track record and field expertise, talking to the Wright Brothers. Thinking about new things and new ideas doesn’t automatically make you wrong.
That recursive self-improvement is nothing more than a row of English words, a barely convincing fantasy.
Really? Because that’s a pretty strong claim. If I knew how the human brain worked well enough to build one in software, I could certainly build something smarter. You could increase the number of slots in working memory. Tweak the part of the brain that handles intuitive math to correctly deal with orders of magnitude. Improve recall to eidetic levels. Tweak the brain’s handling of probabilities to be closer to the Bayesian ideal. Even those small changes would likely produce a mind smarter than any human being who has ever lived. That, plus the potential for exponential subjective speedup, is already dangerous. And that’s assuming that the mind that results would see zero new insights that I’ve missed, which is pretty unlikely. Even if the curve bottoms out fairly quickly, after only a generation or two that’s STILL really dangerous.
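To make the “curve bottoms out” point concrete, here is a minimal toy sketch (my own illustration, not anything from the thread): suppose each generation of self-improvement gives a multiplicative gain that shrinks rapidly toward nothing, and look at where a couple of generations still leave you relative to the human baseline. Every number here is made up purely for illustration.

```python
# Toy model of self-improvement with sharply diminishing returns.
# All parameters are hypothetical; this only illustrates the shape of the argument.

def improvement_curve(generations, initial_gain=1.5, decay=0.5):
    """Capability relative to the human baseline (1.0). Each generation
    multiplies capability by `gain`, and the gain itself shrinks toward
    1.0 by a factor of `decay`, i.e. returns diminish quickly."""
    capability = 1.0
    gain = initial_gain
    for g in range(1, generations + 1):
        capability *= gain
        gain = 1.0 + (gain - 1.0) * decay  # diminishing returns each generation
        print(f"generation {g}: {capability:.2f}x human baseline")
    return capability

improvement_curve(generations=4)
# generation 1: 1.50x human baseline
# generation 2: 1.88x human baseline
# generation 3: 2.11x human baseline
# generation 4: 2.24x human baseline
```

Even in this deliberately pessimistic toy version the curve flattens almost immediately, yet the result is still well past the unaided baseline, which is the shape of the claim above.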
Worst of all, you are completely unconvincing and do not even notice it because there are so many other people who are strongly and emotionally attached to the particular science fiction scenarios that you envision.
Really makes you wonder how all those people got convinced in the first place.
If I knew how the human brain worked well enough to build one in software, I could certainly build something smarter.
This is totally unsupported. To quote Lady Catherine de Bourgh, “If I had ever learned [to play the piano], I should have become a great proficient.”
You have no idea whether the “small changes” you propose are technically feasible, or whether these “tweaks” would in fact mean a complete redesign. For all we know, if you knew how the human brain worked well enough to build one in software, you would appreciate why these changes are impossible without destroying the rest of the system’s functionality.
After all, it would appear that (say) eidetic recall would provide a fitness advantage. Given that humans lack it, there may well be good reasons why.
“totally unsupported” seems extreme. (Though I enjoyed the P&P shoutout. I was recently in a stage adaptation of the book, so it is pleasantly primed.)
What the claim amounts to is the belief that:
a) there exist good design ideas for brains that human evolution didn’t implement, and
b) a human capable of building a working brain at all is capable of coming up with some of them.
A seems pretty likely to me… at least, the alternative (our currently evolved brains are the best possible design) seems so implausible as to scarcely be worth considering.
B is harder to say anything clear about, but given our experience with other evolved systems, it doesn’t strike me as absurd. We’re pretty good at improving the stuff we were born with.
Of course, you’re right that this is evidence and not proof. It’s possible that we just can’t do any better than human brains for thinking, just like it was possible (but turned out not to be true) that we couldn’t do any better than human legs for covering long distances efficiently. But it’s not negligible evidence.
I don’t doubt that it’s possible to come up with something that thinks better than the human brain, just as we have come up with something that travels better than the human leg. But to cover long distances efficiently, people didn’t start by replicating a human leg, and then tweaking it. They came up with a radically different design—e.g. the wheel.
I don’t see the evidence that knowing how to build a human brain is the key step in knowing how to build something better. For instance, suppose you could replicate neuron function in software, and then scan a brain map (Robin Hanson’s “em” concept). That wouldn’t allow you to make any of the improvements to memory, maths, etc, that Dolores suggests. Perhaps you could make it run faster—although depending on hardware constraints, it might run slower. If you wanted to build something better, you might need to start from scratch. Or, things could go the other way—we might be able to build “minds” far better than the human brain, yet never be able to replicate a human one.
But it’s not just that evidence is lacking—Dolores is claiming certainty in the lack of evidence. I really do think the Austen quote was appropriate.
To clarify, I did not mean having the data to build a neuron-by-neuron model of the brain. I meant actually understanding the underlying algorithms those slabs of neural tissue are implementing. Think less understanding the exact structure of a bird’s wing, and more understanding the concept of lift.
I think, with that level of understanding, the odds that a smart engineer (even if it’s not me) couldn’t find something to improve seem low.
I agree that I might not need to be able to build a human brain in software to be able to build something better, as with cars and legs.
And I agree that I might be able to build a brain in software without understanding how to do it, e.g., by copying an existing one as with ems.
That said, if I understand the principles underlying a brain well enough to build one in software (rather than just copying it), it still seems reasonable to believe that I can also build something better.