Hey, interesting post.
Artificial General Intelligence has nothing to do with simulating brains.
The approaches are different and the mathematical formulations are different. We’re slowly moving toward sparsity for some things (which is similar to how a brain works), but still.
I don’t think you are calibrated properly about the ideas that are most commonly shared in the LW community.
Nobody is saying “we will get a brain simulator so good that it will kill us.” That’s not the point.
The point is that we can create agents in other ways, and those agents can still kill us, no brain simulation required.
We address this argument. In our opinion, AGI has a lot to do with simulating brains, since an agent of similar or higher complexity has to be created; there can be no shortcut.
A deep learning network with 10^7 nodes will not outperform a brain with 10^11 neurons, especially since each biological neuron is itself highly complex.
We are not arguing that a brain simulation will or will not take over, but that an agent capable of taking over would have to use a similar amount of energy to the brain, or even several orders of magnitude less. And that is unrealistic.
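For a sense of the scale and energy gap this reply is pointing at, here is a minimal back-of-envelope sketch. The neuron and node counts come from the reply itself; the ~20 W brain power figure and the 400 W accelerator figure are rough, commonly cited ballpark numbers added purely for illustration, not claims from the original comment.

```python
import math

# Figures taken from the reply above
brain_neurons = 1e11       # neurons in a human brain
network_nodes = 1e7        # nodes in the hypothetical deep learning network

# Rough ballpark figures added for illustration (not from the comment)
brain_power_w = 20         # commonly cited estimate of whole-brain power draw
accelerator_power_w = 400  # typical high-end GPU/accelerator under load

scale_gap = math.log10(brain_neurons / network_nodes)
energy_ratio = accelerator_power_w / brain_power_w

print(f"Neuron-count gap: ~{scale_gap:.0f} orders of magnitude")           # ~4
print(f"Power ratio: ~{energy_ratio:.0f}x (one accelerator vs the brain)")  # ~20x
```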
I haven’t read the paper, so sorry if this is explained there, but I disagree with the assumption that the human brain is the minimum possible size for an agent. The human brain has constraints that do not apply to electronic, non-evolved agents.
As an example, my external hard disk drive has a capacity of 1.5 TB. How many bytes of information can a human brain store reliably? How many human-style neurons would we need to simulate in order to create a human-like agent capable of memorizing 1.5 TB of arbitrary data reliably? My point is that simply building the 1.5 TB external HDD, plus some interface with the rest of the brain if necessary, is several orders of magnitude cheaper than trying to use a human-like neuron architecture for the same purpose.
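To put rough numbers on the “reliable, arbitrary data” point, a quick sketch. It uses the record for reciting memorized digits of pi (on the order of 70,000 digits) as a loose proxy for the upper bound of reliable human rote storage; that proxy and the bits-per-digit conversion are illustrative assumptions, not figures from the comment.

```python
import math

hdd_bytes = 1.5e12                    # 1.5 TB, from the comment above
pi_record_digits = 70_000             # roughly the record for recited digits of pi
bytes_per_digit = math.log2(10) / 8   # ~0.42 bytes of information per decimal digit

human_rote_bytes = pi_record_digits * bytes_per_digit  # ~29 KB

print(f"HDD capacity:               {hdd_bytes:.1e} bytes")
print(f"Champion-level rote memory: ~{human_rote_bytes:.1e} bytes")
print(f"Gap: ~{math.log10(hdd_bytes / human_rote_bytes):.0f} orders of magnitude")
```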
Possible additional advantages for a silicon intelligence (besides storage):
It can view its own neurons and edit their weights or configuration.
It can be copied, and plausibly copy itself.
Its memory/storage can be shared/copied/backed up.
It may have access to better, higher-fidelity sensors.
We evolved to perform a set of tasks suited to a hunter-gatherer environment. It can be trained or configured to perform a set of tasks that is more optimised for today’s world.
It has access to more energy.
It can perform operations faster (Bostrom writes that biological neurons operate at 200 Hz).
It can send signals internally faster (Bostrom writes that axons carry action potentials at 120 m/s, which is 6 OOMs slower than the speed of light; see the quick check after this list).
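A quick arithmetic check of the two speed figures in the last bullets above. The 2 GHz clock rate is a stand-in for a modern processor, not a number from the comment; the other figures are the ones cited from Bostrom.

```python
import math

neuron_rate_hz = 200      # Bostrom's figure for biological neuron firing rate
clock_rate_hz = 2e9       # assumed modern silicon clock rate (illustrative)
axon_speed_mps = 120      # Bostrom's figure for action potential propagation
light_speed_mps = 3e8     # speed of light, approximate

print(f"Clock vs neuron rate: ~{math.log10(clock_rate_hz / neuron_rate_hz):.0f} OOMs")   # ~7
print(f"Light vs axon speed:  ~{math.log10(light_speed_mps / axon_speed_mps):.0f} OOMs") # ~6
```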
Reliable? Your hard disk will be unreadable before long, while the human brain has developed ways to pass information down over generations.
AGI doesn’t necessarily have anything to do with simulating brains, but a brain simulation would count as AGI if you could do it.
>I don’t think you are calibrated properly about the ideas that are most commonly shared in the LW community.
This is chastising him for failure to abide by groupthink.
The rest of your comment makes a point that is undermined by this statement.
I don’t think I wrote that statement with that particular intention in mind.
I’m not trying to imply he is wrong because he doesn’t know our “groupthink”; I was just generally annoyed at how he started the post, so I wanted to be reasonably civil, but a bit mean.
Thanks for noticing; I’m not convinced I should have refrained from that particular comment, though.
What would you have said?
I would suggest removing “I don’t think you are calibrated properly about the ideas that are most commonly shared in the LW community” and presenting your argument without speaking for the whole community.
>What would you have said?
Your comment is stronger without this sentence.