I think I’ve figured out a basic neural gate. I will do my best to describe it.
Four nerves: A, B, X, Y. A has its tail connected to X; B has its tail connected to X and Y. If A fires, X fires. If B fires, X and Y fire. If A fires and then B fires, X will fire and then Y will fire (X needs a small amount of time to reset, so B will only be able to activate Y). If B fires and then A fires, X and Y will fire at the same time.
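Here's a toy simulation of that four-neuron gate, assuming one time step per firing and a one-step refractory period for X (both timing assumptions are mine, not part of the description above):

```python
def run(schedule, steps=4, refractory=1):
    """schedule maps time step -> set of input neurons ('A'/'B') firing."""
    x_cooldown = 0   # steps until X can fire again
    events = []      # (time, neuron) firing log for X and Y
    for t in range(steps):
        inputs = schedule.get(t, set())
        if x_cooldown > 0:
            x_cooldown -= 1            # X is still resetting; its input is lost
        elif inputs & {'A', 'B'}:      # A and B both connect to X
            events.append((t, 'X'))
            x_cooldown = refractory
        if 'B' in inputs:              # only B connects to Y
            events.append((t, 'Y'))
    return events

# A then B: X fires, then only Y (X still resetting).
print(run({0: {'A'}, 1: {'B'}}))   # [(0, 'X'), (1, 'Y')]
# B then A: X and Y fire together at t=0; A's later spike is absorbed.
print(run({0: {'B'}, 1: {'A'}}))   # [(0, 'X'), (0, 'Y')]
```

Which output pattern you see depends entirely on the firing order, which is what makes this a temporal gate rather than a plain AND.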
This feels like it could be similar to the AND circuit. Just like modern electronics need AND, OR, and NOT, if I could find all the nerve gates I’d have all the parts needed to build a brain. (or at least a network-based computer)
How familiar are you with this area? I think that this sort of thing is already well-studied, but I have only vague memories to go by.
As an aside, you only need (AND and NOT) or (OR and NOT), not all three; and if you have NAND or NOR, either of those is sufficient by itself.
I’m a computer expert but a brain newbie.
The typical CPU is built from n-NOR, n-NAND, and NOT gates. The NOT gate works like a 1-NAND or a 1-NOR (they’re the same thing, electronically). Everything else, including AND and OR, is made from those three. The actual logic only requires NOT and {1 of AND, OR, NAND, NOR}. Notice there are several minimal gate sets and a larger set of gates actually in use.
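The universality claim above is easy to check directly. Here's a sketch building NOT, AND, and OR out of nothing but NAND:

```python
def nand(a, b):
    return not (a and b)

def NOT(a):
    return nand(a, a)            # a 1-input NAND acts as NOT

def AND(a, b):
    return NOT(nand(a, b))       # AND = NOT(NAND)

def OR(a, b):
    return nand(NOT(a), NOT(b))  # De Morgan: a OR b = NOT(NOT a AND NOT b)

# Verify against Python's built-in boolean operators over all inputs.
for a in (False, True):
    for b in (False, True):
        assert AND(a, b) == (a and b)
        assert OR(a, b) == (a or b)
```

The same construction works with NOR as the base gate, which is why either one alone is a sufficient minimal set.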
The brain (I’m theorizing now, I have no background in neural chemistry) has a similar set of basic gates that can be organized into a Turing machine, and the gate I described previously is one of them.
We don’t run on logic gates. We run on noisy differential equations.
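To illustrate what "noisy differential equations" means here, a minimal sketch of a leaky integrate-and-fire neuron, Euler-integrated with additive noise. All constants are arbitrary toy values, not measured biology:

```python
import random

def simulate(input_current, steps=1000, dt=0.1, tau=10.0,
             threshold=1.0, noise=0.05, seed=0):
    """Count spikes from a leaky integrate-and-fire unit driven by a constant current."""
    rng = random.Random(seed)
    v = 0.0        # membrane potential
    spikes = 0
    for _ in range(steps):
        # dv/dt = (-v + I) / tau, plus a small Gaussian noise term
        v += (-v + input_current) / tau * dt + rng.gauss(0, noise) * dt
        if v >= threshold:   # spike, then reset
            spikes += 1
            v = 0.0
    return spikes

print(simulate(1.5))   # strong current: fires repeatedly
print(simulate(0.0))   # no current: stays silent
```

Firing here is a continuous rate that depends on the input current and the noise, not a clean 0/1 output, which is the point of the objection.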
No.
You can represent logic gates using neural circuits, and use them to describe arbitrary finite-state automata that generalize into Turing-complete automata in the limit of infinite size (or by adding an infinite external memory), but that’s not how the brain is organized, and it would be difficult to have any learning in a system constructed in this way.
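The "you can represent logic gates using neural circuits" part is the classic McCulloch-Pitts construction: threshold units with hand-picked (not learned) weights, sketched here for illustration:

```python
def unit(weights, threshold):
    """A McCulloch-Pitts-style neuron: fires iff the weighted input sum reaches threshold."""
    return lambda *inputs: sum(w * x for w, x in zip(weights, inputs)) >= threshold

AND = unit((1, 1), 2)    # needs both inputs active
OR  = unit((1, 1), 1)    # needs at least one
NOT = unit((-1,), 0)     # an inhibitory weight inverts its input

assert AND(1, 1) and not AND(1, 0)
assert OR(0, 1) and not OR(0, 0)
assert NOT(0) and not NOT(1)
```

The hand-picked weights are exactly the problem the reply points at: wiring fixed gates gives you computation but no obvious mechanism for learning.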
You might want to look into what’s called ANN—artificial neural networks.
ANNs don’t begin to scratch the surface of the scale or complexity of the human brain.
Not that they’re not fun as toy models, or useful in their own right, just remember that they are oblivious to all human brain chemistry, and to chemistry in general.
Of course, but Cube is talking about “a similar set of basic gates that can be organized into a Turing machine” which looks like an ANN more than it looks like wetware.