I am also for Wet Nanotech. But a different genetic code is not needed, or at least it is not the important part.
The main thing is to put a Turing-complete computer inside a living cell similar to E. coli, and to create a way for two-way communication with an external computer. Such a computer should be genetically encoded, so that when the cell replicates, the computer replicates too. The computer has to be able to get data from sensors inside the cell and output proteins.
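The core loop of such a cell-internal computer can be sketched abstractly as a state machine that maps sensor readings to protein outputs. This is a minimal toy model; all the names and thresholds here are illustrative assumptions, not real biology or a real implementation:

```python
# Toy sketch of the proposed in-cell computer: a finite state machine
# (genetically encodable in principle) that reads chemical sensor values
# and decides which protein to express. Sensor names, states, and
# thresholds are hypothetical, chosen only for illustration.

def cell_computer(state, sensors):
    """One update step: (state, sensor readings) -> (new state, output protein)."""
    if sensors["glucose"] > 0.5 and state == "idle":
        # Signal the external computer by expressing a reporter protein.
        return "active", "reporter_protein"
    if sensors["toxin"] > 0.1:
        return "defensive", "stress_protein"
    return state, None  # no output this step

state = "idle"
state, output = cell_computer(state, {"glucose": 0.8, "toxin": 0.0})
# state is now "active" and the cell outputs "reporter_protein"
```

The point is only that the required logic is simple and discrete, which is why it seems genetically encodable without anything like full dry nanotech.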
Building such Wet Nanotech is orders of magnitude simpler than real nanotech.
The main obstacle for an AI is the need to perform real-world experiments. In the classical EY paradigm, the first AI is so superintelligent that it does not need to perform any experiments: it can guess everything about the real world and will get everything right on the first attempt. But if the first AI is still limited by the amount of available compute, by its intelligence, or by some critical data, it has to run tests.
Running experiments takes time, and the AI is more likely to be caught during its first attempts. This will slow its ascent and may force it to choose paths where it cooperates with humans for longer periods of time.
You want to grow brains that work more like CPUs. The CPU computational paradigm is used because it’s conceptually easy to program, but it has some problems. Error tolerance is very poor; a CPU can be crashed by a single bit-flip from cosmic rays. CPUs also have less computational capacity than GPUs. Brains work more like... neural networks; perhaps that’s where the name came from.
No, I didn’t mean brains. I mean digital computers inside the cell; but they can use all the methods of error correction, including parallelism.
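One concrete form of error correction through parallelism is triple modular redundancy: run the same computation on three redundant copies and take a majority vote, so a single bit-flip in any one copy does not corrupt the result. A minimal sketch:

```python
# Sketch of error correction via redundancy (triple modular redundancy):
# three copies of a circuit compute the same bit; a majority vote masks
# a single-copy fault such as a cosmic-ray bit-flip.

from collections import Counter

def majority_vote(results):
    """Return the most common value among redundant copies."""
    return Counter(results).most_common(1)[0][0]

# One of the three redundant copies suffers a bit-flip (1 -> 0):
copies = [1, 1, 0]
corrected = majority_vote(copies)  # -> 1
```

This is the standard trick used in radiation-hardened electronics, and nothing about it requires the fragility of a single CPU core.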
Have you heard of the Arc protein? It’s conceivable that it’s responsible for transmitting digital information in the brain; if that were useful, the brain would plausibly be doing it, so I’d expect to see computation there too.
I sometimes wonder if this is openworm’s missing piece. But it’s not my field.
That is so freakin’ cool. Thank you for this link. Hadn’t heard about this yet…
...and yes, memory consolidation is on my list as “very important” for uploading people, to get a result where the ems are still definitely “full people” (with all the features which give presumptive confidence that they are “sufficient for personhood”, because the list has been constructed in a minimalist way: if the absence of one of the listed features did not “break personhood”, then not even normal healthy humans would count as “people”).
There is a new paper by Jeremy England that seems relevant:
Self-organized computation in the far-from-equilibrium cell
Recent progress in our understanding of the physics of self-organization in active matter has pointed to the possibility of spontaneous collective behaviors that effectively compute things about the patterns in the surrounding patterned environment.
https://aip.scitation.org/doi/full/10.1063/5.0103151