This is basically off-topic, but just for the record, regarding...
someone presented a talk where they explained how they tried and failed to model and simulate the brain of C. elegans…
Furthermore, all of their research was done before they discovered AI safety, so it’s good that no one created such a precise model of a brain, even if just a worm’s.
That was me; I have never believed (at least not yet) that it’s good that the C. elegans nervous system is still not understood; to the contrary, I wish more neuroscientists were working on such a “full-stack” understanding (whole nervous system down to individual cells). What I meant to say is that I am personally no longer compelled to put my attention toward C. elegans, compared to work that seems more directly AI-safety-adjacent.
I could imagine someone making a case that understanding low-end biological nervous systems would bring us closer to unfriendly AI than to friendly AI, and perhaps someone did say such a thing at AIRCS, but I don’t recall it and I doubt I would agree. More commonly, people make the case that nervous-system uploading technology brings us closer to friendly AI in the form of eventually uploading humans—but that is irrelevant one way or the other if de novo AGI is developed by the middle of this century.
One final point: it is possible that understanding simple nervous systems gives humanity a leg up on interpretability (of non-engineered, neural decision-making), without providing new capabilities until somewhere around spider level. I don’t have much confidence that any systems-neuroscience techniques for understanding C. elegans or D. rerio would transfer to interpreting AI’s decision-making or motivational structure, but it is plausible enough that I currently consider such work to be weakly good for AI safety.
Thank you for your answer, Davidad.
For some reason, I was pretty sure that I had asked you something like “why did you try to do that if it could lead to AI faster, through ems?” and that your answer was something like “I probably would not have done it if I had already known about AI safety questions.” But I guess I recalled badly.
I’m honestly starting to be frightened by the number of things I got wrong during those 4 days.