Typo has been fixed.
I don’t see anything fundamentally wrong with Voldemort’s approach. To identify and destroy those horcruxes, the protagonists surely spent a significant amount of time, at great personal expense. To me, it had already achieved the intended effect.
In cryptography, Shamir’s Secret Sharing Scheme (SSSS) is the same idea: the algorithm splits an encryption key into multiple shares, which can then be guarded by different trustees. The encryption key, and hence the secret information, can only be unlocked when most or all trustees are compromised or agree to release their shares. This is extremely useful for many problems, and it also foreshadowed a new subfield of cryptography called Secure Multi-Party Computation (MPC). I think it’s fair to call this a product of the “true deep security mindset”.
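To make the threshold mechanism concrete, here is a minimal Python sketch of Shamir’s (k, n) scheme over a prime field; the chosen prime, the integer-encoded secret, and the function names are my own illustrative assumptions, not any standard library’s API.

```python
# A minimal sketch of Shamir's (k, n) threshold secret sharing over a prime
# field, for illustration only -- the prime, the share format, and the lack
# of any authenticated encoding are simplifying assumptions.
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a 16-byte secret

def split(secret: int, k: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any k of them can reconstruct it."""
    # Random polynomial of degree k-1 with the secret as the constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Recover the secret by Lagrange interpolation at x = 0."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split(123456789, k=3, n=7)          # seven "horcruxes"
assert reconstruct(shares[:3]) == 123456789  # any three of them suffice
```

The assertion at the end shows the threshold property: any three of the seven shares recover the key, while fewer than three reveal nothing about it.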
Yudkowsky said that “seven keys hidden in different places [in the filesystem]” is silly because the keys are not conditionally independent: the entire filesystem could be bypassed altogether, and an attacker who can find the first key is likely able to find the next key as well.
[...] the chance of obtaining the seventh key is not conditionally independent of the chance of obtaining the first two keys. If I can read the encrypted password file, and read your encrypted encryption key, then I’ve probably come up with something that just bypasses your filesystem and reads directly from the disk.
But Shamir’s shares, like Voldemort’s horcruxes, are essentially uncorrelated with each other and cannot be bypassed. I think the different shapes and forms of Voldemort’s horcruxes are actually a good demonstration of “security through diversity”: intentionally decorrelate the redundant parts of the system, e.g. don’t use the same operating system, don’t trust the same people. The Tor Project identified the Linux monoculture as a security risk and encourages people to run more FreeBSD and OpenBSD relays.
Thus, I think not mentioning Voldemort’s horcruxes is a correct decision. Misguided reliance on redundancy is merely “ordinary paranoia”, and a dangerous one: attaching 7 locks to a breakable door, or adding secure secret sharing to a monolithic kernel, probably does little to improve security (even with conditionally independent keys), and the Tor Project’s platform diversity makes only a small (but still useful) contribution to its overall network security, since all relays run the same Tor executable. Nevertheless, redundancy itself can be “deep security”.
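To put rough numbers on the conditional-independence point (the probabilities below are my own illustrative assumptions, not anything from the original essay):

```python
# Toy numbers: an attacker who compromises each of seven independent shares
# with probability 0.1 obtains all of them with probability 0.1**7, while
# seven keys behind one bypassable filesystem tend to fall together, so the
# effective probability stays near 0.1.
p = 0.1
print(f"independent shares: {p**7:.0e}")   # 1e-07
print(f"correlated shares:  {p:.0e}")      # 1e-01
```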
Thanks for the info. Your comment is the reason why I’m on LessWrong.
The lack of progress here may actually be quite a good thing.
Did I miss some subtle cultural changes at LW?
I know the founding principles of LW have been rationalism and AI safety from the start. But in my mind, LW has always had all kinds of adjacent topics and conversations, with many different perspectives; at least that was my impression of the 2010s LW threads on the Singularity and transhumanism. Did these discussions become more and more focused on AI safety and de-risking over time?
I’m not a regular reader of LW, so any explanation would be greatly appreciated.
Optogenetics was exactly the method proposed by David; I’ve just updated the article and included a full quote.
I originally thought my post was already a mere summary of the previous LW posts by jefftk, that excessive quotation would make it too unoriginal, and that interested readers could simply follow the links to read more. But I’ve just realized that giving sufficient context is important when you’re restarting a forgotten discussion.
David believed optogenetic techniques could be developed to do this. I’ve just added David’s full quote to the post.
With optogenetic techniques, we are just at the point where it’s not an outrageous proposal to reach for the capability to read and write to anywhere in a living C. elegans nervous system, using a high-throughput automated system. It has some pretty handy properties, like being transparent, essentially clonal, and easily transformed. It also has less handy properties, like being a cylindrical lens, being three-dimensional at all, and having minimal symmetry in its nervous system. However, I am optimistic that all these problems can be overcome by suitably clever optical and computational tricks.
Quoting jefftk:
To see why this isn’t enough, consider that nematodes are capable of learning. [...] For example, nematodes can learn that a certain temperature indicates food, and then seek out that temperature. They don’t do this by growing new neurons or connections, they have to be updating their connection weights. All the existing worm simulations treat weights as fixed, which means they can’t learn. They also don’t read weights off of any individual worm, which means we can’t talk about any specific worm as being uploaded.
If this doesn’t count as uploading a worm, however, what would? Consider an experiment where someone trains one group of worms to respond to stimulus one way and another group to respond the other way. Both groups are then scanned and simulated on the computer. If the simulated worms responded to simulated stimulus the same way their physical versions had, that would be good progress. Additionally you would want to demonstrate that similar learning was possible in the simulated environment.
(just included the quotation in my post)
Did David build an automated device to collect data from living cells? If not, was the reason it wasn’t done because of some sudden unexpected huge difficulty that 100+ people and multi-million dollar budget couldn’t solve, or was it because...those people weren’t there and neither was the funding?
Good points. I did more digging and found some relevant information I initially missed; see “Update”. He didn’t, and funding was indeed a major factor.
One review criticized my post for being inadequate at world modeling, saying readers who wish to learn more about predictions are better served by other books and posts (though it also praised my willingness to update the content after new information arrived). I don’t disagree, but I felt it was necessary to clarify the background of why I wrote it.
First and foremost, this post was meant specifically as (1) a review of the research progress on Whole Brain Emulation of C. elegans, and (2) a request for more information from the community. I first became aware of this research project about 10 years ago on Wikipedia, and like everyone else, I thought it would be a milestone of transhumanism. At the beginning of 2020 I remembered it again: it seemed to be stuck in development hell forever, for no clear reason. Frustrated, I decided to find out why.
I knew transhumanism had always been an active topic on LessWrong, so naturally I came here and searched the posts. The result was both disappointing and encouraging: there was no up-to-date post beyond the original one from 2010, but I found that some of the researchers are LW members, and it seemed likely that I could learn more by asking here, perhaps with an “insider” answer.
Thus, I made the original post. It was not, at all, intended as an exercise in forecasting or world modeling. Then I was completely surprised by the post’s reception. Despite being the first post I had ever made, it received a hundred upvotes within days, and it was later selected by the editors as a “Curated” homepage post. It even became reading material at offline meetups! This was completely unexpected. As a result, jefftk, the lead researcher of one of the projects, personally answered my questions. I updated my post to incorporate new information as it arrived.
In conclusion, the post fully served its purpose of gathering new information about the research progress in this field. However, the research I did for the original post (before the update) was incomplete: I completely missed the 2014 review and a 2020 mention, both of which were literally on LW. If I had known the post would be selected as Curated, widely read by a large audience as an exercise in world modeling, and would win a prize, I would have been more patient with my initial research before publishing the original version.