I'd like to say up front that I respect you both, but I think shminux is right that bhauth's article (1) doesn't make the point it needs to make to change the "belief about whether a set of 'mazes' exists whose collective solution gives nano" for many people working on nano, and (2) this is logically connected to the issue of "motivational stuff".
A key question is the "amount of work" necessary to make intellectual progress on nano (which is probably inherently cross-disciplinary), and thus it is implicitly connected to motivating the amount of work a human would have to put in. This could be shortened to just talking about "motivation", which is a complex thing to do for many reasons that the reader can surely imagine for themselves. And yet… I shall step into the puddle, and see how deep it might be!
I. Close To Object Level Nano Stuff
People who are hunting, intellectually, "among the countably infinite number of mazes whose solution and joining with other solved mazes would constitute a win condition by offering nano-enabling capacities" have already solved many of the problems raised in the OP, as explained in Thomas Kwa's excellent top level nano-solutions comment.
One of Kwa's broad overall points is "nano isn't actually going to be a biological system operating on purely aqueous chemistry", and this helps dodge a huge number of these objections, and has been a recognized part of the plan for "real nanotech" (i.e. molecular manufacturing, rather than the nanoparticle bullshit that sucked up all the nano grant money) since the 1980s.
If bhauth wants to write an object level follow-up post, I think it might be interesting to read an attempt to defend the claim: "Nanotechnology requires aqueous biological methods… which are incapable of meeting the demand". However, I don't think this is something bhauth actually agrees with, so maybe that point is moot?
II. What Kinds Of Psychologizing Might Even Be Helpful And Why??
I really respect your engagement here, bhauth, whether you:
(1) really want to advance nano, are helping with that, and this was truly your best effort, versus whether
(2) you are playing devil's advocate against nano plans and offered this up as an attempt to "say what a lot of the doubters are thinking quietly, where the doubters can upvote, and then also read the comments, and then realize that their doubts weren't as justified as they might have expected", or
(3) something more complex to explain rhetorically and/or motivationally.
There is a kind of courage in speaking in public, and confident ability to reason about object level systems, and enough faith in an audience that you're willing to engage deeply.
Also, there are skills for assessing the objectively most important ways technology can or should go in the future, and a willingness to work on such things in a publicly visible forum where it also generates educational and political value for many potential readers.
All these are probably virtues, and many are things I see in your efforts here, bhauth!
I can't read your mind, and don't mean to impose advice where it is not welcome, but it does seem like you were missing ideas that you would have had if you had spent the time to collect all the best "solved mazes" floating in the heads of most of the smartest people who want-or-wanted* to make nano happen?
III. Digressing Into A âMotivated Sociology Of Epistemicsâ For a Bit
All of this is to reiterate the initial point that effortful epistemics turns out to complexly interact with "emotional states" and with what we want-or-wanted* when reasoning about the costs of putting in different kinds of epistemic effort.
(* = A thing often happens, motivationally, where an aspiring-Oppenheimer-type really wants something, and starts collecting all the theories and techniques to make it happen… and then loses their desire part way through as they see more and more of the vivid details of what they are really actually likely to create given what they know. Often, as they become more able to apply a gears level analysis, and it comes more into near mode, and they see how it is likely to vividly interact with "all of everything they also care about in near mode", their awareness of certain "high effort ideas" becomes evidence of what they wanted, rather than what they still "want in the 'near' future".
(A wrinkle about "what 'near' means" arises with variation in age and motivation. Old people whose interests mostly end with their own likely death (like hedonic interests) will have some of the smallest ideas about what "near" is, and old people with externalized interests will have some of the largest ideas, in that they might care in a detailed way about the decades or centuries after their death, while having a clearer idea of what that might even mean than would someone getting a PhD or still gunning for tenure. Old externalized people are thus more likely to be "writing for the ages" with clearer/longer timelines. (And if anyone in these planning loops is an immortalist with uncrushed dreams, then what counts as "near in time" gets even more complicated.)))
I think shminux probably didn't have time to write all this out, but might be nodding along to maybe half of it so far? And I think unpacking it might help bhauth (and all the people upvoting bhauth here?) to level up more and faster, which would probably be good!
For myself, I could maybe tell a story where the reason I engaged here is, maybe, because I'm focused on getting a Win Condition for all of Earth, and all sapient beings (probably including large language model personas and potential-future aliens and so on?), and I think all good sapient beings with enough time and energy probably converge on collectively advancing the collective eudaemonia of all sentient beings (which I put non-zero credence on being a category that includes individual cells themselves).
Given this larger level goal, I think, as a sociological engineering challenge, it would logically fall out that it is super important for present day humans to help nucleate-or-grow-or-improve some kind of "long term convergent Win Condition Community" (which may have existed all the way back to Bacon, or even farther (and which probably explicitly needs to be able to converge with all live instances of similar communities that arise independently and stumble across each other)).
And given this, when I see two really smart people not seeming to understand each other, both making good points, in public, on LW, with wildly lopsided voting patterns…
...that is like catnip to a "Socio-Epistemic Progress Frame", which often seems, to me, to generate justifications for being specifically locally helpful and having that redound (via an admittedly very circuitous-seeming path) to extremely large long term benefits for all sentient beings?
I obviously can't mind read either of you, but when I suggested that bhauth might be doing "something even more rhetorically complex", it was out of an awareness that many such cases exist, and are probably helpful, even if wrong, so long as there is a relatively precise kind of good faith happening, where low-latency high-trust error correction seems to be pretty central to explicit/formal cognitive growth.
A hunch I have is that maybe shminux started in computer science, and maybe bhauth started in biology? Also I think exactly these kinds of collaborations are often very intellectually productive for both sides!
IV. In Praise Of CS/BIO Collaboration
From experience working in computational virology, out of a primary interest in studying the mechanisms of the smallest machines nature has so far produced (as a long term attack on being able to work on nano at an object level), I recognize some of the ways that these fields often have wildly different initial intuitions, based on distinctions like engineering/science, algorithms/empiricism, human/alien, and design/accretion.
People whose default is to "engineer (and often reverse-engineer) human-designed algorithms" and people whose default is to "empirically study accreted alien designs" have amazingly different approaches to thinking about "design".
Still, I think there are strong analogies across these fields.
To a CS person, "you need to patch a 2,000,000 line system written by people who are all now dead, against a new zero day security vulnerability, as fast as you can" is a very, very hard and advanced problem, but that's the simple STARTING position for essentially all biologically evolved systems, as a single step in a typical red queen dynamic… see for example hyperparasitic virophages for a tiny and more-likely-tractable example, where the "genetic code base" is relatively small, and generation times are minuscule. But there are a lot of BIO people, I think, who have been dealing with nearly impossible systems for so long that they have "given up" in some deep way on expecting to understand certain things, and I think it would help them to play with code to get more intuitions about how KISS is possible and useful and beautiful.
(And to be clear, this CS/BIO/design thing is just one place where differences occur between these two fields, and it might very well not be the one that is going on here, and a lot of people in those fields are likely to roll their eyes at bothering with the other one, I suspect? Because "frames" or "emotions" or "stances" or "motivations" or just "finite life", maybe? But from a hyper abstract Bayesian perspective, such motivational choices mean the data they are updating on has biases, so their posteriors will be predictably uncalibrated outside their "comfort zone", which is an epistemic issue.)
As a final note in praise of BIO/CS collaboration, it is probably useful to notice that current approaches to AI do not involve hand-coding any of it, but rather "summoning" algorithms into relatively-computationally-universal frameworks via SGD over data sets with enough Kolmogorov complexity that it becomes worthwhile to simply put the generating algorithm in the weights rather than try to store all the cases in the weights. This is, arguably, a THIRD kind of "summoned design" that neither CS nor BIO people are likely to have good intuitions for, but I suspect it is somewhere in the middle, and that mathematicians would be helpful for understanding it.
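The "store the generating algorithm vs. store all the cases" tradeoff can be made concrete with a deliberately toy sketch (the polynomial rule and the byte-counting are my own illustrative choices, not a claim about how SGD actually works): when a data set comes from a simple generator, a description of the generator is vastly cheaper than an enumeration of the cases.

```python
# Toy illustration of the "rule vs. cases" compression tradeoff.
# (A sketch only; real "summoned" designs in neural nets are far messier.)

def generator(x: int) -> int:
    """The hidden generating rule behind the 'data set'."""
    return 3 * x * x + 7

# "Store all the cases": an explicit lookup table for 10,000 inputs.
table = {x: generator(x) for x in range(10_000)}
table_bytes = len(repr(table).encode())

# "Store the generating algorithm": roughly, the source text of the rule.
rule_bytes = len("def generator(x): return 3 * x * x + 7".encode())

print(f"rule: {rule_bytes} bytes, table: {table_bytes} bytes")

# The rule is orders of magnitude smaller, so any capacity-limited
# learner is pushed toward representing the generator itself.
assert rule_bytes < table_bytes
```

The same pressure, applied by gradient descent to a weight budget instead of by a programmer to source code, is (roughly) why it can become "worthwhile to put the generating algorithm in the weights".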
V. In Closing
If this helps bhauth or shminux, or anyone who upvoted bhauth really hard, or anyone who downvoted shminux, that would make it worth the writing, based on what I'm directly aiming at, so long as the net harms to any such people are smaller (which is likely, because it is a wall-o-text that few will read unless (hopefully) they like it, and are getting something from reading it). Such is my hope.