That framing is unnatural to me. I see “solving a problem” as being more like solving several mazes simultaneously. Finding or seeing dead ends in a maze is both a type of progress towards solving the maze and a type of progress towards knowing if the maze is solvable.
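To make that concrete with a toy (the grid, and everything about it, is just something I made up for illustration), here is a minimal Python sketch of breadth-first search over a maze: it terminates with either a path or a certificate of unsolvability, and every dead end it exhausts along the way is progress toward whichever answer turns out to be true.

```python
from collections import deque

def solve_maze(grid, start, goal):
    """BFS over a grid of '.' (open) and '#' (wall).
    Returns ('solved', path) or ('unsolvable', cells_ruled_out).
    Every cell we exhaust is progress toward BOTH outcomes: it
    either extends a frontier toward the goal, or it joins the
    set of cells proven not to lead anywhere new."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return ('solved', path[::-1])
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == '.' and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    # Frontier exhausted: the visited set is now a PROOF of unsolvability.
    return ('unsolvable', set(came_from))

maze = ["..#.",
        ".##.",
        "....",
        ".#.#"]
print(solve_maze(maze, (0, 0), (2, 3)))
```

If you wall off the goal, the same loop returns the full set of explored cells as its proof that no path exists, which is exactly the “knowing if the maze is solvable” kind of progress.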
I’d like to say up front that I respect you both, but I think shminux is right that bhauth’s article (1) doesn’t make the point it needs to make to change the “belief about whether a set of ‘mazes’ exists whose collective solution gives nano” for many people working on nano, and (2) this is logically connected to the issue of “motivational stuff”.
A key question is the “amount of work” necessary to make intellectual progress on nano (which is probably inherently cross-disciplinary), and thus the question is implicitly connected to motivating the amount of work a human would have to put in. This could be shortened to just talking about “motivation”, which is a complicated thing to discuss for many reasons that the reader can surely imagine for themselves. And yet… I shall step into the puddle, and see how deep it might be! 🙃
I. Close To Object Level Nano Stuff
People who are hunting, intellectually, “among the countably infinite number of mazes whose solution, joined with other solved mazes, would constitute a win condition by offering nano-enabling capacities” have already solved many of the problems raised in the OP, as explained in Thomas Kwa’s excellent top level nano-solutions comment.
One of Kwa’s broad overall points is “nano isn’t actually going to be a biological system operating on purely aqueous chemistry”, and this helps dodge a huge number of these objections; it has been a recognized part of the plan for “real nanotech” (i.e. molecular manufacturing rather than the nanoparticle bullshit that sucked up all the nano grant money) since the 1980s.
If bhauth wants to write an object level follow-up post, I think it might be interesting to read an attempt to defend the claim: “Nanotechnology requires aqueous biological methods… which are incapable of meeting the demand”. However, I don’t think this is something bhauth actually agrees with, so maybe that point is moot?
II. What Kinds Of Psychologizing Might Even Be Helpful And Why??
I really respect your engagement here, bhauth, whether you:
(1) really want to advance nano, are helping with that, and this was truly your best effort, or whether
(2) you are playing a devil’s advocate against nano plans and offered this up as an attempt to “say what a lot of the doubters are thinking quietly, where the doubters can upvote, and then also read the comments, and then realize that their doubts weren’t as justified as they might have expected”, or
(3) something more complex to explain rhetorically and/or motivationally.
There is a kind of courage in speaking in public, a confident ability to reason about object level systems, and enough faith in an audience that you’re willing to engage deeply.
Also, there are skills for assessing the objectively most important ways technology can or should go in the future, and a willingness to work on such things in a publicly visible forum where doing so also generates educational and political value for many potential readers.
All these are probably virtues, and many are things I see in your efforts here, bhauth!
I can’t read your mind, and don’t mean to impose advice where it is not welcome, but it does seem like you were missing ideas that you would have had if you had spent the time to collect all the best “solved mazes” floating in the heads of most of the smartest people who want-or-wanted* to make nano happen?
III. Digressing Into A “Motivated Sociology Of Epistemics” For A Bit
All of this is to reiterate the initial point that effortful epistemics turns out to complexly interact with “emotional states” and with what we want-or-wanted* when reasoning about the costs of putting in different kinds of epistemic effort.
(* = A thing often happens, motivationally, where an aspiring-Oppenheimer-type really wants something, and starts collecting all the theories and techniques to make it happen… and then loses their desire part way through as they see more and more of the vivid details of what they are really actually likely to create given what they know. Often, as they become more able to apply a gears level analysis, and it comes more into near mode, and they see how it is likely to vividly interact with “all of everything they also care about in near mode”, their awareness of certain “high effort ideas” becomes evidence of what they wanted, rather than what they still “want in the ‘near’ future”.
(A wrinkle about “what ‘near’ means” arises with variation in age and motivation. Old people whose interests mostly end with their own likely death (like hedonic interests) will have some of the smallest ideas about what “near” is, and old people with externalized interests will have some of the largest ideas in that they might care in a detailed way about the decades or centuries after their death, while having a clearer idea of what that might even mean than would someone getting a PhD or still gunning for tenure. Old externalized people are thus more likely to be “writing for the ages” with clearer/longer timelines. (And if anyone in these planning loops is an immortalist with uncrushed dreams, then what counts as “near in time” gets even more complicated.)))
I think shminux probably didn’t have time to write all this out, but might be nodding along to maybe half of it so far? And I think unpacking it might help bhauth (and all the people upvoting bhauth here?) to level up more and faster, which would probably be good!
For myself, I could tell a story where the reason I engaged here is, maybe, that I’m focused on getting a Win Condition for all of Earth, and all sapient beings (probably including large language model personas and potential-future aliens and so on?), and I think all good sapient beings with enough time and energy probably converge on collectively advancing the eudaemonia of all sentient beings (which I put non-zero credence on being a category that includes individual cells themselves).
Given this larger goal, I think, as a sociological engineering challenge, it logically falls out that it is super important for present day humans to help nucleate-or-grow-or-improve some kind of “long term convergent Win Condition Community” (which may have existed all the way back to Bacon, or even farther (and which probably explicitly needs to be able to converge with all live instances of similar communities that arise independently and stumble across each other)).
And given this, when I see two really smart people not seeming to understand each other and both making good points, in public, on LW, with wildly lopsided voting patterns…
...that is like catnip to a “Socio-Epistemic Progress Frame”, which often seems, to me, to generate justifications for being specifically locally helpful, and for having that redound (via an admittedly very circuitous-seeming path) to extremely large long term benefits for all sentient beings?
I obviously can’t mind read either of you, but when I suggested that bhauth might be doing “something even more rhetorically complex” it was out of an awareness that many such cases exist, and are probably helpful, even if wrong, so long as there is a relatively precise kind of good faith happening, where low-latency high-trust error correction seems to be pretty central to explicit/formal cognitive growth.
A hunch I have is that maybe shminux started in computer science, and maybe bhauth started in biology? Also, I think exactly these kinds of collaborations are often very intellectually productive for both sides!
IV. In Praise Of CS/BIO Collaboration
From experience working in computational virology out of a primary interest in studying the mechanisms of the smallest machines nature has so far produced (as a long term attack on being able to work on nano at an object level), I recognize some of the ways that these fields often have wildly different initial intuitions, based on distinctions like engineering/science, algorithms/empiricism, human/alien and design/accretion.
People whose default is to “engineer (and often reverse-engineer) human-designed algorithms” and people whose default is to “empirically study accreted alien designs” have amazingly different approaches to thinking about “design” 😂
Still, I think there are strong analogies across these fields.
Like, to a CS person, “you need to patch a 2,000,000 line system written by people who are all now dead, against a new security zero-day, as fast as you can” is a very very hard and advanced problem, but like… that’s the simple STARTING position for essentially all biologically evolved systems, as a single step in a typical red queen dynamic… see hyperparasitic virophages for a tiny and more-likely-tractable example, where the “genetic code base” is relatively small, and generation times are minuscule. But there are a lot of BIO people, I think, who have been dealing with nearly impossible systems for so long that they have “given up” in some deep way on expecting to understand certain things, and I think it would help them to play with code to get more intuitions about how KISS is possible and useful and beautiful.
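To gesture at what “playing with code” might look like here, this is a deliberately KISS toy I made up (every parameter is arbitrary): a host and a parasite bitstring each hill-climb against the other’s current state, so the printed “match” score wanders without ever settling, even though both genomes never stop adapting. That is the red queen starting position, in about twenty lines.

```python
import random

random.seed(0)
N = 16  # genome length (arbitrary)

def mutate(genome):
    # Flip one random bit.
    i = random.randrange(N)
    return genome[:i] + [1 - genome[i]] + genome[i + 1:]

def matches(parasite, host):
    # The parasite "infects" wherever its bits match the host's.
    return sum(p == h for p, h in zip(parasite, host))

host, parasite = [0] * N, [0] * N
for step in range(20):
    # The parasite keeps a mutation if it tracks the host at least as well...
    trial = mutate(parasite)
    if matches(trial, host) >= matches(parasite, host):
        parasite = trial
    # ...and the host keeps a mutation if it escapes at least as well.
    trial = mutate(host)
    if matches(parasite, trial) <= matches(parasite, host):
        host = trial
    print(step, matches(parasite, host))
```

The point of the toy is only that the score goes nowhere in particular while both genomes keep changing: neither side ever gets to stop patching.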
(And to be clear, this CS/BIO/design thing is just one place where differences occur between these two fields, and it might very well not be the one that is going on here, and a lot of people in those fields are likely to roll their eyes at bothering with the other one, I suspect? Because “frames” or “emotions” or “stances” or “motivations” or just “finite life”, maybe? But from a hyper abstract Bayesian perspective, such motivational choices mean the data they are updating on has biases, so their posteriors will be predictably uncalibrated outside their “comfort zone”, which is an epistemic issue.)
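To make that calibration point concrete, here is a small sketch (the scenario and all the numbers are invented): an observer runs a textbook Beta-Bernoulli update on coin flips, but “tails” falls outside their comfort zone and only gets recorded half the time. The update rule is flawless, and the posterior is still predictably wrong.

```python
import random

random.seed(1)
true_p = 0.3        # true probability of heads
alpha, beta = 1, 1  # Beta(1, 1) uniform prior

for _ in range(10_000):
    heads = random.random() < true_p
    # Motivated filter: tails land outside the "comfort zone",
    # so they only get recorded half the time.
    if not heads and random.random() < 0.5:
        continue
    if heads:
        alpha += 1
    else:
        beta += 1

# The Bayesian update itself was flawless; the input stream was biased.
print("posterior mean:", alpha / (alpha + beta))  # ~0.46, true value is 0.3
```

The bias never touches the math; it only touches which data the math ever sees, which is the sense in which motivational choices become epistemic issues.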
As a final note in praise of BIO/CS collaboration, it is probably useful to notice that current approaches to AI do not involve hand-coding any of it, but rather “summoning” algorithms into relatively-computationally-universal frameworks via SGD over data sets with enough Kolmogorov complexity that it becomes worthwhile to simply put the generating algorithm in the weights rather than try to store all the cases in the weights. This is, arguably, a THIRD kind of “summoned design” that neither CS nor BIO people are likely to have good intuitions for, but I suspect it is somewhere in the middle, and that mathematicians would be helpful for understanding it.
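As a cartoon of “putting the generating algorithm in the weights” (a made-up minimal case, nothing like a real training run): SGD on two parameters recovers a linear generator from 200 sampled cases, after which the two learned numbers can stand in for an arbitrarily large table of (x, y) pairs.

```python
import random

random.seed(2)
# Hidden generator: y = 3x + 2. A lookup table would need one entry
# per observed case; the generator itself is just two numbers.
data = [(x, 3 * x + 2) for x in [random.uniform(-1, 1) for _ in range(200)]]

w, b, lr = 0.0, 0.0, 0.1
for _ in range(1000):
    x, y = random.choice(data)
    err = (w * x + b) - y
    w -= lr * err * x  # gradient of 0.5 * err**2 w.r.t. w
    b -= lr * err      # gradient of 0.5 * err**2 w.r.t. b

print(f"learned w={w:.2f}, b={b:.2f}")  # converges to ~3.00 and ~2.00
```

Nobody hand-coded the “3” or the “2”; they were summoned out of the data by the update rule, which is the regime where I suspect both CS and BIO intuitions start to wobble.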
V. In Closing
If this helps bhauth or shminux, or anyone who upvoted bhauth really hard, or anyone who downvoted shminux, that would make it worth the writing, based on what I’m directly aiming at, so long as any net harms to such people are smaller (which is likely, because this is a wall-o-text that few will read unless (hopefully) they like it, and are getting something from reading it). Such is my hope 😇