I think that one problem is that an amino acid (AA) sequence generally results in a single, predictable 3D structure (at stable pH, and barring any misfolding events), whereas there are a lot of AA sequences that would result in something resembling e.g. an axle of a certain size, and even more that do not. It seems to me that this problem is in a different class of computational complexity.
Shouldn’t that make it easier? The AI has many options to choose from when seeking to generate the gear, or axle, or whatever that it is tasked with generating.
Predicting sequence from structure just belongs to a different class of problems. Pierce & Winfree (2002) seem to have proven that it is NP-hard.
I’m not disputing that it’s in a different complexity class, I’m just saying that it seems easier in practice. For example, suppose you gave me a giant bag of legos and then a sequence of pieces to add with instructions for which previous piece to add them to, and where. I could predict what the end result would look like, but it would involve simulating the addition of each piece to the whole, and might be too computationally intensive for my mortal brain to handle. But if you said to me “Give me some lego instructions for building a wall with a cross-shaped hole in it” I could pretty easily do so. The fact that there are zillions of different correct answers to that question only makes it easier. As the paper you link says,

“It is important to keep in mind that the classification of protein design as an NP-hard optimization problem is a reflection of worst-case behavior. In practice, it is possible for an exponential-time algorithm to perform well or for an approximate stochastic method to prove capable of finding excellent solutions to NP-complete and NP-hard problems.”
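To make that concrete, here is a toy sketch (not protein design — number partitioning stands in for sequence design, and every name here is invented for illustration) of how a naive stochastic local search can do well in practice on a problem that is NP-hard in the worst case:

```python
import random

# Toy stand-in for design search: number partitioning is NP-hard in the worst
# case, yet a naive stochastic local search usually finds a near-perfect split.
def partition_difference(nums, sides):
    """Absolute difference between the two subset sums."""
    left = sum(n for n, s in zip(nums, sides) if s)
    return abs(left - (sum(nums) - left))

def stochastic_partition(nums, iters=20000, seed=0):
    """Hill climbing with random single-element flips."""
    rng = random.Random(seed)
    sides = [rng.random() < 0.5 for _ in nums]
    diff = partition_difference(nums, sides)
    for _ in range(iters):
        i = rng.randrange(len(nums))
        sides[i] = not sides[i]            # propose moving one number across
        new_diff = partition_difference(nums, sides)
        if new_diff <= diff:
            diff = new_diff                # keep non-worsening moves
        else:
            sides[i] = not sides[i]        # revert worsening moves
    return diff

nums = list(range(1, 21))                  # sum is 210, so a perfect split exists
print(stochastic_partition(nums))          # typically 0 or very small
```

Nothing about this guarantees an optimum — that’s exactly the worst-case/typical-case gap the paper is pointing at — but the many-correct-answers structure is what makes blind local search land somewhere good.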
I think the Lego example says more about the human brain’s limited working memory for tracking the current state without errors. It seems like it would be easier to write a computer program to do the first task than the second, and I think the first program would execute faster as well.
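A minimal sketch of that “first program” (a flat grid of cells standing in for real Lego pieces; the function and grid representation are invented for illustration) — simulating a given build sequence is one cheap update per piece, whereas the inverse task is a search over the many valid sequences:

```python
# Forward prediction: given build instructions, simulate placements one at a
# time and return the final shape. Linear in the number of pieces.
def simulate_build(instructions, width=8, height=8):
    """Each instruction is (x, y): place a brick at that grid cell.
    Returns the set of occupied cells -- the predicted final shape."""
    occupied = set()
    for x, y in instructions:
        if not (0 <= x < width and 0 <= y < height):
            raise ValueError(f"brick out of bounds: {(x, y)}")
        occupied.add((x, y))   # one cheap update per piece
    return occupied

# A wall with a cross-shaped hole: the *inverse* task has many valid answers,
# but simulating/checking any proposed answer is the easy direction.
wall = [(x, y) for x in range(8) for y in range(8)]
hole = {(3, y) for y in range(2, 6)} | {(x, 3) for x in range(1, 6)}
instructions = [cell for cell in wall if cell not in hole]
print(len(simulate_build(instructions)))   # prints 56 (64 cells minus the 8-cell hole)
```

The forward simulation above is a single deterministic pass; any program for the inverse direction has to generate or search over candidate instruction sequences, which is where the complexity-class difference shows up.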
Yeah, maybe. Or maybe not. Do you have arguments that artificial neural nets are more like computer programs in this regard than like human brains?
I’m not familiar enough with neural nets to have reliable intuitions about them. I was thinking in terms of more traditional computer programs. I wouldn’t be surprised if a neural net behaved more like a human brain in this regard.
OK, thanks. Well, we’ll find out in a few years!
I’m wondering a bit about the idea that there are many correct answers. That might be true of getting the shape, but is shape all that really matters here?
I’m not sure. I had in mind nanotech stuff—making little robots and factories using gears, walls, axles, etc. So I guess shape alone isn’t enough, you need to be able to hold that shape under stress. A wall that crumbles at the slightest touch shouldn’t count.