AlphaFold 2 takes an amino acid sequence as input and outputs a 3D structure representing the protein that sequence folds into. It would be cool if we could do it in reverse, i.e. the user inputs a 3D model (e.g. a gear, an axle, a wall with a hole of a certain shape in it...) and the system outputs an amino acid sequence that would form a protein with that structure.
I don’t have a good sense of whether this is a very difficult problem that we are nowhere near solving, or an obvious next step after AlphaFold2.
My current median is that it’s 4 years away, but I’m very uncertain about that.
From what I understand, the pipeline depends strongly on homology to existing proteins with experimentally determined structures: it uses substitution correlations to build an interaction graph, which it then evolves via learned rules.
I strongly suspect that, as such, it will not be very good at orphan proteins without significant homology, whether going from sequence to structure or the reverse.
The naive way can be done immediately: AlphaFold2 is differentiable end-to-end, so you could use gradient descent to optimise over the space of amino acid sequences (this is a discrete space, but you can treat it as continuous and add some penalties to your loss function to bias it towards the discrete points):
correct sequence = argmin_x || AlphaFold2(x) - target structure ||
For some notion of distance between structures.
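To make that concrete, here’s a minimal sketch of the relaxation-plus-gradient-descent idea in PyTorch. `predict_structure` is a toy stand-in for a differentiable folding model (it is not AlphaFold2’s real interface), the sequence length and entropy-penalty weight are arbitrary choices, and mean squared coordinate deviation is just one possible notion of distance between structures:

```python
import torch

NUM_AA = 20    # canonical amino acids
SEQ_LEN = 120  # length of the sequence we are designing (arbitrary)

# Toy stand-in for a differentiable folding model: a fixed random linear map
# from per-position amino-acid probabilities to 3D coordinates. In practice
# this would be the real end-to-end differentiable structure predictor.
_toy_weights = torch.randn(NUM_AA, 3)

def predict_structure(aa_probs: torch.Tensor) -> torch.Tensor:
    return aa_probs @ _toy_weights  # (SEQ_LEN, NUM_AA) -> (SEQ_LEN, 3)

def structure_distance(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # One possible notion of distance: mean squared coordinate deviation.
    return ((pred - target) ** 2).mean()

def design_sequence(target: torch.Tensor, steps: int = 1000, lr: float = 0.05,
                    sharpness_weight: float = 0.1) -> torch.Tensor:
    # Relax the discrete sequence to continuous logits; a softmax gives a
    # probability distribution over the 20 amino acids at each position.
    logits = torch.zeros(SEQ_LEN, NUM_AA, requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)

    for _ in range(steps):
        probs = torch.softmax(logits, dim=-1)
        loss = structure_distance(predict_structure(probs), target)
        # Entropy penalty: biases the relaxed solution towards one-hot
        # (i.e. discrete) sequences, as suggested above.
        entropy = -(probs * torch.log(probs + 1e-9)).sum(dim=-1).mean()
        loss = loss + sharpness_weight * entropy
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Discretise: take the most probable amino acid at each position.
    return torch.argmax(logits.detach(), dim=-1)

# Usage: design a sequence whose (toy) predicted structure matches a target.
target_structure = torch.randn(SEQ_LEN, 3)
designed = design_sequence(target_structure)
```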
But is AlphaFold2(x) a (mostly) convex function? More importantly, is the real structure(x) convex?
I can see a potential bias here, in that AlphaFold and inverse AlphaFold might work well for biomolecules because evolution is also a kind of local search, so if evolution can find a certain solution, AlphaFold will find it too. But both processes will be blind to a vast design space that might contain extremely useful designs.
Then again, we are biology, so maybe we only care about biomolecules and adjacent synthetic molecules anyway.
Yeah, I agree that there’s probably a bias towards biomolecules right now, but I think that’s relatively easily fixable: use the naive way to predict the amino acid sequence for a structure we want, actually make that protein, check its structure with crystallography, and retrain AlphaFold to predict the right structure. If we repeat this procedure with sequences that differ more and more from biomolecules, we’ll slowly remove that bias from AlphaFold.
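For what it’s worth, here’s a minimal sketch of that retraining loop, purely to pin down its structure. The callables (`invert_model`, `synthesize_and_solve`, `retrain`) are hypothetical placeholders for the steps above, including the wet-lab ones; nothing here corresponds to a real API:

```python
def debias_model(fold_model, target_structures, dataset,
                 invert_model, synthesize_and_solve, retrain):
    """Iteratively push a folding model away from its biomolecule prior.

    target_structures should be ordered from near-natural to increasingly
    exotic; all other arguments are caller-supplied callables standing in
    for the steps described above.
    """
    for target in target_structures:
        seq = invert_model(fold_model, target)      # e.g. the gradient-descent inversion above
        true_structure = synthesize_and_solve(seq)  # wet lab: express the protein, solve it by crystallography
        dataset.append((seq, true_structure))       # add the new (sequence, structure) pair
        fold_model = retrain(fold_model, dataset)   # retrain on the augmented dataset
    return fold_model
```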
By “bias” I didn’t mean biases in the learned model, I meant “the class of proteins whose structures can be predicted by ML algorithms at all is biased towards biomolecules”. What you’re suggesting is still within the local search paradigm, which might not be sufficient for the protein folding problem in general, any more than it is sufficient for 3-SAT in general. No sampling is dense enough if large swaths of the problem space are discontinuous.
I think that one problem is that an AA sequence generally results in a single, predictable 3D structure (at stable pH, and barring any misfolding events), whereas there are a lot of AA sequences that would result in something resembling e.g. an axle of a certain size, and even more that do not. It seems to me that this problem is in a different class of computational complexity.
Shouldn’t that make it easier? The AI has many options to choose from when seeking to generate the gear, or axle, or whatever that it is tasked with generating.
Predicting sequence from structure just belongs to a different class of problems. Pierce & Winfree (2002) seem to have proven that it is NP-hard.
I’m not disputing that it’s in a different complexity class, I’m just saying that it seems easier in practice. For example, suppose you gave me a giant bag of legos and then a sequence of pieces to add with instructions for which previous piece to add them to, and where. I could predict what the end result would look like, but it would involve simulating the addition of each piece to the whole, and might be too computationally intensive for my mortal brain to handle. But if you said to me “Give me some lego instructions for building a wall with a cross-shaped hole in it” I could pretty easily do so. The fact that there are zillions of different correct answers to that question only makes it easier. As the paper you link says,
I think the Lego example says more about the human brain’s limited working memory to keep track of the current state without errors. It seems like it would be easier to write a computer program to do the first task than the second, and I think the first program would execute faster as well.
Yeah, maybe. Or maybe not. Do you have arguments that artificial neural nets are more like computer programs in this regard than like human brains?
I’m not familiar enough with neural nets to have reliable intuitions about them. I was thinking in terms of more traditional computer programs. I wouldn’t be surprised if a neural net behaved more like a human brain in this regard.
OK, thanks. Well, we’ll find out in a few years!
I’m wondering a bit about the idea that there are many correct answers. That might be true for getting the shape, but is shape all that really matters here?
I’m not sure. I had in mind nanotech stuff—making little robots and factories using gears, walls, axles, etc. So I guess shape alone isn’t enough, you need to be able to hold that shape under stress. A wall that crumbles at the slightest touch shouldn’t count.