Here’s a much better article criticizing evo-psych. I think it goes a little too far in some places, and I’ve posted it before, but those looking for something a bit more structured and well argued would do well to start here.
zslastman
Yeah, it doesn’t say much. For one thing, I’d say just about all genes turn out to be differentially expressed if you look hard enough. Regardless, that doesn’t tell us how many of them really matter with respect to the things we care about, how many causal factors are at work, or how difficult it will be to fix. It doesn’t rule out a single silver-bullet aging cure (though other things probably do).
Yes, that’s the case. To get enough data we’d probably need lots of in vitro experiments. Remember that data is not the same as information: even really big sample sizes wouldn’t be enough to resolve the combinatoric explosion. What I meant in that comment up there (I posted it before it was finished, I think) is that there are ~23k genes in the genome, so even under the absurdly simple assumption that there’s only one possible mutation per gene, you have on the order of half a billion possible pairwise combinations of gene breakages, which you will never, ever be able to get a big enough sample size to examine blindly.
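A back-of-envelope check on that number (just a sketch, using the rough ~23k gene count from above):

```python
from math import comb

genes = 23_000
ordered_pairs = genes ** 2        # counting (A, B) and (B, A) separately
unordered_pairs = comb(genes, 2)  # distinct pairs of broken genes
print(ordered_pairs)    # 529,000,000
print(unordered_pairs)  # 264,488,500
```

Either way you count it, that’s hundreds of millions of pairs before you even consider triples.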
Ha, in theory, but it looks like the guys at TeXmacs are already selling the product for free, so no dice...
Yes, that would also be great, but a) I can’t afford such a tablet, and b) I strongly suspect that the OCR would be inaccurate enough that I’d end up wishing for a keyboard anyway. Hell, accurate voice recognition would be better, but I’m still waiting for that to happen...
Been using it for an hour now, and yes, it’s crashed on me once, but no more than half the other programs I use. I’m already seeing the benefits: I spent half an hour doing something, realised there was a mistake at the start, and could then just find/replace instead of scrunching the paper up into a ball and cursing Pierre-Simon Laplace. Also I don’t have to deal with the aesthetic trauma of viewing my own handwriting. Outstanding.
YES. Thank you so much. TeXmacs seems to be exactly what I wanted.
pen and paper is far more instant than any method I can imagine of poking mathematics in through a keyboard.
Yeah… I think I just have to bite this bullet. If you do math professionally and the people you know work on pen and paper, then that’s the answer.
It’s just… I feel like I can imagine a system that would be better than pen and paper. There’s so much tedious repetition of symbols when I do algebra on paper, and inevitably, while simplifying some big integral, I write something wrong, have to scratch it out, and the whole thing becomes a confusing mess. Writing my verbal thoughts down with a keyboard is just as quick and intuitive as pen and paper. There must be a better way...
Yeah, I can imagine doing that all right. I wouldn’t actually mind writing in LaTeX even; the problem is the lag. Building a LaTeX document after each change takes time. If the LaTeX were being built in a window next to it in real time (a one-second lag would probably be fine), there’d be no problem. I’m not looking to publish the math, I just want a thought-aid.
Why isn’t there a good way of doing symbolic math on a computer?
I want to brush up on my probability theory. I hate using pen and paper: I lose them, they get damaged, and my handwriting is slow and messy.
In my mind I can envisage a simple symbolic math editor with keyboard shortcuts for common symbols, one that would allow you to edit nice, neat LaTeX-style equations as easily as I can edit text. Markdown would be acceptable as long as I can see the equation in its pretty form next to it. This doesn’t seem to exist. Python-based symbolic math systems, like SageMath, are hopelessly clunky. Mathematica, although I can’t afford it, doesn’t seem to be what I want either. I want to be able to write math fast, to aid my thinking while proving theorems and doing problems from a textbook, not have the computer do the thinking for me. The LaTeX equation editors I’ve seen are all similarly unwieldy; waiting 10 seconds for it to build the PDF is totally disruptive to my thought process.
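For what it’s worth, the closest I’ve seen to instant feedback in a terminal is the pretty-printer in SymPy (another Python-based symbolic math library); a minimal sketch of what that looks like, though it doesn’t solve the editing-speed problem:

```python
import sympy as sp

x = sp.symbols('x')
expr = sp.Integral(sp.exp(-x**2), (x, -sp.oo, sp.oo))
sp.pprint(expr)         # renders a Unicode integral sign immediately, no build step
sp.pprint(expr.doit())  # evaluates the Gaussian integral to sqrt(pi)
```

The rendering is instant, but you’re still typing expressions as Python code rather than editing the pretty form directly.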
Why isn’t this a solved problem? Is it just that nobody does this kind of thing on a computer? Do I have to overcome my hatred of dead tree media and buy a pencil sharpener?
I’m always puzzled by how many LWers seem to casually dismiss the reality of mortality with appeals to singularities, cryonics, etc. I’m sure immortality is coming, but I don’t see much chance of living to see it. Seems prudent to come to terms with that.
See my answer on the other thread :) Difficult to estimate. You need a new method of transgenesis: 5–20 years?
Hmmmm. I’m shamefully ignorant about prices, but I would estimate such an effort would run to tens of millions if you wanted it done quickly (and it will still take a while). As far as I’m aware we haven’t developed methods for transgenesis in tsetse flies, having only gotten the genome sequenced in 2014 (priorities, people?!), and setting transgenesis up in a new organism with an unusual life cycle can be surprisingly difficult. The link below describes techniques for manipulating gut microbes in the flies, which I don’t think would suffice.
In Drosophila you can’t go from cell culture to an embryo easily like in mammals; you have to inject your construct into embryos, breed from those embryos, and hope some of your vector got into the germ line. In tsetse flies, I am now aware, the mother keeps the embryo until it’s quite developed, meaning the techniques used in Drosophila wouldn’t work, and we certainly don’t have any tsetse cell lines (which I doubt would be of use anyway). So you’d be looking at developing a novel means of transgenesis (a viral vector targeting the germ line, maybe?), which is a task that, while no doubt solvable, inevitably carries big uncertainties.
So yes, tens of millions, give or take an order of magnitude, plus years and years of work. Well worth doing though. In my opinion the potential gains far outweigh the risks.
P.S. The link to ‘relevant risks’ you posted is broken, I’d be interested in seeing it.
I’m totally in favor of chlorinating that pool, but just bear in mind that the ‘registry of standard parts’, and biological tools in general, like CRISPR, are nowhere near as easy to use and reliable as it says on the packaging. I’m always amused by the contrast between articles about CRISPR, which make it sound like you can just jam a thumbdrive into a mouse, and the people I know trying to get CRISPR to work on a new cell line or organism, who are up all night for months in a row muttering to themselves in the cell culture room. You need a lot of time and human capital for these things.
Another genomics PhD here. It’s a complex topic. We know from population genetics studies in model organisms that combinatorial effects (epistasis, in genetics lingo) matter. Simple linear models nonetheless perform well in the human population: against a reasonably constant genetic background, low allele frequencies mean the combinatorial effects are well captured by linear ones.
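A toy simulation of that last point (effect sizes and frequencies made up for illustration): the phenotype below has a genuine pairwise interaction, but at 1% allele frequency almost nobody carries both alleles, so a linear-only model loses almost nothing:

```python
import numpy as np

rng = np.random.default_rng(0)
n, freq = 200_000, 0.01                       # individuals, allele frequency
a = (rng.random(n) < freq).astype(float)      # carriers of allele A
b = (rng.random(n) < freq).astype(float)      # carriers of allele B
y = a + b + 2.0 * a * b + rng.normal(0.0, 1.0, n)  # phenotype with epistasis

def r2(X, y):
    """Fraction of variance explained by a least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

ones = np.ones(n)
linear = r2(np.column_stack([ones, a, b]), y)          # additive terms only
full = r2(np.column_stack([ones, a, b, a * b]), y)     # plus the interaction
print(full - linear)  # tiny: only ~freq**2 of people carry both alleles
```

The R² gap between the two fits is on the order of freq², i.e. negligible at low allele frequencies, even though the interaction effect is real and large per carrier.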
The problem is that even if you only care about pairwise combinations, there are far too many of them given a uniform prior. Even if we sequenced everyone on Earth we wouldn’t have anywhere near enough information: sequencing additional individuals has diminishing returns, because there’s only so much genetic variation in the human population (against ~23000^2 possible pairwise combinations).
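To see why whole-planet sequencing still isn’t enough: the expected number of people carrying both of two independent rare variants falls off as the square of the allele frequency (a rough sketch, assuming independence and ~8 billion genomes):

```python
population = 8e9
for freq in (1e-2, 1e-3, 1e-4, 1e-5):
    double_carriers = population * freq * freq
    print(f"allele freq {freq:g}: ~{double_carriers:,.1f} expected double-carriers")
```

For the rarest variants you’d expect less than one double-carrier on the whole planet, so most pairwise combinations are simply never observed at all.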
What we need are good priors over combinations of mutations. To do that we’ll need detailed info about which genes function together to produce which phenotypes. Such models exist already and are seeing moderate success, but we need new ideas and more data than any one startup could provide. Which is exactly what molecular biologists are working on.
Short answer—no, this is a hard, ongoing problem.
I think you’re looking for the concept of ‘mutational variance’. This is the amount of variation in a trait that is generated by random mutation. The variance in a trait is going to be determined by the balance of mutational variance and selective effects. Traits with lots of genes affecting them have a large ‘mutational target size’. So, for instance, intellectual disability has a large mutational target size because there are so many different ways to break a brain, while some kinds of haemophilia have high mutational variance because the particular DNA sequences involved are unusually mutation-prone.
In general mutational variance is very difficult to measure outside of single-celled organisms, although good approximations have been done in e.g. fruit flies. The problem is that it’s very difficult to stop evolution from exerting its filtering effect on your mutations before you can measure them.
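The balance between mutational input and selective removal has a classic textbook form, mutation–selection balance; a sketch with illustrative (assumed) numbers:

```python
mu = 1e-5  # per-generation mutation rate toward the deleterious allele (assumed)
s = 0.1    # selection coefficient against affected individuals (assumed)

q_dominant = mu / s            # selection sees heterozygotes: q ~ mu/s
q_recessive = (mu / s) ** 0.5  # selection only sees homozygotes: q ~ sqrt(mu/s)
print(q_dominant, q_recessive)
```

The recessive case equilibrates at a much higher allele frequency, which is one reason recessive disease alleles can persist in a population despite strong selection against the affected phenotype.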
So in the absence of direct measures, it’s difficult to guess how many genes might be involved in something like homosexuality, or what the mutational variance could be. On the surface, we might imagine it’s just a simple trait with few genes affecting it; such is the case in fruit flies. But actually, we just don’t know enough about how evolution has created the human mind. Without knowing how genes produce a brain, we don’t know enough to say that homosexuality isn’t just a particularly common “failure mode” of the brain, like autism and ID. Maybe something about the way the human brain has evolved makes it turn out gay a lot.
Myself, I don’t really buy the ‘gayness is selected for’ explanations. My own opinion is that exclusive homosexuality might be due more to our present society than anything else, and its need to cordon off homosexual behaviour from ‘normal’, straight behaviour. If that’s the case, most of the mystery disappears.
WGS is going to get cheaper and cheaper as time goes on, and presumably in the future we’ll have developed a process for analysing the results properly. In the intervening time, there isn’t much to be gained from it. SNP genotyping gives you most of the info about common variants, because the things it doesn’t catch (deletions, insertions, etc.) will generally have some SNP in linkage with them. Rare variants are what you miss, and right now we don’t really know what to do with them.
In general I wouldn’t overestimate how much genotyping will tell you. Your family history is likely to be more informative.
I should be clearer on that score. It’s not that I see a high likelihood of a singularity happening in the next 50 years, with Skynet waltzing in and solving everything. Rather, I see new methods in biology arriving that render what I’m doing irrelevant and my training not very useful. An example: lots of people in the 90s spent their entire PhDs sequencing single genes by hand. I feel like what I’m doing is the equivalent.
Cognitive genomics is definitely something I’ll look into, thanks.
I’m late to the party here, but I’ve been trying to get less confused about quantum physics by reading Sean Carroll’s math-heavy pop-sci book Quanta and Fields. I wanted to ask you about the statement that electrons “cannot be only waves”. In his telling, smooth fields are the more complete description of reality: particle-like behavior emerges from the fact that the fields are harmonic oscillators and thus have discrete modes they can vibrate at, which is what gives us quantization. So he would say it’s only waves; particles are just a very useful abstraction.
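For reference, the quantization Carroll is invoking is just the harmonic-oscillator spectrum applied to each mode of the field (the standard textbook result, stated here for concreteness):

```latex
E_n = \hbar\omega_k\left(n + \tfrac{1}{2}\right), \qquad
\omega_k = \sqrt{c^2 k^2 + m^2 c^4/\hbar^2}, \qquad n = 0, 1, 2, \dots
```

The integer n for the mode of wavevector k is what gets interpreted as the number of particles occupying that mode.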
Do you guys meaningfully disagree on something (I know he’s a many-worlds guy), or is it just semantics about the word ‘wave’ (I realize e.g. a sound wave is meaningfully different from a wave in the electron field)?