Does the post ever mention the target of growing the population? I only recall mentions of replacement fertility.
The next question would be the success rate. Even the successful somatic gene editing I’ve read about so far modified only a small fraction of cells. Is it realistic to modify a double-digit percentage of neurons in the brain?
One thought: one could probably do mouse studies where, instead of maximizing a polygenic score, non-consensus variants are edited to reduce mutational load. If that had positive effects, it would be a huge result.
Somatic gene editing has been in the cards for a while now, but I assumed that so far off-target effects would make it pretty risky, especially for a large number of variants.
What is the current situation regarding off-target effects for large numbers of edits?
If you scale width more than depth, and data more than parameters, you can probably go some way before latency becomes a real problem.
It would also make sense to spend more time (i.e. use larger models) on harder tasks. The user probably doesn’t need code or mathematical solutions instantly, as long as it’s still 100x faster than a human.
In robotics you probably need something hierarchical, where low-level movements are controlled by small nets.
I suggest that you add a short explanation of what the local learning coefficient is to the TL;DR. IMHO the post goes on too long before the reader finds out what it is about.
SDXL gives me something like this. But I don’t know, not what you had in mind?
I used this Hugging Face space: https://huggingface.co/spaces/google/sdxl
And roughly this prompt: “An elven face made out of green metal, dungeons and dragons, fantasy, awesome lighting”
I think the hyperfeminine traits are due to finetuning; you should get a lot less of that with the Stable Diffusion base model.
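If you want to try a base model directly rather than through a space, a minimal diffusers sketch would look something like this (my own sketch, untested here; assumes the stabilityai/stable-diffusion-xl-base-1.0 weights and a CUDA GPU):

```python
# Minimal sketch: running the SDXL base model with diffusers instead of a
# finetuned checkpoint. Assumes a CUDA GPU with enough VRAM.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

prompt = ("An elven face made out of green metal, dungeons and dragons, "
          "fantasy, awesome lighting")
pipe(prompt=prompt).images[0].save("elf.png")
```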
Eight eyes: yeah, counting is hard, but it’s also hard to put eight eyes into a face built for two. If I were trying to get that, I would probably try ControlNet, where you can add a ton of eyes to a line drawing and use that as a starting point for the image creation. (Maybe create an image without the eyes first, apply a canny edge detector or similar, multiply the eyes, and then use the canny edge ControlNet; a rough sketch of that pipeline follows.)
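Something like this (again my own sketch, untested; assumes SD 1.5 plus the lllyasviel/sd-controlnet-canny checkpoint, and the file names are placeholders):

```python
# Sketch of the canny ControlNet workflow described above: generate a face
# without the extra eyes, extract its edges, manually multiply the eyes in
# the edge map, then generate again conditioned on the edited edges.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Step 1: canny edge map of a previously generated face image.
face = np.array(Image.open("face_without_extra_eyes.png").convert("L"))
edges = cv2.Canny(face, 100, 200)
# ... here you would paint the additional eyes into `edges` ...
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# Step 2: generate again, conditioned on the edited edge map.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

pipe("an eight-eyed elven face, fantasy", image=control_image).images[0].save(
    "eight_eyed_elf.png"
)
```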
Your Roggenmuhme should also be within the realm of the possible, I think, but I am not going to dwell on that, because I want to sleep at night.
For correct moon phases and Deinonychus’s wrist position you’ll have to wait for AGI.
DALL-E 3
My current assumption is that extracting “intelligence” from images, and even more so from videos, is much less efficient than extracting it from text. Text is just extremely information dense.
So I wouldn’t expect Gemini to initially feel more intelligent than GPT-4, even if it used 5 times the compute.
I mostly wonder about qualitative differences, maybe induced by algorithmic improvements like actually using RL or search components for a kind of self-supervised finetuning. That’s one area where I can easily see DeepMind outcompeting OpenAI.
For what it’s worth, I read it when it came out and loved it. I lent it to a friend who never gave it back, which is probably another point in favour. I also enjoyed the follow-up “On the Origin of Good Moves”.
Great article!
Maybe homologous recombination should be mentioned as the reason why “the newborn cell receives an assemblage of random pieces of each parents’ genome”. Just mixing chromosomes would not be enough to stop Muller’s ratchet: each chromosome would still accumulate mutations irreversibly, while recombination within chromosomes lets selection reassemble less-loaded combinations.
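To illustrate the point, here is a toy simulation I sketched (my own illustration, not from the article; the parameters are arbitrary):

```python
# Toy Wright-Fisher simulation: deleterious mutations accumulate; offspring
# either clone one parent, inherit whole chromosomes from two parents, or
# recombine freely locus by locus. Without within-chromosome recombination
# the minimum mutation load can only ratchet upward.
import numpy as np

rng = np.random.default_rng(0)
N, L, CHROMS = 200, 100, 5       # population size, loci, chromosomes
U, s, GENS = 0.3, 0.05, 300      # genomic mutation rate, selection, generations

def next_generation(pop, mode):
    fitness = (1 - s) ** pop.sum(axis=1)   # multiplicative fitness
    p = fitness / fitness.sum()
    moms = pop[rng.choice(N, size=N, p=p)]
    if mode == "clonal":
        children = moms
    else:
        dads = pop[rng.choice(N, size=N, p=p)]
        if mode == "chromosomes":          # one coin flip per chromosome
            block = rng.integers(0, 2, size=(N, CHROMS)).astype(bool)
            mask = np.repeat(block, L // CHROMS, axis=1)
        else:                              # free recombination per locus
            mask = rng.integers(0, 2, size=(N, L)).astype(bool)
        children = np.where(mask, moms, dads)
    return children | (rng.random((N, L)) < U / L)  # fresh mutations

pops = {m: np.zeros((N, L), dtype=bool) for m in ("clonal", "chromosomes", "free")}
for _ in range(GENS):
    pops = {m: next_generation(pop, m) for m, pop in pops.items()}
for m, pop in pops.items():
    print(f"minimum mutation load, {m}: {pop.sum(axis=1).min()}")
```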
I did train a transformer to predict moves from board positions (not strictly FEN, because with FEN the positional encodings don’t point to the same squares consistently). Maybe I’ll get around to letting it compete against the different GPTs.
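Concretely, the issue is that FEN run-length-encodes empty squares (“8” means eight empties), so the same board square lands at different token positions in different positions. A fixed 64-character serialization avoids that; a sketch using python-chess (my actual encoding differed in details):

```python
# Sketch of a fixed-length board serialization. Every character position
# always refers to the same square, so the transformer's positional
# encodings line up with board squares consistently, unlike FEN.
import chess

def board_to_fixed_string(board: chess.Board) -> str:
    chars = []
    for square in chess.SQUARES:  # a1, b1, ..., h8: always 64 squares
        piece = board.piece_at(square)
        chars.append(piece.symbol() if piece else ".")
    return "".join(chars)

board = chess.Board()
board.push_san("e4")
print(board_to_fixed_string(board))  # always exactly 64 characters
```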
The game notation is pretty close to a board representation already. For most pieces you just go to their last move to see which square they are standing on. I assume that is very readable for an LLM, because it can keep all tokens in mind simultaneously.
In my games with ChatGPT and GPT-4 (without the magic prompt), they both seemed to lose track of the position after the opening and completely fell apart, which might be because by then many pieces have moved several times (so there are competing moves indicating a square) and many pieces have vanished from the board altogether.
The ChessGPT paper does something like that: https://arxiv.org/abs/2306.09200
We collect chess game data from a one-month dump of the Lichess dataset, deliberately distinct from the month used in our own Lichess dataset. We design several model-based tasks, including converting PGN to FEN, transferring UCI to FEN, and predicting legal moves, resulting in 1.9M data samples.
You could still offer money up front. Getting $2000 if the stars align is still much more likely than getting $150,000.
I thought about giving the long flu example, but flu is much less contagious than covid and does not infect everyone yearly. That holds even more for SARS or MERS.
People aren’t betting with you because the utility of money is not linear.
If you own $150,000, it is very unlikely that $1000 makes any difference to your life whatsoever. But losing $150,000 might ruin it.
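To make that concrete, here is a toy calculation (my own illustration, with made-up numbers; assumes log utility and a hypothetical $200,000 of wealth, since with exactly $150,000 losing it all would be infinitely bad under log utility):

```python
# Toy illustration of nonlinear utility: under log utility, a $1000 gain
# barely moves the needle while a $150,000 loss is enormous.
from math import log

wealth = 200_000  # hypothetical starting wealth

gain = log(wealth + 1_000) - log(wealth)     # utility of winning $1000
loss = log(wealth) - log(wealth - 150_000)   # utility cost of losing $150,000

print(f"utility gained by winning $1,000: {gain:.4f}")  # ~0.0050
print(f"utility lost by losing $150,000:  {loss:.4f}")  # ~1.3863
print(f"the loss weighs {loss / gain:.0f}x more")       # ~278x
```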
Here is a way around that problem: You both wager $1000 (or whatever you like). When the bet is resolved you throw the dice (or rather use a random number generator).
If you win you throw the 99.33% probability “you get paid”-dice.
If your opponent wins he throws the 0.66% probability “he gets paid”-dice.
(If the [0,100] random number is <0.66, your opponent gets $1000 if he wins. If the random number is between 0.66 and 100, you will get $1000 if you win. In the other combinations you both keep your money.)
So instead of wagering a larger amount of money your opponent wagers a larger probability of having to pay in the event of losing the bet.
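A quick sanity-check sketch of the scheme (using the $1000 stake and the 0.66/99.33 split from above):

```python
# Sanity check of the randomized wager: both sides stake $1000, and a
# single uniform draw on [0, 100] decides whose potential win would
# actually be paid out, per the scheme above.
import random

STAKE = 1000
CUTOFF = 0.66  # draw < 0.66: opponent's win pays; draw >= 0.66: yours pays

def settle(you_won_the_bet: bool) -> int:
    """Your payoff in dollars once the underlying bet is resolved."""
    draw = random.uniform(0, 100)
    if you_won_the_bet and draw >= CUTOFF:
        return +STAKE  # you get paid (roughly 99.3% of the time)
    if not you_won_the_bet and draw < CUTOFF:
        return -STAKE  # you pay (0.66% of the time)
    return 0           # otherwise both keep their money

# The effective odds ratio matches a ~150:1 money wager, with losses
# capped at the stake for both sides:
print((100 - CUTOFF) / CUTOFF)  # ~150.5
```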
PS: Yes, you can also just wager small amounts of money but that’s kinda boring.
Isn’t there also evidence that long covid is partly psychosomatic? (random paper that lists some studies)
The number of people prone to psychosomatic symptoms is probably not going to go up, so your growth rate is probably an overestimate.
There are also other risk factors involved; the same argument applies to those.
In the extreme scenario, the people prone to developing long covid already have it, and very few other people will get it.
The assumptions in your simulation also seem consistent with that possibility:
Maybe the reinfection long covid probability of 5% is mostly the 60% of the 10% … ;-)

For what it’s worth, I know zero people with long covid, and I have also never heard anybody mention an acquaintance with long covid.
It is the change that is bad, not necessarily the future total size of the population.
Edit: Maybe I should unpack that a bit. I also think more people is better, because life is good and innovation is proportional to the number of innovators, but apart from that:
A decreasing population leads to economic stagnation and innovation slowdown. Both can be observed in Japan. South Korea, China, and Taiwan are on track to tank their populations much faster than Japan ever did. How’s that going to work out for them?
In a permanent recession, will investment dry up, killing whatever dynamism there might still be?
If the age pyramid is inverted, old people have too much political power for the country to ever reverse course and support the young towards family formation.
If you allow massive immigration to fix the labor shortage, you also invite ethnic strife down the line. Almost all violent conflicts involve two or more ethnic groups within one country.
Will young people emigrate if they are burdened with caring for too many old people in a shrinking economy?
My view is that the progress we have observed over the last few centuries is more fragile than it seems, and it is certainly possible that we will kill it almost completely if we continue to remove or weaken many of its preconditions.