Nit: 0.36 bits/letter seems way off. I suspect you only counted the contribution of the letter E from the table above (-p log2 p at E’s frequency is 0.355).
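For concreteness, a sketch of the calculation in question: the -p log2 p term is one letter's contribution, and the bits/letter figure should sum that term over the whole alphabet (the distributions below are made up for illustration, not taken from the table):

```python
import math

def contribution(p):
    """One symbol's contribution to the entropy, in bits: -p * log2(p)."""
    return -p * math.log2(p)

def bits_per_letter(freqs):
    """Shannon entropy of the whole distribution: sum over every letter."""
    return sum(contribution(p) for p in freqs)

# Sanity check: a fair coin's two outcomes each contribute 0.5 bits,
# so the whole distribution carries 1 bit per symbol.
```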
darius
Agreed. I had [this recent paper](https://ieeexplore.ieee.org/abstract/document/9325353) in mind when I raised the question.
The Landauer limit constrains irreversible computing, not computing in general.
Here’s the argument I’d give for this kind of bottleneck. I haven’t studied evolutionary genetics; maybe I’m thinking about it all wrong.
In the steady state, an average individual has n children in their life, and just one of those n makes it to the next generation. (Crediting a child 1⁄2 to each parent.) This gives log2(n) bits of error-correcting signal to prune deleterious mutations. If the genome length times the functional bits per base pair times the mutation rate is greater than that log2(n), then you’re losing functionality with every generation.
One way for a beneficial new mutation to get out of this bind is by reducing the mutation rate. Another is refactoring the same functionality into fewer bits, freeing up bits for something new. But generically a fitness advantage doesn’t seem to affect the argument that the signal from purifying selection gets shared by the whole genome.
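A back-of-the-envelope version of the argument (all numbers below are made up for illustration, not from the argument above):

```python
import math

def selection_bits(n_children):
    """Picking the 1 survivor out of n children supplies about log2(n) bits."""
    return math.log2(n_children)

def mutation_load_bits(genome_bp, functional_bits_per_bp, mutations_per_bp):
    """Functional information degraded per generation, in bits."""
    return genome_bp * functional_bits_per_bp * mutations_per_bp

# Illustrative: 3e9 bp genome, 0.1 functional bits per bp,
# 1e-8 mutations per bp per generation, 4 children per individual.
load = mutation_load_bits(3e9, 0.1, 1e-8)
signal = selection_bits(4)
losing_ground = load > signal  # True here: functionality erodes each generation
```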
An allegedly effective manual spaced-repetition system: flashcards in a shoebox with dividers. You take cards from the divider at one end and redistribute them according to how well you recall each one. I haven’t tried this, but maybe I will, since notecards have some advantages over a computer at a desk or a phone.
(It turns out I was trying to remember the Leitner system, which is slightly different.)
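The box-promotion logic is simple enough to sketch (the box count and the demote-to-the-first-box rule below are my arbitrary choices, not details of either system mentioned above):

```python
import collections

class Leitner:
    def __init__(self, cards, n_boxes=3):
        # All cards start in box 0; higher boxes get reviewed less often.
        self.boxes = [collections.deque() for _ in range(n_boxes)]
        self.boxes[0].extend(cards)

    def review(self, box, recalled):
        """Draw the front card of `box`; promote it on recall, else demote to box 0."""
        card = self.boxes[box].popleft()
        dest = min(box + 1, len(self.boxes) - 1) if recalled else 0
        self.boxes[dest].append(card)
        return card
```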
Radical Abundance is worth reading. It says that current work is going on under other names like biomolecular engineering, that the biggest holdup is a lack of systems engineering focused on achieving strategic capabilities (like better molecular machines for molecular manufacturing), and that we ought to be preparing for those developments. It’s in a much less exciting style than his first book.
Small correction: Law’s Order is by David Friedman, the middle generation. It’s an excellent book.
I had a similar reaction to the sequences. Some books that influenced me the most as a teen in the 80s: the Feynman Lectures and Drexler’s Engines of Creation. Feynman modeled scientific rationality, thinking for yourself, clarity about what you don’t know or aren’t explaining, being willing to tackle problems, … it resists a summary. Drexler had many of the same virtues, plus thinking carefully and boldly about future technology and what we might need to do in advance to steer to an acceptable outcome. (I guess it’s worth adding that seemingly a lot of people misread it as gung-ho promotion of the wonders of Tomorrowland that we could all look forward to by now, more like Kurzweil. For one sad consequence, Drexler seems to have become a much more guarded writer.)
Hofstadter influenced me too, and Egan and Szabo.
I’m not a physicist, but if I wanted to fuse metallic hydrogen I’d think about a really direct approach: shooting two deuterium/tritium bullets at each other at 1.5% of c (for a Coulomb barrier of 0.1 MeV, according to Wikipedia). The most questionable part I can see is that a nucleus from one bullet could be expected to pass thousands of nuclei from the other before it hit one, and I’d worry about losing too much energy to bremsstrahlung in those encounters.
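A quick sanity check on the 1.5%-of-c figure, nonrelativistic and per nucleon (the 938.3 MeV nucleon rest energy is the standard value, not a number from the comment):

```python
# KE = (1/2) m v^2 = (1/2) (m c^2) beta^2, with m c^2 ~ 938.3 MeV per nucleon.
nucleon_rest_energy_mev = 938.3
beta = 0.015  # 1.5% of c
ke_mev = 0.5 * nucleon_rest_energy_mev * beta**2
# ke_mev comes out near 0.106 MeV, in line with the ~0.1 MeV Coulomb barrier.
```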
s/From their/From there
Robin Hanson proposed much the same over 20 years ago in “Buy Health, Not Health Care”.
I also reviewed some of his prototype code for a combinatorial prediction market around 10 years ago. I agree that these are promising ideas and I liked this post a lot.
IIRC Doug Orleans once made an ifMUD bot for a version of Zendo where a rule was a regular expression. This would give the user a way to express their guess of the rule instead of you having to test them on examples (regex equality is decidable).
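(Real regex equivalence is decided by building and comparing DFAs; the brute-force stand-in below just compares matches up to a bounded length, which is only an approximation of that decision procedure:)

```python
import itertools
import re

def agree_up_to(pattern_a, pattern_b, alphabet="ab", max_len=6):
    """Check that two regexes match exactly the same strings up to max_len."""
    ra, rb = re.compile(pattern_a), re.compile(pattern_b)
    for n in range(max_len + 1):
        for chars in itertools.product(alphabet, repeat=n):
            s = "".join(chars)
            if bool(ra.fullmatch(s)) != bool(rb.fullmatch(s)):
                return False
    return True
```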
Also I made a version over s-expressions and Lisp predicates—it was single-player and never released. It would time out long evaluations and treat them as failures. I wonder if I can dig up the code...
Here’s what’s helped for me. I had strong headaches that would persist for weeks, with some auras, which my doctor called migraines. (They don’t seem to be as bad as what people usually mean by the word.) A flaxseed oil supplement keeps them away. When I don’t take enough, they come back; it needs to be at least 15g/day or so (many times more than the 2-3 gelcaps/day that supplement bottles direct you to take). I’ve taken fish oil occasionally instead.
I found this by (non-blinded) experimenting with different allegedly anti-inflammatory supplements. I’m not a doctor, etc.
Computing: The Pattern On The Stone by Daniel Hillis. It’s shorter and seemingly more focused on principles than the Petzold book Code, which I can’t compare further because I stopped reading early (low information density).
> it’s also notable that he successfully predicted the rise of the internet
Quibble: there was plenty of internet in 1986. He predicted a global hypertext publishing network, and its scale of impact, and starting when (mid-90s). (He didn’t give any such timeframe for nanotechnology, I guess it’s worth mentioning.)
Radical Abundance, which came out this past month.
Added: The most relevant things for this post from the book (which I’ve only skimmed):
There’s been lots of progress in molecular-scale engineering and science that isn’t called nanotechnology. This progress has been pretty much along the lines Drexler sketched in his 1981 paper and in the how-can-we-get-there sections of Nanosystems, though. This matches what I saw sitting in on Caltech courses in biomolecular engineering last year. Drexler believes the biggest remaining holdup on the engineering work is how it’s organized: when diverse scientists study nature their work adds up because nature is a whole, but when they work on bits and pieces of technology infrastructure in the same way, their work can’t be expected to coalesce on its own into useful systems.
He gives his latest refinement of the arguments at a lay level.
Yes—in my version of this you do get passed your own source code as a convenience.
If you’d rather run with a very small and well-defined Scheme dialect meant just for this problem, see my reply to Eliezer proposing this kind of tournament. I made up a restricted language since Racket’s zillion features would get in the way of interesting source-code analyses. Maybe they’ll make the game more interesting in other ways?
There’s a Javascript library by Andrew Plotkin for this sort of thing that handles ‘a/an’ and capitalization and leaves your code less repetitive, etc.
That writes in one session can affect another violates my expectations, at least, of where the boundaries would be set.