bhauth
I spoke with one of the inventors of bridge recombinases at a dinner a few months ago and (at least according to him), they work in human cells.
Hmm. I don’t believe that, not without a bit more evidence.
There are already several perfectly good languages for schemas, such as CUE, Dhall, and XSD
This won’t find deception in mesaoptimizers, right?
make fewer points, selected carefully to be bulletproof, understandable to non-experts, and important to the overall thesis
That conflicts with eg:
If you replied with this, I would have said something like “then what’s wrong with the designs for diamond mechanosynthesis tooltips, which don’t resemble enzymes?”
Anyway, I already answered that in 9. diamond.
Yes, this is part of why I didn’t post AI stuff in the past, and instead just tried to connect with people privately. I might not have accomplished much, but at least I didn’t help OpenAI happen or shift the public perception of AI safety towards “fedora-wearing overweight neckbeards”.
I wrote a related post.
betting they would benefit from a TMSC blockade?
Yes, if you meant TSMC.
But the bet would have tied up your capital for a year.
...so? More importantly, Intel is down 50% from early 2024.
debating buying NVDA in 2019
new chinese stealth aircraft
PCR retrospective
Your document says:
AI Controllability Rules
...
AI Must Not Self-Manage:
Must Not Modify AI Rules: AI must not modify AI Rules. If inadequacies are identified, AI can suggest changes to Legislators but the final modification must be executed by them.
Must Not Modify Its Own Program Logic: AI must not modify its own program logic (self-iteration). It may provide suggestions for improvement, but final changes must be made by its Developers.
Must Not Modify Its Own Goals: AI must not modify its own goals. If inadequacies are identified, AI can suggest changes to its Users but the final modification must be executed by them.
I agree that, if those rules are followed, AI alignment is feasible in principle. The problem is that some people won’t follow those rules if doing so carries a large penalty to AI capabilities, and I think it will.
“Mirror life” is beyond the scope of this post, and the concerns about it are very different from the concerns about “grey goo”: mirror life wouldn’t have more capabilities or efficiency, it might just be harder for immune systems to deal with. Personally, I’m not very worried about that, and I see no scientific reason for the timing of the recent fuss about it. If it’s not just another random fad, the only explanation I can see for that timing is influential scientists trying to hedge against Trump officials determining that “COVID was a lab leak”, in a way that doesn’t offend their colleagues. On the other hand, I do think artificial pathogens in general are a major concern, and even if I’m not very concerned about “mirror life”, there are no real benefits to trying to make it, so maybe just don’t.
I think this is a pretty good post that makes a point some people should understand better. There is, however, something I think it could’ve done better. It picks a particular Gaussian distribution for quality and a particular log-normal distribution for error, and the way that’s written sort of implies that those are natural and inevitable choices.
I would have preferred something like:
Suppose we determine that quality has distribution X and error has distribution Y. Here’s a graph of those superimposed. We can see that Y has more of a fat tail than X, so if measured quality is very high, we should expect that to be mostly error. But of course, the opposite case is also possible. Now then, here’s some basic info about when different probability distributions are good choices.
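The tail-dominance point above can be checked numerically. Here’s a minimal sketch, assuming (as an illustration only, not the original post’s exact parameters) that quality is standard Gaussian and error is log-normal, so error has the fatter tail; conditioning on a very high measured value, most of it turns out to be error.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Illustrative distribution choices (assumptions, not the post's):
# quality ~ Normal(0, 1), error ~ LogNormal(0, 1).
# The log-normal has a much fatter right tail than the Gaussian.
quality = rng.normal(0.0, 1.0, n)
error = rng.lognormal(0.0, 1.0, n)
measured = quality + error

# Condition on measured quality being extreme (top 0.1%).
top = measured >= np.quantile(measured, 0.999)

# Among those extreme measurements, the error component dominates:
# its mean is far larger than the mean quality component.
mean_quality_given_top = quality[top].mean()
mean_error_given_top = error[top].mean()
print(mean_quality_given_top, mean_error_given_top)
```

Swapping in a thinner-tailed error distribution (say, another Gaussian) reverses the conclusion for sufficiently extreme measurements, which is exactly why the choice of distributions deserves justification rather than being treated as inevitable.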