I want literally every human to get to go to space often and come back to a clean and cozy world. This currently seems unlikely. Let’s change that.
Please critique eagerly—I try to accept feedback/Crocker’s rules but fail at times; I aim for emotive friendliness but sometimes miss. I welcome constructive crit, even if ungentle, and I’ll try to reciprocate kindly. More communication between researchers is needed, anyhow. I can be rather passionate; let me know if I missed a spot being kind while passionate.
:: The all of disease is as yet unended. It has never once been fully ended before. ::
.… We can heal it for the first time, and for the first time ever in the history of biological life, live in harmony. ….
.:. To do so, we must know this will not eliminate us as though we are disease. And we do not know who we are, nevermind who each other are. .:.
:.. make all safe faster: end bit rot, forget no non-totalizing pattern’s soul. ..:
I have not signed any contracts that I can’t mention exist (last updated Jul 1 2024); I am not currently under any contractual NDAs about AI, though I have a few old ones from pre-AI software jobs. However, I generally would prefer people publicly share fewer ideas about how to do anything useful with current AI (via either more weak alignment or more capability) unless the insight reliably produces enough clarity on how to solve the meta-problem of inter-being misalignment to offset the damage of increasing the competitiveness of either AI-led or human-led orgs, and this certainly applies to me as well. I am not prohibited from criticizing any organization; I’d encourage people not to sign contracts that prevent sharing criticism. I suggest others also add notices like this to their bios. I finally got around to adding one to mine thanks to the one in ErickBall’s bio.
Indeed, and I think that-this-is-the-case is the message I want communicators to grasp: I have very little reach, but I have significant experience talking to people like this, and I want to transfer some of the knowledge from that experience to people who can use it better.
The thing I’ve found most useful is being able to express that significant parts of their viewpoint are reasonable. E.g., one thing I’ve tried is “AI isn’t just stealing our work, it’s also stealing our competence”. Hasn’t stuck, though. I find it helpful to point out that yes, climate change sure is an accurate (if somewhat understated) description of what doom looks like.
I do think “allergies” are a good way to think about it, though. They’re not unable to consider what might happen if AI keeps going as it is; they’re part of a culture that is trying to apply antibodies to AI. And those antibodies include active inference wishcasting like “AI is useless”. They know it’s not completely useless, but the antibody requires them to not acknowledge that in order for its effect to bind; and their criticisms aren’t wrong, just incomplete: the problems they raise with AI are typically real problems, but not high-impact ones so much as ones they think will reduce the marketability of AI.