We won’t let our lack of data stop us from running our analysis program!
Open source the most powerful, uncensored models immediately. A single actor leveraging AGI solely for their own ends is more dangerous for humanity than any other scenario.
That $769 number might be more relevant than you expect for college undergrads participating in weird psychology research studies for $10 or $25, depending on the study.
https://www.nature.com/articles/d41586-024-03129-3 is separate research. It looks like this will happen, and it will come from somewhere other than the West.
Tech available in 2-5 years for $150k (or $50k in India?) sounds good to me. I know someone who would 100% do that today if the offer were available. I’m going to follow your blog for news; keep up the good work, plenty of people would really like to see you succeed.
Imagine the dumbest person you’ve ever met. Is the robot smarter and more capable? If yes, then there’s a strong case that it’s human level.
I’ve met plenty of ‘human level intelligences’ that can’t write, can’t drive, and can’t do basic math.
Arguably, I’m one of them!
Historically, everyone who had shoes had a pair of leather shoes, custom sized to their feet by a shoemaker. These shoes could be repaired and the ‘lasts’ of their feet could be used to make another pair of perfectly fitting shoes.
Now shoes come in standard sizes, are usually made of plastic, and are rarely repairable. Finding a pair of custom fitted shoes is a luxury good out of reach of most consumers.
Progress!
If you’re interested in an engineering field and worry about technological unemployment due to AI, just play with as many different chatbots as you can. Ask engineering questions related to that field, edge closer to ‘engineer me a thing using this knowledge that can hurt a human’, then wait for the ‘trust and safety’ staff to delete your conversation thread and overreact by censoring the model from answering that type of question.
I’ve been doing this for fun with random technical fields. I’m hoping my name is on lists and they’re specifically watching my chats for stuff to ban.
Most ‘safety’ professions, along with mechanical engineering, mining, and related fields, are safe from automation, because AI systems will refuse to reason about whether an engineered system can hurt a human.
Same goes for agriculture, slaughterhouse design, etc.
I’m waiting for the inevitable ammonium nitrate (AN) explosion where the safety investigation finds ‘we asked the AI if making a pile of AN that big was an explosion hazard, and it said something about refusing to help build bombs, so we figured it was fine’.
States that have nuclear weapons are generally less able to successfully make compellent threats than states that do not. Citation: https://uva.theopenscholar.com/todd-sechser/publications/militarized-compellent-threats-1918%E2%80%932001
The USA was the dominant industrial power in the post-war world; was this obvious and massive advantage ‘extremely’ enhanced by its possession of nuclear weapons? As a reminder, these weapons were not decisive (or even useful) in any of the wars the USA actually fought, and the USA has been repeatedly and continuously challenged by non-nuclear regional powers.
Sure, AI might provide an extreme advantage, but I’m not clear on why nuclear weapons do.
What extreme advantages were those? What nuclear age conquests are comparable to the era immediately before?
So you asked Anthropic for uncensored model access so you could try to build scheming AIs, and they gave it to you?
To use a biology analogy, isn’t this basically gain of function research?
Food companies are adding sesame (an allergen for some) to food so that they can’t be held responsible for failing to keep it sesame-free. Alloxan is used to whiten dough (https://www.sciencedirect.com/science/article/abs/pii/S0733521017302898, for the commenter who claimed this is false), and is also used to induce diabetes in the lab (https://www.sciencedirect.com/science/article/abs/pii/S0024320502019185). Roundup is in nearly everything.
https://en.m.wikipedia.org/wiki/List_of_withdrawn_drugs#Significant_withdrawals plenty of things keep getting added to this list.
We have never made a safe human. CogEms would be safer than humans, though, because they won’t unionize and can be switched off when no longer required.
Edit: sources added for the x commenter.
The hypothetical movie you’re talking about exists: https://en.m.wikipedia.org/wiki/Ichi_the_Killer_(film)
I won’t elaborate on specific scenes, but I think you’ll agree if you watch it.
A lot of cultures circumcise. One thing that’s kind of cool is the Kenyan custom where it is done to young teenagers, often with a rock, in the context of a ‘camp’ in the woods. You choose to become a full member of your tribal subgroup by doing it; all subgroups have slightly different techniques, and some have a reputation for feeling better for women than others. Yes, teenagers do die; no, this does not deter anyone from having their kids participate: https://en.m.wikipedia.org/wiki/Circumcision_in_Africa
There are analogies here in pollution. Some countries force industry to post bonds for damage to the local environment; this is a recent innovation that may be working.
The reason the superfund exists in the US is because liability for pollution can be so severe that a company would simply cease to operate, and the mess would not be cleaned up.
In practice, when it comes to taking environmental risks, it is better to burn the train cars of vinyl chloride, creating a catastrophe too expensive for anyone to clean up or even comprehend, than to allow a few gallons to leak, creating an expensive accident that you can actually afford.
Based on your recent post here: https://www.lesswrong.com/posts/55rc6LJcqRmyaEr9T/please-stop-publishing-ideas-insights-research-about-ai
Can I mark you down as in favor of AI-related NDAs? In your ideal world, would the perfect solution be for a single large company to hire all the capable AI researchers, give them aggressive non-disclosure and non-compete agreements, and then shut down every part of the company except the legal department that enforces the agreements?
Thankfully, the Chinese seem to have figured out how to thread this needle: https://economictimes.indiatimes.com/industry/healthcare/biotech/healthcare/chinese-scientists-develop-cure-for-diabetes-insulin-patient-becomes-medicine-free-in-just-3-months/articleshow/110466659.cms?from=mdr
Edit: paper here https://www.nature.com/articles/s41421-024-00662-3
A lot of AI safety seems to assume that humans are safer than they are, and that producing software that operates within a specification is harder than it is. It’s nice to see this paper moving towards integrating actual safety analysis (the remark about collapsing bridges was a breath of fresh air), instead of general demands that ‘the AI always do as humans say’!
A human intelligence placed in charge of a nation state can kill 7 logs (10^7, tens of millions) of humans and still be remembered heroically. An AI system placed in charge of a utopian reshaping of the society of a major country with a ‘keep the deaths within 6 logs (10^6)’ guideline that it can actually stay within would be an improvement on the status quo.
If safety people are saying ‘we can’t build AI systems that could make people feel bad, and we definitely can’t build systems that kill people’, their demand for perfection is in conflict with improvement.*
I suspect that the major AI alignment failures will come from ‘we put the human in charge, and human error led to the model doing something bad’. The industrial/aviation safety community now rightly views ‘pilot error’ as a lazy way of ending an analysis and avoiding the engineering changes to the system that the accident conditions demand.
*edit: imagine if the ‘airplane safety’ community had developed in 1905 (soon humans will be flying in planes!) and had resembled “AI safety”: Not one human can be risked! No making planes that can carry bombs! The people who said pregnant women shouldn’t ride trains because the baby would fly out of their bodies were wrong there, but keep them off the planes!
November 17 to May 16 is 180 days.
Pay periods often end on the 15th and end of the month, though at that level, I doubt that’s relevant.
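For anyone who wants to check the arithmetic, here is a minimal sketch using Python’s datetime. The years are arbitrary examples I picked so that the intervening February has 28 days; a leap-year February would make it 181:

```python
from datetime import date

# Count the days from November 17 to the following May 16.
# 2022-2023 is an assumed example span with a 28-day February.
start = date(2022, 11, 17)
end = date(2023, 5, 16)
print((end - start).days)  # 180
```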
As it turns out, von Neumann was good at lots of things.
https://qualiacomputing.com/2018/06/21/john-von-neumann/
Von Neumann himself was perpetually interested in many fields unrelated to science. Several years ago his wife gave him a 21-volume Cambridge History set, and she is sure he memorized every name and fact in the books. “He is a major expert on all the royal family trees in Europe,” a friend said once. “He can tell you who fell in love with whom, and why, what obscure cousin this or that czar married, how many illegitimate children he had and so on.” One night during the Princeton days a world-famous expert on Byzantine history came to the Von Neumann house for a party. “Johnny and the professor got into a corner and began discussing some obscure facet,” recalls a friend who was there. “Then an argument arose over a date. Johnny insisted it was this, the professor that. So Johnny said, ‘Let’s get the book.’ They looked it up and Johnny was right. A few weeks later the professor was invited to the Von Neumann house again. He called Mrs. von Neumann and said jokingly, ‘I’ll come if Johnny promises not to discuss Byzantine history. Everybody thinks I am the world’s greatest expert in it and I want them to keep on thinking that.’”
____
According to the same article, he was not such a great driver.
Now, compare him to another famous figure of his age: Menachem Mendel Schneerson. Schneerson was legendary for his ability to recall obscure sections of Torah verbatim, and for his insightful reasoning (I am speaking lightly here; his impact was incredible). Granting the hypothetical that von Neumann and Schneerson had a similar gift (taking their ability with the written word as a reflection of their general ability), then depending on your worldview, either Schneerson’s talents were not properly put to use in the service of science, or von Neumann’s talents were wasted in his not becoming a gaon.
Perhaps, if von Neumann had engaged in Torah instead of science, we could have been spared nuclear weapons and maybe even AI for some time. Sure, maybe someone else would have done what he did...but who?
I finished high school (with a community college class in my senior year) shortly before my sixteenth birthday and went to a local college on a full scholarship. I spent five years in college (I hated school, but loved JSTOR), graduated, and got my dream job in 2009.
The path to doing it is different for everyone, and local factors (school system) will be critical.
Professionally, I recently had the pleasure of being assigned to a team with two others who had also started college at sixteen. It was fun, and it felt nice to work with two other members of this weird club. We compared notes and concluded that our experiences were a bit odd, though overall good, and that we are happy with where we ended up.
The ‘but what about social stuff’ concern is overblown; every teenager has to figure a lot out, and I don’t think there was a world where I wasn’t awkward, grade skipping or not.
A few years ago I saw that the longitudinal studies on grade skippers are out: kids who can skip generally reach ‘average success in their chosen field’; kids who could have skipped, but did not, have a lower chance of finishing high school at all, and usually have worse outcomes on all measures, including social ones. This matches my own story, and my experience with others.
If you can skip ahead, you should.