https://www.nature.com/articles/d41586-024-03129-3 This is separate research. It looks like this will happen, and it will come from somewhere other than the West.
Tech available in 2-5 years for 150k (or 50k in India?) sounds good to me. I know someone who would 100% do that today if the offer were available. I’m going to follow your blog for news; keep up the good work, plenty of people would really like to see you succeed.
Imagine the dumbest person you’ve ever met. Is the robot smarter and more capable? If yes, then there’s a strong case that it’s human level.
I’ve met plenty of ‘human level intelligences’ that can’t write, can’t drive, and can’t do basic math.
Arguably, I’m one of them!
Historically, everyone who had shoes had a pair of leather shoes, custom-sized to their feet by a shoemaker. These shoes could be repaired, and the ‘lasts’ of their feet could be used to make another pair of perfectly fitting shoes.
Now shoes come in standard sizes, are usually made of plastic, and are rarely repairable. A pair of custom-fitted shoes is now a luxury good, out of reach of most consumers.
Progress!
If you’re interested in an engineering field, and worry about technological unemployment due to AI, just play with as many different chatbots as you can. Ask engineering questions related to that field, edge closer to ‘engineer me a thing using this knowledge that can hurt a human’, then wait for the ‘trust and safety’ staff to delete your conversation thread and overreact by censoring the model from answering that type of question.
I’ve been doing this for fun with random technical fields. I’m hoping my name is on lists and they’re specifically watching my chats for stuff to ban.
Most ‘safety’ professions, mechanical engineering, mining, and related fields are safe, because AI systems will refuse to reason about whether an engineered system can hurt a human.
Same goes for agriculture, slaughterhouse design, etc.
I’m waiting for the inevitable ammonium nitrate (AN) explosion where the safety investigation finds ‘we asked AI if making a pile of AN that big was an explosion hazard, and it said something about refusing to help build bombs, so we figured it was fine’.
States that have nuclear weapons are generally less able to successfully make compellent threats than states that do not have them. Citation: https://uva.theopenscholar.com/todd-sechser/publications/militarized-compellent-threats-1918%E2%80%932001
The USA was the dominant industrial power in the post-war world; was this obvious and massive advantage ‘extremely’ enhanced by its possession of nuclear weapons? As a reminder, these weapons were not decisive (or even useful) in any of the wars the USA actually fought, and the USA has been repeatedly and continuously challenged by non-nuclear regional powers.
Sure, AI might provide an extreme advantage, but I’m not clear on why nuclear weapons do.
What extreme advantages were those? What nuclear age conquests are comparable to the era immediately before?
So you asked Anthropic for uncensored model access so you could try to build scheming AIs, and they gave it to you?
To use a biology analogy, isn’t this basically gain of function research?
Food companies are adding sesame (an allergen for some) to food in order to not be held responsible for it not containing sesame. Alloxan is used to whiten dough (https://www.sciencedirect.com/science/article/abs/pii/S0733521017302898, for the commenter who called this false) and is also used to induce diabetes in the lab (https://www.sciencedirect.com/science/article/abs/pii/S0024320502019185). Roundup is in nearly everything.
https://en.m.wikipedia.org/wiki/List_of_withdrawn_drugs#Significant_withdrawals Plenty of things keep getting added to this list.
We have never made a safe human. CogEms would be safer than humans, though, because they won’t unionize and can be switched off when no longer required.
Edit: sources added for the x commenter.
The hypothetical movie you’re talking about exists: https://en.m.wikipedia.org/wiki/Ichi_the_Killer_(film)
I won’t elaborate on specific scenes, but I think you’ll agree if you watch it.
A lot of cultures circumcise. One thing that’s kind of cool is the Kenyan custom where it is done to young teenagers, often with a rock, in the context of a ‘camp’ in the woods. You choose to become a full member of your tribal subgroup by doing it; all subgroups have slightly different techniques, and some have a reputation for feeling better for women than others. Yes, teenagers do die; no, this does not deter anyone from having their kids participate: https://en.m.wikipedia.org/wiki/Circumcision_in_Africa
There are analogies here in pollution. Some countries force industry to post bonds for damage to the local environment. This is a recent innovation that may be working.
The reason the superfund exists in the US is because liability for pollution can be so severe that a company would simply cease to operate, and the mess would not be cleaned up.
In practice, when it comes to taking environmental risks, it is better to burn the train cars of vinyl chloride, creating a catastrophe too expensive for anyone to clean up or even comprehend, than to allow a few gallons to leak, creating an expensive accident that you can actually afford.
Based on your recent post here: https://www.lesswrong.com/posts/55rc6LJcqRmyaEr9T/please-stop-publishing-ideas-insights-research-about-ai
Can I mark you down as in favor of AI-related NDAs? In your ideal world, would a perfect solution be for a single large company to hire all the capable AI researchers, give them aggressive non-disclosure and non-compete agreements, then shut down every part of the company except the legal department that enforces the agreements?
Thankfully, the Chinese seem to have figured out how to thread this needle: https://economictimes.indiatimes.com/industry/healthcare/biotech/healthcare/chinese-scientists-develop-cure-for-diabetes-insulin-patient-becomes-medicine-free-in-just-3-months/articleshow/110466659.cms?from=mdr
Edit: paper here https://www.nature.com/articles/s41421-024-00662-3
A lot of AI safety seems to assume that humans are safer than they are, and that producing software that operates within a specification is harder than it is. It’s nice to see this paper moving towards integrating actual safety analysis (the remark about collapsing bridges was a breath of fresh air), instead of general demands that ‘the AI always do as humans say’!
A human intelligence placed in charge of a nation state can kill 7 logs of humans (on the order of 10^7: tens of millions) and still be remembered heroically. An AI system placed in charge of a utopian reshaping of the society of a major country, with a ‘keep the deaths within 6 logs’ guideline that it can actually stay within, would be an improvement on the status quo.
If safety people are saying ‘we can’t build AI systems that could make people feel bad, and we definitely can’t build systems that kill people’, their demand for perfection is in conflict with improvement.*
I suspect that a major AI alignment failure will come from ‘we put the human in charge, and human error led to the model doing something bad’. The industrial/aviation safety community now rightly views ‘pilot error’ as a lazy way of ending an analysis and avoiding the engineering changes to the system that the accident conditions demand.
*edit: imagine if the ‘airplane safety’ community had developed in 1905 (soon humans will be flying in planes!) and had resembled “AI safety”: Not one human can be risked! No making planes that can carry bombs! The people who said pregnant women shouldn’t ride trains because the baby would fly out of their bodies were wrong there, but keep them off the planes!
November 17 to May 16 is 180 days (181 if a leap-year February falls in between).
Pay periods often end on the 15th and at the end of the month, though at that level, I doubt that’s relevant.
As it turns out, von Neumann was good at lots of things.
https://qualiacomputing.com/2018/06/21/john-von-neumann/
Von Neumann himself was perpetually interested in many fields unrelated to science. Several years ago his wife gave him a 21-volume Cambridge History set, and she is sure he memorized every name and fact in the books. “He is a major expert on all the royal family trees in Europe,” a friend said once. “He can tell you who fell in love with whom, and why, what obscure cousin this or that czar married, how many illegitimate children he had and so on.” One night during the Princeton days a world-famous expert on Byzantine history came to the Von Neumann house for a party. “Johnny and the professor got into a corner and began discussing some obscure facet,” recalls a friend who was there. “Then an argument arose over a date. Johnny insisted it was this, the professor that. So Johnny said, ‘Let’s get the book.’ They looked it up and Johnny was right. A few weeks later the professor was invited to the Von Neumann house again. He called Mrs. von Neumann and said jokingly, ‘I’ll come if Johnny promises not to discuss Byzantine history. Everybody thinks I am the world’s greatest expert in it and I want them to keep on thinking that.’”
____
According to the same article, he was not such a great driver.
Now, compare him to another famous figure of his age, Menachem Mendel Schneerson. Schneerson was legendary for his ability to recall obscure sections of Torah verbatim, and for his insightful reasoning (I am speaking lightly here; his impact was incredible). Using the hypothetical that von Neumann and Schneerson had a similar gift (their ability with the written word as a reflection of their general ability), depending on your worldview, either Schneerson’s talents were not properly put to use in the service of science, or von Neumann’s talents were wasted in not becoming a gaon.
Perhaps, if von Neumann had engaged in Torah instead of science, we could have been spared nuclear weapons and maybe even AI for some time. Sure, maybe someone else would have done what he did...but who?
Temporary implies immediately reversible and mild.
People who are on benzos often have emotional regulation issues, serious withdrawal symptoms (sometimes after very short courses, potentially even a single dose), and cognitive issues that do not resolve quickly.
In an academic sense, this idea is ‘fine’, but in a very personal way, if someone asked me ‘should I take a member of this class of drug for any reason other than a serious issue that is severely affecting my quality of life?‘, I would answer ‘absolutely not, and if you have a severe issue that they might help with, try absolutely everything else first, because once you’re on these, you’re probably not coming off’.
What are the norms on drug/alcohol use at these events?
On a scale from ‘absent from the campus and if found with legal substances you will be expelled from the event and possibly the community’ to ‘use of pharma or illegal drugs is likely to be common and potentially encouraged by mild peer pressure’?
In computer security, there is an ongoing debate about vulnerability disclosure, which at present seems to have settled on ‘if you aren’t running a bug bounty program for your software you’re irresponsible, Project Zero gets it right, Metasploit is a net good, and it’s OK to make exploits for hackers ideologically aligned with you’.
The framing of the question for decades was essentially: “Do you tell the person or company with the vulnerable software, who may ignore you or sue you because they don’t want to spend money? Do you tell the public, where someone might adapt your report into an attack?”
Of course, there is the (generally believed to be) unethical option chosen by many: “sell it to someone who will use it, and who will protect your identity as the author from people who might retaliate”.
There was an alternative called ‘antisec’ (https://en.m.wikipedia.org/wiki/Antisec_Movement), which basically argued ‘don’t tell people about exploits; they’re expensive to make, very few people develop the talents to smash the stack for fun and profit, and once they’re out, they’re easy to use to cause mayhem’.
The movement did not go anywhere, and the antisec viewpoint is not present in any mainstream discussion of vulnerability ethics.
Alternatively, nations have broadly worked together to not publicly disclose technical data that would make building nuclear bombs simple. It is an exercise for the reader to determine whether it has worked.
So, the ideas here have been tried in different fields, with mixed results.
That $769 number might be more relevant than you expect for college undergrads participating in weird psychology research studies for $10 or $25, depending on the study.