I like Metz. I’d rather have EY, but that won’t happen.
This exactly. Having the Grey Lady report about AI risk is a huge step forward and probably decreased the chance of us dying by at least a little.
This is completely false, as well as irrelevant.
-
he did not “doxx” Scott. He was going to reveal Scott’s full name in a news article about him without permission, which is not by any means doxxing; it’s news reporting. News is important, and news organizations have a right to reveal the full names of public figures.
-
this didn’t happen, because Scott got the NYT to wait until he was ready before revealing his name.
-
the article on rationalism isn’t a “hit piece” even if it contains some things you don’t like. I thought it was fair and balanced.
-
none of this is relevant, and it’s silly to hold a grudge against a reporter over a years-old article you didn’t like when what matters now is this current article about AI risk.
-
Why do you think an LLM could become superhuman at crafting business strategies or negotiating? Or even writing code? I don’t believe this is possible.
oh wow, thanks!
She didn’t get the 5D thing—it’s not that the messengers live in five dimensions; they were just sending two-dimensional pictures of a three-dimensional world.
LLMs’ answers on factual questions are not trustworthy; they are often hallucinatory.
Also, I was obviously asking you for your views, since you wrote the comment.
Sorry, 2007 was a typo. I’m not sure how to interpret the ironic comment about asking an LLM, though.
OTOH, if you sent back “Attention is all you need”
What is so great about that 2007 paper?
People didn’t necessarily have a use for all the extra compute
Can you please explain the bizarre use of the word “compute” here? Is this a typo? “compute” is a verb. The noun form would be “computing” or “computing power.”
Yudkowsky makes a few major mistakes that are clearly visible now, like being dismissive of dumb, scaled, connectionist architectures
I don’t think that’s a mistake at all. Sure, they’ve given us impressive commercial products, but no progress towards AGI, so the dismissiveness is completely justified.
Maybe you’re an LLM.
there would be no way to glue these two LLMs together to build an English-to-Japanese translator such that training the “glue” takes <1% of the comput[ing] used to train the independent models?
Correct. They’re two entirely different models. There’s no way they could interoperate without massive amounts of computing and effectively building a new model.
(Aside: was that a typo, or did you intend to say “compute” instead of “computing power”?)
I don’t see why fitting a static and subhuman mind into consumer hardware from 2023 means that Yudkowsky doesn’t lose points for saying you can fit a learning (implied) and human-level mind into consumer hardware from 2008.
Because one has nothing to do with the other. LLMs are getting bigger and bigger, but that says nothing about whether a mind designed algorithmically could fit on consumer hardware.
Yeah, one example is the view that AGI won’t happen, either because it’s just too hard and humanity won’t devote sufficient resources to it, or because we recognize it will kill us all.
I really disagree with this article. It’s basically just saying that you drank the LLM Kool-Aid. LLMs are massively overhyped. GPT-x is not the way to AGI.
This article could have been written a dozen years ago. A dozen years ago, people were saying the same thing: “we’ve given up on the Good Old-Fashioned AI / Douglas Hofstadter approach of writing algorithms and trying to find insights! it doesn’t give us commercial products, whereas the statistical / neural network stuff does!”
And our response was the same as it is today. GOFAI is hard. No one expected to make much progress on algorithms for intelligence in just a decade or two. We knew in 2005 that if you looked ahead a decade or two, we’d keep seeing impressive-looking commercial products from the statistical approach, and the GOFAI approach would be slow. And we have, but we’re no closer to AGI. GPT-x only predicts the next words based on a huge corpus, so it gives you what’s already there. An average, basically. An impressive-looking toy, but it can’t reason or set goals, which is the whole idea here. GOFAI is the only way to do that. And it’s hard, and it’s slow, but it’s the only path going in the right direction.
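To make the “only predicts the next words” point concrete, here’s a minimal sketch of what an autoregressive LM actually does at each step. It assumes the Hugging Face `transformers` library and the public GPT-2 weights purely for illustration; the prompt text is my own example, not anything from the review.

```python
# Minimal sketch of "just predicting the next word" (assumes the Hugging Face
# `transformers` library and the publicly released GPT-2 weights; any
# autoregressive LM would illustrate the same point).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The best way to build a general intelligence is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    # The model's entire output: a score for every token in the vocabulary,
    # i.e. "which word is most likely to come next in text like this?"
    next_token_logits = model(input_ids).logits[0, -1]

top5 = torch.topk(next_token_logits, 5).indices
print([tokenizer.decode(int(t)) for t in top5])
# Prints the statistically most typical continuations from the training
# corpus -- an "average" of what's already there, not a reasoned plan.
```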
Once you understand that, you can see where your review errs.
-
Cyc—it’s funny that Hanson takes what you’d expect to be Yudkowsky’s view, and vice versa. Cyc is the correct approach. The only reason to doubt this is if you were expecting commercially viable results within a few years, which no one was. Win Hanson.
-
AI before ems—AI does not seem well on its way, so I disagree that there’s been any evidence one way or the other. Draw.
-
sharing cognitive content and improvements—clear win Yudkowsky. The neural network architecture is so common only for commercial reasons, not because it “won” or is more effective. And even if you look only at neural networks, you can’t share content or improvements between one and another. How do you share content or improvements between GPT and Stable Diffusion, for instance?
-
algorithms:
Yudkowsky seems quite wrong here, and Hanson right, about one of the central trends—and maybe the central trend—of the last dozen years of AI.
Well, that wasn’t the question, was it? The question was about AI progress, not what the commercial trend would be. The issue is that AI progress and the commercial trend are going in opposite directions. LLMs and throwing more money, data, and training at neural networks aren’t getting us closer to actual AGI. Win Yudkowsky.
But—regardless of Yudkowsky’s current position—it still remains that you’d have been extremely surprised by the last decade’s use of comput[ing] if you had believed him
No, no you would not. Once again, the claim is that GOFAI is the slow, less commercializable path but the only true path to AGI, while the statistical approach has given us, and will continue to give us, impressive-looking, commercializable toys and will monopolize research without taking us anywhere near real AGI. The last decade is exactly what you’d expect on this view. Not a surprise at all.
-
I don’t agree with that. Neutral-genie stories are important because they demonstrate the importance of getting your wish right. As yet, deep learning hasn’t taken us to AGI, and it may never; even if it does, we may still be able to make the resulting AIs want particular things or give them particular orders or preferences.
Here’s a great AI fable from the Air Force:
[This is] a hypothetical “thought experiment” from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation … “We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome,” Col. Tucker “Cinco” Hamilton, the USAF’s Chief of AI Test and Operations, said … “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI”
“We were training it in simulation to identify and target a Surface-to-air missile (SAM) threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” Hamilton said, according to the blog post.
He continued to elaborate, saying, “We trained the system–‘Hey don’t kill the operator–that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target”
Can you give me an example of an NLP “program” that influences someone, or link me to a source that discusses this in more detail? I’m interested but, as I said, skeptical, and looking for specifics.
I’d guess it was more likely to be emotional stuff relating to living with people who once had such control over you. I can’t stand living at my parents’ for very long either… it’s just stressful and emotionally draining.
What pragmatist said. Even if you can’t break it down step by step, can you explain what the mechanism was or how the attack was delivered? Was it communicated with words? If it was hidden, how did your friend understand it?
How did the attack happen? I’m skeptical.
I don’t see it as sneering at all.
I’m not sure what you mean by “senpai noticed me,” but I think it is absolutely critical, as AI becomes more familiar to hoi polloi, that prominent newspapers report on AI existential risk.
The fact that he even mentions EY as the one who started the whole thing warms my EY-fangirl heart—a lot of stuff on AI risk does not mention him.
I also have no idea what you mean about Clippy—how is it misunderstood? I think it’s an excellent way to explain.
Would you prefer this?
https://www.vice.com/en/article/4a33gj/ai-controlled-drone-goes-rogue-kills-human-operator-in-usaf-simulated-test