I am generally not thrilled with humor where the joke is how badly you can strawman your opponent. You also end up with Schroedinger’s comedian, a phenomenon common among real-world comedians too: the comedian is making an insightful comment about a real-world issue, right up until someone points out that they are saying something flawed or outright false, at which point the reply becomes “it’s just humor; it doesn’t have to be accurate”.
I don’t think a lot of the jokes are strawmen, any more than I think all of the jokes in the original ‘Impossibility’ paper are strawmen. (The power consumption ones might as well have been copy-pasted straight from the original criticisms of GPT-3 from many award-winning academics back in 2020–2023, for all that Claude-3 wound up modifying them.) The arguments by people like Penrose against strong AI really were that bad, relying on insane premises like ‘humans are logically omniscient and never make mistakes’ that have no steelman. And there have been countless equally rubbish arguments made in all seriousness by serious people against AI scaling. It is not strawmanning to point out how breathtakingly bad those were. (Remember when image-gen models were useless because ‘they could never generate a character consistently’? Or ‘they will never generate realistic hands’? Or ‘the fact that they can’t generate text inside images proves deep learning has hit a wall’?)
In any case, the main interest of this demo is in showing how easy it is now to generate a pretty coherent and not-too-uncreative/ChatGPTesque 23,000+ word essay by scaffolding an inner-monologue prompt with a cheap, publicly-accessible, long-context-window LLM—which is a long way from the 1,000-word GPT-3 attempts I made 4 years ago. (It was a trial run for a poetry project I hope will be more worth reading on its own merits than merely as a tech demo.)
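The scaffolding itself is simple in spirit: keep the model’s accumulated draft in the context window and repeatedly ask it to continue, so coherence comes from the long context rather than any cleverness in the loop. A minimal sketch in Python—`call_llm` here is a stand-in for whatever chat-completion API you use, and all names and parameters are illustrative, not the actual setup used for the essay:

```python
def scaffold_essay(call_llm, outline, max_rounds=3):
    """Grow an essay iteratively: each round feeds everything written
    so far back into a long-context model and asks for a continuation."""
    transcript = [f"Outline:\n{outline}"]
    for _ in range(max_rounds):
        context = "\n\n".join(transcript)
        # The model sees the full draft each time, which is what lets a
        # long-context LLM stay coherent across tens of thousands of words.
        continuation = call_llm(
            f"{context}\n\nContinue the essay from where it left off:"
        )
        if not continuation.strip():
            break  # empty continuation = model signals it is finished
        transcript.append(continuation)
    return "\n\n".join(transcript[1:])  # drop the outline, keep the essay

# A toy stand-in LLM so the sketch runs without any API key: it emits one
# numbered section per call until three sections exist, then stops.
def toy_llm(prompt):
    n = prompt.count("Section")
    return f"Section {n + 1}: ..." if n < 3 else ""

essay = scaffold_essay(toy_llm, "An essay in three sections.")
```

The same loop works with any real API client dropped in as `call_llm`; the only real requirement is a context window large enough to hold the whole draft plus the next continuation.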
The power consumption ones might as well have been copy-pasted straight from the original criticisms of GPT-3 from many award-winning academics back in 2020-2023 for all that Claude-3 wound up modifying them.
Classic singularitarian, simply ignoring inconvenient evidence. (Your cult leader hero Eliezer Yudkowsky predicted the singularity would have occurred by 2021 in 1996. In 2005, shadow demon Ray Kurzweil predicted AGI by 2029. Notice a pattern? Just like fusion power and mind uploading, the singularity is always about 20 years away.)
Image generation still struggles with hands, text, and consistency. (AI “art” has failed every time it’s tried to compete with artists. They’re getting laid off because AI is cheaper than stock images and better than nothing.) Google’s most “advanced” text predictor just told millions of people to eat rocks and put glue on pizza. (Maybe that’s the eeevil superintelligence trying to destroy us?)
GPT-3 was around in 2018 and GPT-4, Claude etc. are minuscule improvements. So much for “exponential progress”—in fact, AI is getting dumber and we are running out of data to fix it. AI writing is still trash regardless of the model and it is incredibly easy to tell. Yes, that includes your “essay”. I’ve read it. (Hint: If AI was really that good, people would be using it for more than spam websites. And no, AI is not being adopted in the workplace, even after three years of you guys shoving Blockchain 2.0 down everyone’s throat. Don’t believe everything OpenAI’s marketing tells you.)
The oncoming AIpocalypse looks more like the Willy Wonka Experience than Terminator to me. It indeed poses a serious threat to our society, but only through sheer incompetence.