The pun conflates two meanings of “Brick and Mortar”, a brick-and-mortar store and a brick which is part of a building.
...it didn’t fail abysmally? Am I being silly? It correctly explains the first two puns and fails on the third.
No, it fails on both of those.
That is just plain wrong. That is not the double-sense meant: the double-sense is the name ‘Rick and Morty’ and the phrase ‘brick and mortar’, not the building type and what it’s made of!
The second one doesn’t even get the character names right: ‘the characters called “Brick and Mortar”’. If you think the characters are named ‘brick and mortar’, you have definitely misunderstood the joke.
A nice example of ‘humans who are not concentrating are not general intelligences’: I read most of the first explanation but didn’t read its last sentence properly, thought that GPT was doing an impressive job as always, and was also confused, since it seemed like a good explanation to me.
I was thinking precisely that myself, but I didn’t want to be rude to Gurkenglas by pointing it out.
I let it pass, even though its answer was not well formed, because it mentioned both the show and the type of store, so I judged that it saw all the relevant connections. I suppose you’re used to better form from it.
Feel free to be rude to me, I operate by Crocker’s rules :)
I don’t regard a bag-of-words match as sufficient to show it understood. I mean, would you say that if GPT-3 responded “61” to the question “10+6=”, it understands arithmetic correctly? It mentions both of the right digits, after all!
I might be a little more lenient if it had occasionally gotten some of the others right (perhaps despite my sampling settings it was still a bad sample - ‘sampling can show the presence of knowledge but not the absence’), or at least come close the way it does on very hard arithmetic problems when you format them correctly; but given how badly it performs on all of the other puns, in both generating and explaining them, it’s clear in which direction I should regress my estimate of that explanation’s quality toward the mean...
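To make the bag-of-words point concrete, here is a rough Python sketch (purely hypothetical - the grader and the example explanation are invented for illustration, not anything that was actually run in this exchange): a keyword-overlap check happily passes an explanation that name-checks both senses while getting the actual double meaning wrong, just as “61” contains both of the right digits for 10+6.

```python
def bag_of_words_grade(answer: str, required_keywords: list[str]) -> bool:
    """Naive grader: pass the answer if it merely mentions every required keyword."""
    text = answer.lower()
    return all(keyword.lower() in text for keyword in required_keywords)

# Invented example: an explanation that mentions both senses but botches the pun
# (it never notices that the point is the name 'Rick and Morty').
explanation = ("It refers to the show Rick and Morty and to a brick-and-mortar store, "
               "i.e. a store built out of bricks.")

# The keyword-overlap grader passes it anyway:
print(bag_of_words_grade(explanation, ["Rick and Morty", "brick-and-mortar store"]))  # True

# The same standard applied to arithmetic: '61' has both of the right digits for
# 10+6=16, but it is still the wrong answer.
print(set("61") == set("16"))  # True, yet 61 != 16
```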