or can read interview transcripts in much less time than listening to a podcast would take.
This always baffles me. :) Guess I’m both a slow reader and a fast listener, but for me audio allows for easily 3x as much speed as reading.
So what made you change your mind?
It’s interesting how two years later, the “buy an expert’s time” suggestion is almost outdated. There are still situations where it makes sense, but probably in the majority of situations any SOTA LLM will do a perfectly fine job giving useful feedback on exercises in math or language learning.
Thanks for the post!
The puzzle does not include any question or prompt. What does “try it out” mean exactly? I suppose it means “figure out how the notation works”, or am I missing something? (I didn’t read the rest to not get spoiled)
I guess a related pattern is the symmetric case where people talk past each other because both sides are afraid their arguments won’t get heard, so they both focus on repeating their arguments and nobody really listens (or maybe they do, but not in a way that convinces the other person they really got their argument). So there, too, I agree with your advice—taking a step back and repeating the other person’s viewpoint seems like the best way out of this.
Some further examples:
Past me might have said: Apple products are “worse” because they are overpriced status symbols
Many claims in politics, say “we should raise the minimum wage because it helps workers”
We shouldn’t use nuclear power because it’s not really “renewable”
When AI lab CEOs warn of AI x-risk, we can dismiss that because they might just want to build hype
AI cannot be intelligent, or dangerous, because it’s just matrix multiplications
One shouldn’t own a cat because it’s an unnatural way for a cat to live
Pretty much any any-benefit mindset that makes it into an argument rather than purely existing in a person’s behavior
It certainly depends on who’s arguing. I agree that some sources online see this trade-off and end up on the side of not using flags after some deliberation, and I think that’s perfectly fine. But this describes only a subset of cases, and my impression is that very often (and certainly in the cases I experienced personally) it is not even acknowledged that usability, or anything else, may also be a concern that should inform the decision.
(I admit though that “perpetuates colonialism” is a spin that goes beyond “it’s not a 1:1 mapping” and is more convincing to me)
This makes me wonder, how could an AI figure out whether it had conscious experience? I always used to assume that from first person perspective it’s clear when you’re conscious. But this is kind of circular reasoning as it assumes you have a “perspective” and are able to ponder the question. Now what does a, say, reasoning model do? If there is consciousness, how will it ever know? Does it have to solve the “easy” problem of consciousness first and apply the answer to itself?
In no particular order, because interestingness is multi-dimensional and they are probably all to some degree on my personal interesting Pareto frontier:
Almost everything is causally linked; saying “A has no effect on B” is almost always wrong (unless you very deliberately search for A and B that fundamentally cannot be causally linked). If you ran a study with a bazillion subjects for long enough, practically anything you can measure would reach statistical significance (see the small simulation sketch after this list)
Many disagreements are just disagreements about labels (“LLMs are not truly intelligent”, “Free will does not exist”) and can be easily resolved / worked around once you realize this (see also)
Selection biases of all kind
Intentionality bias, it’s easy to explain human behavior with supposed intentions, but there is much more randomness and ignorance everywhere than we think
Extrapolations tend to work locally, but extrapolating further into the future very often gets things wrong; kind of obvious, applies e.g. to resource shortages (“we’ll run out of X and then there won’t be any X anymore!”), but also to Covid (I kind of assumed Covid cases would just climb exponentially until everything went to shit, and forgot to take into account that people would get afraid and change their behavior on a societal scale, at least somewhat, and politicians would eventually do things, even if later than I would), and somewhat to AI (we likely won’t just “suddenly” end up with a flawless superintelligence)
“If only I had more time/money/whatever” style thinking is often misguided, as often when people say/think this, the sentence continues with “then I could spend that time/money/whatever in other/more ways than currently”, meaning as soon as you get more of X, you would immediately want to spend it, so you’ll never sustainably end up in a state of “more X”. So better get used to X being limited and having to make trade-offs and decisions on how to use that limited resource rather than daydreaming about a hypothetical world of “more X”. (This does not mean you shouldn’t think about ways to increase X, but you should probably distance yourself from thinking about a world in which X is not limited)
Taleb’s Extremistan vs Mediocristan model
+1 to Minimalism that lsusr already mentioned
The mindblowing weirdness of very high-dimensional spaces
Life is basically an ongoing coordination problem between your past/present/future selves
The realization that we’re not smart enough to be true consequentialists, i.e. consequentialism is somewhat self-defeating
The teleportation paradox, and thinking about a future world in which a) teleportation is simply a necessity for being successful in society (and/or there is just social pressure, e.g. all your friends do it and you get excluded from doing cool things if you don’t join in) and b) anyone who has teleported before has convincing memories of going through teleportation and coming out on the other side. In such a world, anyone with worries about teleportation would basically be screwed. Not sure if I should believe in any kind of continuity of consciousness, but that certainly feels like a thing. So I’d probably prefer not to be forced to give that up just because the societal trajectory happens to lead through ubiquitous teleportation.
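(The simulation sketch mentioned above, for the “everything reaches significance with enough data” point: a minimal Python illustration of my own, with a made-up group difference of 0.01 standard deviations, i.e. practically nothing. Even that effect gets an arbitrarily small p-value once the sample is large enough.)

```python
# Minimal sketch (assumed numbers, not from the comment above): a tiny,
# practically meaningless group difference of 0.01 standard deviations
# becomes "statistically significant" once the sample is large enough.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.01  # difference between groups, in standard deviations

for n in (1_000, 100_000, 10_000_000):
    a = rng.normal(0.0, 1.0, n)          # control group
    b = rng.normal(true_effect, 1.0, n)  # "treatment" group, shifted by 0.01 sd
    _, p = stats.ttest_ind(a, b)         # two-sample t-test
    print(f"n = {n:>10,}   p = {p:.2g}")

# Typically: p is unremarkable at n = 1,000, borderline at n = 100,000,
# and astronomically small at n = 10,000,000, even though a 0.01 sd
# effect is irrelevant for any practical purpose.
```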
Random thought: maybe (at least pre-reasoning-models) LLMs are RLHF’d to be “competent” in a way that makes them less curious & excitable, which greatly reduces their chance of coming up with (and recognizing) any real breakthroughs. I would expect though that for reasoning models such limitations will necessarily disappear and they’ll be much more likely to produce novel insights. Still, scaffolding and lack of context and agency can be a serious bottleneck.
Interestingly, the text to speech conversion of the “Text does not equal text” section is another very concrete example of this:
The TTS AI summarizes the “Hi!” ASCII art picture as “Vertical lines arranged in a grid with minor variations”. I deliberately added an alt text to that image, describing what can be seen, and I expected that this alt text would be used for TTS—but seemingly that is not the case, and instead some AI describes the image in isolation. If I were to describe that image without any further context, I would probably mention that it says “Hi!”, but I grant that describing it as “Vertical lines arranged in a grid with minor variations” would also be a fair description.
The “| | | |↵|-| | |↵| | | o” string is read out as “dash O”. I would have expected the AI to just read that out in full, character by character. Which probably is an example of me falsely taking my intention as a given. There are probably many conceivable cases where it’s actually better for the AI to not read out cryptic strings character by character (e.g. when your text contains some hash or very long URL). So maybe it can’t really know that this particular case is an exception.
But what you’re probably not aware of is that 0.8% of the US population ends up dying due to intentional homicide
That is an insane statistic. According to a bit of googling this indeed seems plausible, but would still be interested in your source if you can provide it.
Downvoted for 3 reasons:
The style strikes me as very AI-written. Maybe it isn’t—but the very repetitive structure looks exactly like the type of text I tend to get out of ChatGPT much of the time. Which makes it very hard to read.
There are many highly superficial claims here without much reasoning to back them up. Many claims of what AGI “would” do without elaboration. “AGI approaches challenges as problems to be solved, not battles to be won.”—first, why? Second, how does this help us when the best way to solve the problem involves getting rid of humans?
Lastly, I don’t get the feeling this post engages with the most common AI safety arguments at all. Nor does it engage with evidence from recent AI developments. How do you expect “international agreements” with any teeth in the current arms race, when we don’t even get national or state level agreements? While Bing/Sydney was not an AGI, it clearly showed that much of what this post dismisses as anthropocentric projections is realistic and, currently, maybe even the default of what we can expect of AGI as long as it’s LLM-based. And even if you dismiss LLMs and think of more “Bostromian” AGIs, that still leaves you with instrumental convergence, which blows too many holes in this piece to leave anything of much substance.
Or, as a possibly more concrete prompt if preferred: “Create a cost-benefit analysis for EU directive 2019/904, which demands that bottle caps of all plastic bottles are to remain attached to the bottles, with the intention of reducing littering and protecting sea life.
Output:
key costs and benefits table
economic cost for the beverage industry to make the transition
expected change in littering, total over first 5 years
QALYs lost or gained for consumers throughout the first 5 years”
In the EU there’s some recent regulation about bottle caps being attached to bottles, to prevent littering. (this-is-fine.jpg)
Can you let the app come up with a good way to estimate the cost benefit ratio of this piece of regulation? E.g. (environmental?) benefit vs (economic? QALY?) cost/drawbacks, or something like that. I think coming up with good metrics to quantify here is almost as interesting as the estimate itself.
For a long time, I used to wonder what causes people to consistently mispronounce certain words even when they are exposed to many people pronouncing them correctly. (which mostly applies to people speaking in a non-native language, e.g. people from continental Europe speaking English)
Some examples that I’ve heard from different people around me over the years:
Saying “rectangel” instead of “rectangle”
Saying “pre-purr” (like prefer, but with a p) instead of “prepare”
Saying something like, uhh, “devil-oupaw” instead of “developer”
Saying “leech” instead of “league”
Saying “immu-table” instead of “immutable”
Saying “cyurrently” instead of “currently”
I did, of course, understand that if you only read a word, particularly in English where pronunciations are all over the place and often unpredictable, you may end up with a wrong assumption of how it’s pronounced. This happened to me quite a lot[1]. But then, once I did hear someone pronounce it, I usually quickly learned my lesson and adopted the correct way of saying it. But still I’ve seen all these other people stick to their very unusual pronunciations anyway. What’s up with that?[2] Naturally, it was always too awkward for me to ask them directly, so I never found out.
Recently, however, I got a rather uncomfortable insight into how this happens when a friend pointed out that I was pronouncing “dude” incorrectly, and have apparently done so for all my life, without anyone ever informing me about it, and without me noticing it.
So, as I learned now, “dude” is pronounced “dood” or “dewd”. Whereas I used to say “dyood” (similar to duke). And while I found some evidence that dyood is not completely made up, it still seems to be very unusual, and something people notice when I say it.
Hence I now have the, or at least one, answer to my age-old question of how this happens. So, how did I never realize? Basically, I did realize that some people said “dood”, and just took that as one of two possible ways of pronouncing that word. Kind of, like, the overly American way, or something a super chill surfer bro might say. Whenever people said “dood” (which, in my defense, didn’t happen all that often in my presence[3]) I had this subtle internal reaction of wondering why they suddenly saw the need to switch to such a heavy accent for a single word.
I never quite realized that practically everyone said “dood” and I was the only “dyood” person.
So, yeah, I guess it was a bit of a trapped prior and it took some well-directed evidence to lift me out of that valley. And maybe the same is the case for many of the other people out there who are consistently mispronouncing very particular words.
But, admittedly, I still don’t wanna be the one to point it out to them.
And when I lie awake at night, I wonder which other words I may be mispronouncing with nobody daring to tell me about it.
e.g., for some time I thought “biased” was pronounced “bee-ased”. Or that “sesame” was pronounced “see-same”. Whoops. And to this day I have a hard time remembering how “suite” is pronounced.
Of course one part of the explanation is survivorship bias. I’m much less likely to witness the cases where someone quickly corrects their wrong pronunciation upon hearing it correctly. Maybe 95% of cases end up in this bucket that remains invisible to me. But still, I found the remaining 5% rather mysterious.
Maybe they were intimidated by my confident “dyood”s I threw left and right.