Haha, yet more context I didn’t have much probability of understanding
I work in C# almost exclusively, so I’ve never used an LLM with the expectation that it would run the code itself. I usually explicitly specify what language and form of response I need: “Generate a C# <class/method/LINQ statement> that does x, y, and z in this way with parameters a, b, and c.”
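For instance, a prompt like “Generate a C# method that filters orders by status, minimum total, and start date” might come back as something like the sketch below (the Order type, names, and criteria are all hypothetical, invented purely for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical domain type, invented purely for this illustration.
public record Order(string Status, decimal Total, DateTime PlacedOn);

public static class OrderQueries
{
    // Filters orders by status (a), minimum total (b), and earliest date (c).
    public static List<Order> Filter(
        IEnumerable<Order> orders, string status, decimal minTotal, DateTime since) =>
        orders
            .Where(o => o.Status == status && o.Total >= minTotal && o.PlacedOn >= since)
            .ToList();
}
```

Being that specific about the signature and parameters tends to constrain the model enough that the response drops into the codebase with minimal edits.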
I see.
Maybe.
However much I can at this inferential distance
It’s funny how things like matrices are critical for some types of coding (AI, statistics, etc.) but completely unnecessary for others. As a software developer not working in AI or statistics, I’ve never had them come up once, though perhaps I would’ve been able to spot potential use cases if I had that background.
Similar to how I frequently see use cases for SQL where inferior options are being used in the wild[1].

[1] To whom it may concern: Please stop using Excel like that. It’s a crime against humanity, performance, and good data.
Because it’s absurdly addictive, although it’s certainly possible to play it responsibly.
It was partly a joke, partly serious, because I personally have a difficult time self-regulating if I let myself play it.
As someone with coding expertise but very little knowledge of math terminology, and without looking up any of the terms mentioned:
I can tell there is a joke here. I cannot tell where the joke is, because I don’t have a solid enough understanding of what I can only assume are either made-up terms or terms from matrix algebra (and/or whatever related fields are indicated; an annoying part of learning these sorts of things is that you don’t even know enough to identify the precise field being used).
Did you make up some of those terms to make this a trick question?
After posting this comment to record my confusion I will then allow myself to search for those terms and find out how good or bad my guesses are.
My penalty for being wrong is that everyone gets to laugh at me in proportion to how far off I am.
Adrenaline junkies should not be involved in building AGI, any more than they should be commercial pilots or bus drivers. (Less, even.)
To follow the pattern of “Those with a large built-in incentive for X shouldn’t be in charge of X”:
Ambitious people shouldn’t be handed power
Kids shouldn’t decide the candy budget
Engineers shouldn’t play Factorio
Unfortunately, with few exceptions, those make up a large portion of the primary interested parties.
Best of luck keeping them away for long.
Not sarcasm. I hope we succeed. But the incentives are stacked to make it difficult.
This is made more difficult because a large portion of those running trials do not do the data management and/or analysis in-house, instead outsourcing those tasks to CROs (Contract Research Organizations). Inter-organization communication barriers certainly don’t make the disconnect any easier to resolve.
I was an observer for the conversations that (I suspect) contributed to your opinion here. My perspective is that it was in large part a difference in communication style preferences rather than an object-level disagreement. He seems to enjoy the catharsis of being able to emphatically state positions that are not politically correct in general discourse, which is a sentiment I understand. I don’t recall him responding with anything I would classify as insults or vitriol, though those are to some degree subjective.
One person’s insult is another’s friendly banter, and I suspect he didn’t realize you took as the former what he had meant as the latter.
Would I be correct if I summarized your opinion as “He doesn’t treat controversial topics with enough tact and diplomacy” rather than as a set of specific factual or epistemic disagreements?
If his presence is the only thing stopping you from wanting to go, why not reach out to him? I suspect you’d be able to amicably smooth things over.
A related idea: for LessOnline, would it be useful to start a norm where, if a debate becomes excessively charged, any participant can ask for it to be put on hold so that a time can be set aside to discuss it productively in a more structured setting (i.e., with a mutually agreed-upon impartial moderator)?
Has someone made Manifold markets for these predictions? (As of writing this comment I have not found any and I would rather not do it myself since I don’t typically keep tabs on those respective metrics.)
People wouldn’t let there be things constantly competing for their attention, so the future won’t be like that, he says.
Sufficiently absurd news is indistinguishable from satire? An approximate corollary to “sufficiently advanced satire is indistinguishable from news.”
Oops, I was unclear[1]: when I said “all algorithmic feed platforms” I was referring to those with opaque, large-scale, engagement-incentivized algorithms. Merely having an algorithmically parsed feed was not the sole load-bearing attribute.
Platforms like this aren’t directly dependent on mass engagement in the same way: there’s no ad revenue, minimum standards of quality are enforced, etc.
If it ever became big enough/moderation policies changed enough I would have to find other sites.
In its current state I don’t mind LessWrong’s feed. The rate of content generation is small enough that I eventually see everything either way.

[1] It’s rare for me to be insufficiently pedantic.
A few years ago I distanced myself from all algorithmic feed platforms and I’ve certainly found it to be worth it. A few Discord servers and some forums like LW are effectively my “social media” and they are quite sufficient for the purpose.
This was interesting, but I notice I’m confused about what the goal was.
I’m also confused why anyone would want any more numbers prefixing/suffixing their username than the minimum required to claim one that hasn’t been taken.

Suspicion/potential spoiler: Wait a minute… is she an AI roleplaying a human?
Necessity may be the mother of invention, but at this point we’re pretty sure the father was Laziness.
For complex topics on which I do not have deep knowledge (e.g. AI Alignment), I find my opinion is easily swayed by any sufficiently well-written, plausible-sounding argument. So I recognize that I lack the knowledge and perspective needed to add value to the discussion, and I purposefully avoid making confident claims on the subject until if and when I decide to dedicate significant effort to closing the inferential distance.
In a similar vein, how does Spirulina look? I hear it is very efficient in terms of protein per square meter per year compared to using the same space to raise grazing animals.
I’ve had similar experiences.
For me personally, in cases where the Technical Truth:
- is not my business: go ahead and lie to me and/or omit sensitive details if possible.
- is a much more complex thing that I likely don’t have the foundational understanding to grasp: tell me a portion and then check for comprehension; if I fail that, just say some vague “it’s complicated” and give me some ideas of what to study if I really want to know.
- would probably be disturbing for me to know, and I am not likely to be negatively affected by not knowing: you can lie to me or omit some details. Alternatively, ask me what reference classes of things I would want to not be informed about.
- would be likely to cause significant harm in my hands or the hands of those I would likely tell it to: obviously lie or omit.
After reflection, the situations where I would mind being lied to are those where my future actions are contaminated by reliance on incorrect data. If the lie will not meaningfully affect my future actions, I probably don’t care. Although it’s obviously not feasible to accurately predict all the future actions I might take and why, giving it your best guess is usually sufficient, since most conversations are trivial and irrelevant, particularly small talk.
As the topic of conversation becomes more consequential, the importance of accuracy increases as well.
This seems like a somewhat difficult use case for LLMs. It may be a mistake to think of them as a database of the *entire contents* of the training data. Perhaps instead think of them as compressed amalgamations of the general patterns in the training data? I’m not terribly surprised that random obscure quotes can get optimized away.
We should have a game where we create a list of interesting questions and have a few notable writers here answer them, but then also generate some responses from LLMs (with prompts tailored to getting a less obviously-AI response).
Writers would get points for how well they fool people, and it has all sorts of fun mind games like:
“This has an AI-smelling mistake, but is it the human faking a mistake they know an AI might make?”
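As a rough sketch of how scoring might work (all of this is my own assumption; the types and the one-point-per-misclassification rule are invented for illustration):

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical types for the game, invented for illustration.
public record Answer(string AuthorId, bool WrittenByHuman);
public record Guess(Answer Target, bool GuessedHuman);

public static class Scoring
{
    // Assumption: an author scores one point for every reader who
    // misclassifies one of their answers (human mistaken for AI, or vice versa).
    public static Dictionary<string, int> FoolingScores(IEnumerable<Guess> guesses) =>
        guesses
            .Where(g => g.GuessedHuman != g.Target.WrittenByHuman)
            .GroupBy(g => g.Target.AuthorId)
            .ToDictionary(grp => grp.Key, grp => grp.Count());
}
```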
Government is also reliant on its citizens to not violently protest, which would happen if it got to the point you describe.
The idealist in me hopes that eventually those with massive gains in productivity/wealth from automating everything would want to start doing things for the good of humanity™, right?
…Hopefully that point comes long before large-scale starvation.
The Symbolic Representation of good software is often what is wanted. Not good software.