teenager | mathematics enthusiast | MIT class of 2026 | vaguely Grey Triber | personal website: https://duck-master.github.io
duck_master
Newton – ACX Meetups Everywhere Spring 2025
Try to wean people off excessive reliance on LLMs. This is probably the biggest source of AI-related negative effects today. I am trying to do this myself (I formerly alternated between Claude, ChatGPT, and lmarena.ai several times a day), but it is hard.
(By AI-related risks I mean effects like people losing their ability to write originally or think independently.)
When I visited Manhattan, I realized that “Wall Street” and “Broadway” are not just overused clichés, but the names of actual streets (you can walk on them!)
I will probably have to leave by 6:30pm at the latest :|
When does this event end?
I am a bit sick today but the meetup will happen regardless.
Newton, MA, USA – ACX Meetups Everywhere Fall 2024
“Real summer”?
Actually, not going at all. Scheduling conflict.
(To organizer: Sorry for switching to “Can’t Go” and back; I thought this was on the wrong day. I might be able to make this.)
The single biggest question I have is “what is Dirichlet?”
I might come if the venue weren’t a bar
May 2024 Newton meetup???
To be fair, there is no evidence requirement for upvoting, either.
I could see why someone would want this (e.g., Reddit’s upvote/downvote system seems to be terrible), but I think LW is small and homogeneous-ish enough that it works okay here.
Newton – ACX Meetups Everywhere Spring 2024
Stop being surprised by the passage of time
“AI that can verify itself” seems likely to be doable, for reasons wholly unrelated to metamathematics (contrary to what you claim offhandedly): AIs are finite objects that nevertheless need to handle a combinatorially large space. This has the flavor of “searching a combinatorial explosion based on a small yet well-structured set of criteria” (i.e., the relatively easy instances of various NP problems), which has seen a fair bit of success with SAT/SMT solvers, nonconvex optimizers, evolutionary algorithms, and the like. I don’t think constructing a system that systematically explores the exponentially big input space of a neural network is going to be too hard a challenge.
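As an illustration of that flavor (a minimal sketch of my own, not anything from the post being replied to; it assumes the `z3-solver` Python package, and the toy network and property are invented for illustration), an SMT solver can verify a property of a small fixed-weight ReLU network over its entire input space, rather than spot-checking samples:

```python
# Sketch: exhaustively verify a property of a tiny hand-weighted ReLU
# "network" with the Z3 SMT solver. The network and property are made up.
from z3 import Real, If, Solver, And, unsat

x = Real("x")

# A one-neuron network: h = ReLU(2x - 1), y = 3h
h = If(2 * x - 1 > 0, 2 * x - 1, 0)
y = 3 * h

s = Solver()
s.add(And(x >= 0, x <= 1))  # input domain
s.add(y > 3)                # negation of the property "y <= 3 on [0, 1]"

# unsat means no counterexample exists anywhere in the input space,
# i.e. the property is verified rather than merely sampled.
print("verified" if s.check() == unsat else f"counterexample: {s.model()}")
```

The point of the sketch is the `unsat` result: the solver has ruled out counterexamples across a continuous input space, which is exactly the kind of well-structured exhaustive search gestured at above.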
Also, has anyone actually constructed a specific self-verifying theory yet? (From the English Wikipedia article, if I understand correctly, Dan Willard came up with a system where subtraction and division are the primitive operations, with addition and multiplication defined in terms of them, and where it is impossible to prove that multiplication is a total function.)
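For a concrete flavor (my own rendering based on the Wikipedia description, not Willard’s exact formalism), here is how addition can be recovered from primitive subtraction in such a system:

```latex
% Addition defined from the primitive subtraction operation:
\[ x + y = z \;\iff\; z - y = x \]
% Multiplication is defined from division analogously; the theory can
% use this definition but cannot prove that multiplication is total,
% which is what lets it evade the second incompleteness theorem.
```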
Speaking of MathML, are there other ways to put mathematical formulas into HTML? I know Wikipedia uses <math> and its own template {{math}} (here’s the help page), but I’m not sure about any others. There’s also LaTeX (which I think is the best program for putting mathematical formulas into text in general), as well as some other bespoke things in Google Docs and Microsoft Word that I don’t quite understand.
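For comparison (the quadratic formula here is just an arbitrary example of mine), this is what raw LaTeX input looks like; MediaWiki’s <math> tag accepts essentially the same syntax:

```latex
% Display-math formula as it would appear in a .tex source file:
\[ x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \]
```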
Make base models great again. I’m still nostalgic for GPT-2 or GPT-3. I can understand why RLHF was invented in the first place, but it seems to me that you could still train a base model so that, if it’s about to say something dangerous, it just prematurely cuts off the generation by emitting the <|endoftext|> token instead.
Alternatively, make models natively emit structured data. LLMs in their current form emit free-form arbitrary text, which needs to be parsed in all sorts of annoying ways to make it useful for downstream applications anyway. Structured output could also help with preventing misaligned behavior.
(I’m less confident in this idea than the previous one.)
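A minimal sketch of the cut-itself-off idea (assuming the Hugging Face `transformers` library; the prompt is arbitrary, and a real version would need the early-stopping behavior trained in rather than relying on whatever the stock model does):

```python
# Sketch: generation from a plain base model halts as soon as it emits the
# end-of-text token. The proposal amounts to training the model to emit
# this token early whenever the continuation would be dangerous.
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer("The quick brown fox", return_tensors="pt").input_ids

# generate() stops at <|endoftext|> (GPT-2's token id 50256) or at
# max_new_tokens, whichever comes first; no RLHF layer is involved.
output = model.generate(
    input_ids,
    max_new_tokens=40,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
)
print(tokenizer.decode(output[0]))
```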