How many are excited and aiming for 3+ children?
Given modern technology and old style community, raising 5--7 would be a joy, IDK what you’re talking about. (Not a parent, could be wrong.)
I’d probably allow something like the synthetic data generation used for AlphaGeometry (Fig. 3) except in base ZFC and giving away very little human math inside the deduction engine
IIUC yeah, that definitely seems fair; I’d probably also allow various other substantial “quasi-mathematical meta-ideas” to seep in, e.g. other tricks for self-generating a curriculum of training data.
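To make "self-generating a curriculum of training data" a bit more concrete, here is a toy sketch (entirely my own construction, not the actual AlphaGeometry pipeline; the rule set and names are made up): sample random premises, saturate them with a small forward-deduction engine, and record each derived fact together with its immediate justification as a synthetic training example.

```python
# Toy sketch only (not the actual AlphaGeometry pipeline): self-generate training
# data by sampling premises, forward-chaining to saturation, and emitting
# (premises, derived fact, immediate justification) triples. Rules are made up.
import random

# Hypothetical Horn-clause rules: "from this set of facts, conclude this fact".
RULES = [
    (frozenset({"p", "q"}), "r"),
    (frozenset({"r"}), "s"),
    (frozenset({"q", "s"}), "t"),
    (frozenset({"p"}), "u"),
    (frozenset({"t", "u"}), "v"),
]

ATOMS = ["p", "q"]  # the only facts we ever assume outright


def forward_chain(premises):
    """Saturate `premises` under RULES, recording how each fact was derived."""
    derivation = {fact: None for fact in premises}  # None = assumed as a premise
    changed = True
    while changed:
        changed = False
        for body, head in RULES:
            if head not in derivation and body.issubset(derivation):
                derivation[head] = tuple(sorted(body))
                changed = True
    return derivation


def sample_examples(n_runs, seed=0):
    """Emit synthetic (premises, derived fact, immediate justification) triples."""
    rng = random.Random(seed)
    examples = []
    for _ in range(n_runs):
        premises = frozenset(a for a in ATOMS if rng.random() < 0.8)
        for fact, body in forward_chain(premises).items():
            if body is not None:  # skip the premises themselves
                examples.append((tuple(sorted(premises)), fact, body))
    return examples


if __name__ == "__main__":
    for example in sample_examples(3):
        print(example)
```

The real version would operate over a formal proof environment (e.g. terms of base ZFC) rather than toy propositional atoms, but the generate-then-deduce-then-extract shape is the point.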
But I wouldn’t be surprised if like >20% of the people on LW who think A[G/S]I happens in like 2-3 years thought that my thing could totally happen in 2025 if the labs were aiming for it (though they might not expect the labs to aim for it), with your things plausibly happening later
Mhm, that seems quite plausible, yeah, and that does make me want to use your thing as a go-to example.
whether such a system would prove Cantor’s theorem (stated in base ZFC) (imo this would still be pretty crazy to see)?
This one I feel a lot less confident of, though I could plausibly get more confident if I thought about the proof in more detail.
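For concreteness, here is roughly the statement at issue, as a quick sketch in Lean 4 (type-theoretic rather than literally base ZFC, and the exact formulation is mine): the diagonal step of Cantor's theorem, i.e. no map from a type to its "subsets" (predicates) hits the diagonal set, which is the core of "no surjection onto the powerset".

```lean
-- Sketch only: the diagonal step of Cantor's theorem, in Lean 4 rather than base ZFC.
-- No f : α → (α → Prop) hits the predicate fun x => ¬ f x x.
theorem cantor {α : Type} (f : α → α → Prop) :
    ¬ ∃ a : α, ∀ x : α, f a x ↔ ¬ f x x :=
  fun ⟨a, ha⟩ =>
    -- Specializing to x := a gives f a a ↔ ¬ f a a, which is contradictory.
    have h : f a a ↔ ¬ f a a := ha a
    have hn : ¬ f a a := fun hfa => h.mp hfa hfa
    hn (h.mpr hn)
```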
Part of the spirit here, for me, is something like: yes, AIs will do very impressive things on “highly algebraic” problems / parts of problems. (See “Algebraicness”.) One of the harder things for an AI is, poetically speaking, “self-constructing its life-world”, or in other words “coming up with lots of concepts to understand the material it’s dealing with, and then transitioning so that the material it’s dealing with is those new concepts, and so on”. For any given math problem, I could be mistaken about how algebraic it is (or how much of its difficulty for humans is due to the algebraic parts), and about how much conceptual progress you have to make before the remaining work is just algebraic. I assume that human math is a big mix of algebraic and non-algebraic stuff. So I would be really surprised if an AlphaMath could reinvent most of the definitions that we use, but I’m a lot less sure about a smaller subset, because I’m less sure whether that subset just has a surprisingly small non-algebraic part. (I think that someone with a lot more sense of math in general, and formal proofs in particular, could plausibly call this stuff in advance significantly better than my pretty weak “it’s hard to do all of a wide variety of problems”.)
we already have AI that does every qualitative kind of thing you say AIs qualitatively can’t do
As I mentioned, my response is here https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce#_We_just_need_X__intuitions:
just because an idea is, at a high level, some kind of X, doesn’t mean the idea is anything like the fully-fledged, generally applicable version of X that one imagines when describing X
I haven’t heard a response / counterargument to this yet, and many people keep making this logic mistake, including AFAICT you.
requiring the benchmarks to be when the hardest things are solved
My definition is better than yours, and you’re too triggered or something to think about it for 2 minutes and understand what I’m saying. I’m not saying “it’s not AGI until it kills us”, I’m saying “the simplest way to tell that something is an AGI is that it kills us; now, AGI is whatever that thing is, and could exist some time before it kills us”.
I tried to explain it in DM and you dismissed the evidence,
What do you mean? According to me we barely started the conversation, you didn’t present evidence, I tried to explain that to you, we made a bit of progress on that, and then you ended the conversation.
human proofs, problems, or math libraries
(I’m not sure whether I’m supposed to nitpick. If I were nitpicking, I’d ask things like: Wait, are you allowing it to see preexisting computer-generated proofs? What counts as computer-generated? Are you allowing it to see the parts of papers where humans state and discuss propositions, just with the proofs cut out? Is this system somehow trained on a giant human text corpus, just without the math proofs?)
But if you mean basically “the AI has no access to human math content except a minimal game environment of formal logic, plus whatever abstract priors seep in via the training algorithm+prior, plus whatever general thinking patterns are in [human text that’s definitely not mathy, e.g. a blog post about apricots]”, then yeah, this would be really crazy to see. My points are trying to be, not minimally hard, but at least easier-ish in some sense. Your thing seems significantly harder (though, nicely, much more operationalized); I think it’d probably imply my “come up with interesting math concepts”? (Note that I would not necessarily say the same thing if it were >25% of IMO problems; there I’d be significantly more unsure, and would defer to you / Sam, or someone who has a sense for the complexity of the full proofs there, how canonical the necessary lemmas are, and so on.)
You referred to “others’ definition (which is similar but doesn’t rely on the game-over clause)”, and I’m saying no, it’s not relevantly similar, and it’s not just my definition minus doom.
I also dispute that genuine HLMI refers to something meaningfully different from my definition. I think people are replacing HLMI with “thing that can do all stereotyped, clear-feedback, short-feedback tasks”, and then also claiming that this thing can replace many human workers (probably true of 5 or 10 million, false of 500 million) or cause a bunch of unemployment by making many people 5x effective (maybe, IDK), and at that point IDK why we’re talking about this, when X-risk is the important thing.
nearly everyone I know or have heard of who was expecting longer timelines has updated significantly toward short timelines (<5 years).
You’re in an echo chamber. They don’t have very good reasons for thinking this. https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce
It is still the case that some people don’t sign up for cryonics simply because it takes work to figure out the process / financing. If you do sign up, it would therefore be a public service to write about the process.
You people are somewhat crazy overconfident about humanity knowing enough to make AGI this decade. https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce
One hope on the scale of decades is that strong germline engineering should offer an alternative vision to AGI. If the options are “make supergenius non-social alien” and “make many genius humans”, it ought to be clear that the latter is both much safer and gets most of the hypothetical benefits of the former.
How many proteins are left after these 60 seconds?
I wonder if there’s a large-ish (not by percentage, but still) class of mechanically self-destructing proteins? E.g. suppose you have something like this:
RRRRRAARRRRRRAAAEEEEEEEXXXXXXXXXXXXXXXEEEEEEEE
where R could be any basic amino acid, E any acidic one, and A any neutral one, and the Xs are some sequence that eventually folds into a strong, extended structure. The idea is that you get strong bonds between the second R island and the first E island, and between the first R island and the second E island. Then the X segment pulls the two E islands apart, ripping the protein between the two R islands. Like this:
ARRRRRA <<---------->> ARRRRRR
 |||||                  ||||||
AEEEEEEEXXXXXXXXXXXXXXXEEEEEEEE
This is 100% made up, no idea if anything like it could happen.
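Purely as an illustration of the motif (everything below is made up to match the sketch above, not real biochemistry; the island lengths and the choice to treat the X segment as neutral residues are arbitrary), one could scan a sequence for this pattern by collapsing residues into the R/E/A classes and regex-matching:

```python
# Illustrative only: scan a sequence for the made-up "self-ripping" motif sketched
# above, by collapsing residues into classes (R = basic, E = acidic, A = other)
# and regex-matching. Island/segment lengths are arbitrary choices.
import re

BASIC = set("RK")
ACIDIC = set("DE")


def classify(seq):
    """Map each residue to R (basic), E (acidic), or A (everything else)."""
    return "".join("R" if c in BASIC else "E" if c in ACIDIC else "A" for c in seq)


# Two basic islands separated by a short linker, then an acidic island, a long
# segment treated here as neutral (standing in for the X stretch), and a second
# acidic island.
MOTIF = re.compile(r"R{4,}A{1,4}R{4,}A{1,4}E{4,}A{8,}E{4,}")


def find_motif(seq):
    match = MOTIF.search(classify(seq))
    return (match.start(), match.end()) if match else None


if __name__ == "__main__":
    toy = "GGRRRRRAARRRRRRAAAEEEEEEELLLLLLLLLLLLLLLEEEEEEEEGG"
    print(find_motif(toy))  # -> (2, 48), the motif span in this toy sequence
```

Actually checking whether such a sequence would rip itself would of course take structural modeling, not string matching.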
Eternity can seem kinda terrifying.
A lifeist doesn’t say “You must decide now to live literally forever, no matter what happens”!
In what sense were you a lifeist then, and a deathist now? Why the change?
General warning / PSA: arguments about the limits of selection power are true, BUT with an important caveat: selection power is NOT limited to postnatal organisms. Gametogenesis and fertilization involve many millions of cells undergoing various selective filters (ability to proliferate, ability to respond appropriately to regulatory signals, passing genetic integrity checks, undergoing meiosis, physically passing through the reproductive organs, etc.). This also counts as selection power available to the evolution of the species.
Say a “deathist” is someone who says “death is net good (gives meaning to life, is natural and therefore good, allows change in society, etc.)” and a “lifeist” (“anti-deathist”) is someone who says “death is net bad (life is good, people should not have to involuntarily die, I want me and my loved ones to live)”. There are clearly people who go deathist → lifeist, as that’s most lifeists (if nothing else, as an older kid they would have uttered deathism, as the predominant ideology). One might also argue that young kids are naturally lifeist, and therefore most people have gone lifeist → deathist once. Are there people who have gone deathist → lifeist → deathist? Are there people who were raised lifeist and then went deathist?
(Still impressive and interesting of course, just not literally SOTA.)
According to the article, SOTA was <1% of cells converted into iPSCs
I don’t think that’s right; see https://www.cell.com/cell-stem-cell/fulltext/S1934-5909(23)00402-2
metapreferences are important, but their salience is way out of proportion to their importance.
You mean the salience is too high? On the contrary, it’s too low.
one of the most immediate natural answers is “metapreferences!”.
Of course, this is not an answer, but a question-blob.
as evidenced by experiences like “I thought I wanted X, but in hindsight I didn’t”
Yeah, I think this is often, maybe almost always, more like “I hadn’t yet computed / decided that I don’t want [whatever Thing-like thing X gestured at], and then later I did compute that”.
a last-line fallback for extreme cases
It’s really not! Our most central values are all of the proleptic (pre-received; foreshadowed) type: friendship, love, experience, relating, becoming. They can all only be expressed in a way that is either vague or incomplete: “There’s something about this person / myself / this collectivity / this mental activity that draws me in to keep walking that way.” Part of this is resolvable confusion, but probably not all of it. Part of the fun of relating with other people is that there’s a true open-endedness; you get to cocreate something non-pre-delimited, find out what another [entity that is your size / as complex/surprising/anti-inductive as you] is like, etc. “Metapreferences” isn’t an answer, of course, but there’s definitely a question that has to be asked here, and the answer will fall under “metapreferences” broadly construed, in that it will involve stuff that is ongoingly, actively determining [all the stuff we would call legible values/preferences].
“What does it even mean to be wrong about our own values? What’s the ground truth?”
Ok we can agree that this should point the way to the right questions and answers, but it’s an extremely broad question-blob.
If compute is linear in space, then in the obvious way of doing things, you have your Nth kid in year 2N/3 of your life.