The previous SOTA for MATH (https://arxiv.org/pdf/2009.03300.pdf) is a fine-tuned GPT-2 (1.5b params), whereas the previous SOTA for GSM8K (https://arxiv.org/pdf/2203.11171.pdf) is PaLM (540b params), using a “majority voting” method similar to Minerva’s (query each question ~40 times and take the most common answer).
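For concreteness, here is a minimal sketch of that majority-voting procedure; `sample_answer` is a hypothetical stand-in for drawing one model completion at nonzero temperature, not a real API.

```python
# Minimal sketch of majority voting ("self-consistency"): sample many answers
# to the same question and keep the most common one.
# `sample_answer` is a hypothetical stand-in for one sampled model completion.
from collections import Counter

def majority_vote(question, sample_answer, n_samples=40):
    """Sample n_samples answers and return the most frequent final answer."""
    answers = [sample_answer(question) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]
```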
Here’s a thought experiment: Suppose that a market is perfectly efficient, except that every 50 years or so there’s a crash, which sufficiently smart people can predict a month in advance. Would you say that this market is efficient? Technically it isn’t, because smart people have a systematic advantage over the market. But practically, no trader systematically beats the market, because no trader lives long enough!
I suppose you could create a long-lived institution, a “black swan fund”, that very rarely makes bets on predictable crashes, and over a few centuries could prove it has higher returns. But I guess not enough people care about returns over these timescales.
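A toy Monte Carlo makes the “no trader lives long enough” point concrete; every number below is an invented illustration (5% normal years, a −50% crash about once every 50 years), not a claim about real markets. In roughly half of 40-year careers no crash even occurs, so the crash-dodging strategy shows no measurable edge within a working lifetime, whereas over a few centuries it is almost always ahead.

```python
# Toy model: a market that returns 5%/year but crashes (-50%) about once
# every 50 years, versus a "smart" trader who sees each crash coming and
# sits it out. All parameters are made up for illustration.
import random

def simulate(years, crash_prob=1/50, normal=0.05, crash=-0.5):
    """Return (market_wealth, smart_wealth) over one random history."""
    market = smart = 1.0
    for _ in range(years):
        if random.random() < crash_prob:
            market *= 1.0 + crash  # the market eats the crash
            # the smart trader saw it coming and sat out: wealth unchanged
        else:
            market *= 1.0 + normal
            smart *= 1.0 + normal
    return market, smart

random.seed(0)
for horizon in (40, 300):  # one trading career vs. a multi-century fund
    trials = 5000
    ahead = 0
    for _ in range(trials):
        market, smart = simulate(horizon)
        if smart > market:
            ahead += 1
    print(f"{horizon} years: smart trader strictly ahead in {ahead / trials:.0%} of runs")
```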
What’s the best way to convince skeptics of the severity of COVID? I keep seeing people saying it’s just a slightly worse flu, or that car accidents kill a lot more people, and so on. I want some short text or image that illustrates just how serious this is.
I found this heartbreaking testimony from an Italian ICU doctor: https://twitter.com/silviast9/status/1236933818654896129
But I guess skeptics will want a more authoritative source.
“Or the first replicator to catch on, if there were failed alternatives lost to history—but this seems unlikely, given the Fermi Paradox; a replicator should be more improbable than that, or the stars would teem with life already.”
So do you think that the vast majority of the Great Filter is concentrated in the creation of the first replicator? What’s the justification for that?
-You can’t prove I’m wrong!
-Well, I’m an optimist.
-Millions of people believe it, how can they all be wrong?
-You’re relying too much on cold rationality.
-How can you possibly reduce all the beauty in the world to a bunch of equations?
Eliezer, I remember an earlier post of yours where you said something like: “If I would never do impossible things, how could I ever become stronger?” That was a very inspirational message for me, much more so than any similar saying I’ve heard, and this post is full of such insights.
Anyway, on the subject of human augmentation: what about it? If you are talking about a timescale of decades, then intelligence augmentation does seem like a worthy avenue of investment (it doesn’t have to be full-scale neural rewiring; it could just be smarter nootropics).
…Can someone explain why?
Many people believe in an afterlife… why sign up for cryonics when you’re going to go to Heaven when you die?
That’s probably not the explanation, since there are many millions of atheists who have heard about cryonics and/or extinction risks. I figure the actual explanation is a combination of conformity, the bystander effect, the tendency to focus on short-term problems, and the Silliness Factor.
Eliezer, I have an objection to your metaethics and I don’t think it’s because I mixed levels:
If I understood your metaethics correctly, you claim that human morality consists of two parts: a list of things that we value (like love, friendship, fairness, etc.), and what we can call “intuitions” that govern how our terminal values change when we face moral arguments. So we have a kind of strange loop (in the Hofstadterian sense); our values judge whether a moral argument is valid, and the valid moral arguments change our terminal values. I think I accept this. It answers a lot of questions quite nicely, like where moral progress comes from. What I am skeptical about is the claim that if a person hears enough moral arguments, their values will always converge to a single set of values, so that you could say their morality approximates some ideal morality that can be found if you look deep enough into their brain. I think it’s plausible that the initial set of moral arguments a person hears will considerably change their list of values, so that their morality will diverge rather than converge, and there won’t be any “ideal morality” that they are approximating.
Note that I am talking about a single human who hears different sets of moral arguments, and not about the convergence of moralities across all humans (which is a different matter altogether).
Also note that this is a purely empirical objection; I am asking for empirical evidence that supports your metaethics.
why isn’t the moral of this fable that pursuing subjective intuitions about correctness is a wild goose chase?
Because those subjective intuitions are all we’ve got. Sure, in an absolute sense, human intuitions about correctness are just as arbitrary as the Pebblesorters’ intuitions (though vastly more complex), but we don’t judge intuitions in an absolute way; we judge them with our own intuitions. You can’t unwind past your own intuitions. That was the point of Eliezer’s series of posts.
The gap between autistic humans and neurotypical humans may be bigger than the gap between male and female humans. I would list autism as an exception to the psychological unity of humankind.
I remember reading “The Curious Incident of the Dog in the Night-Time” and thinking: “This guy is more alien than most aliens I’ve seen in sci-fi.”
P.S.: My great “Aha!” moment from reading this post is the realisation that morality is not just a utility function that maps states of the world to real numbers, but also a set of intuitions for changing that utility function.
Let me see if I get this straight:
Our morality is composed of a big computation that includes a list of the things we value (love, friendship, happiness, ...) and a list of valid moral arguments (contagion backward in time, symmetry, ...). If so, then how do we discover those lists? I guess the only way is to reflect on our own minds, but if we do that, how do we know whether a particular value comes from our big computation or is just part of our regular biases? And if our biases are inextricably tangled with The Big Computation, then what hope can we possibly have?
Anyway, I think it would be useful for moral progress to list all of the valid moral arguments. Contagion backward in time and symmetry seem to be good ones. Any other suggestions?
I second Doug.
The AIs will not care about the things you care about. They’ll have no reason to.
The “AIs” will be created by us. If we’re smart enough, we will program the AIs to value the things we value (not in the same way that evolution programmed us). The important thing is that we humans value love, and therefore we want love to perpetuate throughout the universe.
I guess that this will lead to your concept of volition, won’t it?
Anyway, is Obert really arguing that morality is entirely outside the mind? Couldn’t the “fact” of morality that he is trying to discover be derived from his (or humanity’s, or whatever) brain design? And if you tweak Subhan’s definition of “want” enough, couldn’t they actually reach agreement?
Also, I take it that this means you don’t believe in the whole, “if a program implements consciousness, then it must be conscious while sitting passively on the hard disk” thing. I remember this came up before in the quantum series and it seemed to me absurd, sort of for the reasons you say.
I used that as an argument against timeless physics: if you could have consciousness in a timeless universe, then this means that you could simulate a conscious being without actually running the simulation; you could just put the data on the hard drive. I’m still waiting for an answer on that one!
Vladimir, why not? From reading your comment, it seems like the only reason you don’t hurt other people is that you would get hurt by it yourself, so if you took the pill, you would be able to hurt other people. Have I got it wrong? Is this really the only reason you don’t hurt people?
Vladimir, if there was a pill that would make the function of the mirror neurons go away, in other words, a pill that would make you able to hurt people without feeling remorse or anguish, would you take it?
Roko, what exactly do you mean by “optimal”? “Optimal” means “good”, which is another word for “ethical”, so your definition of ethics doesn’t actually tell us anything new! An AI can view the supergoal of “creating more paperclips” as the optimal/correct/successful/good thing to do. The value of the AI’s supergoal(s) doesn’t have anything to do with its intelligence.
When you exhaust all the language data from text, you can start extracting language from audio and video.
As far as I know the largest public repository of audio and video is YouTube. We can do a rough back-of-the-envelope computation for how much data is in there:
According to a 2019 article I found, about 500 hours of video are uploaded to YouTube every minute. If we assume this was the average rate for the last 15 years, that gets us roughly 200 billion minutes of video (500 hours/min × ~525,600 min/year × 15 years ≈ 4 billion hours).
An average conversation has 150 words per minute, according to a Google search. That gets us 30T words, or 30T tokens if we assume 1 token per word (is this right?)
Let’s say 1% of that is actually useful, so that gets us 300B tokens, which is… a lot less than I expected.
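Here is the same back-of-the-envelope estimate written out as a tiny script; the inputs (500 hours uploaded per minute, 15 years, 150 spoken words per minute, 1 token per word, 1% usable) are the rough assumptions above, not measured values.

```python
# Back-of-the-envelope estimate of language tokens recoverable from YouTube.
# All inputs are rough assumptions from the text, not measured values.
MINUTES_PER_YEAR = 365 * 24 * 60              # 525,600

upload_hours_per_minute = 500                 # 2019 figure, assumed constant
years = 15

video_minutes = upload_hours_per_minute * 60 * MINUTES_PER_YEAR * years
words = video_minutes * 150                   # ~150 spoken words per minute
tokens = words                                # crude 1-token-per-word assumption
useful_tokens = tokens * 0.01                 # keep only 1% as "actually useful"

print(f"video minutes : {video_minutes:.1e}")  # ~2.4e11 (the ~200B minutes above)
print(f"tokens        : {tokens:.1e}")         # ~3.5e13 (~30T after the rounding above)
print(f"useful tokens : {useful_tokens:.1e}")  # ~3.5e11 (~300B)
```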
So it seems like video doesn’t save us, if we just use it for the language data. We could do self-supervised learning on the video data, but for that we need to know the scaling laws for video (has anyone done that?).