instantly recall any fact and solve any math problem
You mean, recall any fact that’s been put into text-searchable form in the past and by you, and solve any calculation problem that’s in a reasonably common form.
I’m saying that the effect on philosophical problem-solving is just not very large. Yeah, if you’ve been spending 80% of your time on manually calculating things and 20% on “leaps of logic”, and you could just as well spend 90% on the leaps, then calculators help a lot. But it’s not by making you able to take significantly better leaps. Maybe you can become better by getting more practice or something? But generally skills tend to plateau pretty sharply—there’s always new bottlenecks, like a clicker game. If an improvement only addresses some smallish subset of the difficulty involved in some overall challenge, the overall challenge isn’t addressed that much.
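To put rough numbers on that bottleneck point: this is essentially Amdahl’s law applied to the 80/20 split above (my gloss, not something the argument depends on). If a fraction p of your time goes to calculation and a tool speeds that part up by a factor of s, the overall speedup is

$$S = \frac{1}{(1-p) + p/s}, \qquad p = 0.8,\ s \to \infty \ \Rightarrow\ S \to \frac{1}{0.2} = 5.$$

So even an infinitely fast calculator buys at most a 5x overall speedup, and the “leaps” part, the actual bottleneck, is left exactly as hard as it was.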
Like, if you could do calculations with 10x less effort, what calculations would you do to solve alignment, or get AGI banned, or make sure everyone gets food, or fix the housing crisis, or ….?
To put it a different way, I don’t think Gödel’s lack of a big fast calculator mattered too much?
You mean, recall any fact that’s been put into text-searchable form in the past and by you, and solve any calculation problem that’s in a reasonably common form.
No, I do not mean that at all.
An ideal system would store every piece of information its user has ever seen or heard, in addition to every book/article/program ever written or recorded, and would be able to translate problems given in “common English” into objective mathematical proofs, then give an explanation of the answer in English again.
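To make the shape of that concrete, here is a rough sketch of the interface such a system would expose. Every name in it is made up for illustration, and the three stubbed-out methods are exactly the parts nobody knows how to build.

```python
from dataclasses import dataclass, field


@dataclass
class IdealAssistant:
    # Every piece of information the user has ever seen or heard,
    # plus every book/article/program ever written or recorded.
    store: list[str] = field(default_factory=list)

    def record(self, item: str) -> None:
        """Ingest an observation or a document into the store."""
        self.store.append(item)

    def recall(self, query: str) -> list[str]:
        """Naive full-text lookup; a real system would index, rank, and deduplicate."""
        return [s for s in self.store if query.lower() in s.lower()]

    def solve(self, problem_in_english: str) -> str:
        """English problem -> formal proof -> English explanation of the answer."""
        statement = self._formalize(problem_in_english)
        proof = self._prove(statement)
        return self._explain(proof)

    # The hard parts: nothing like reliable implementations of these exists today.
    def _formalize(self, text: str) -> str:
        raise NotImplementedError("translate 'common English' into a formal statement")

    def _prove(self, statement: str) -> str:
        raise NotImplementedError("produce an objective, machine-checkable proof")

    def _explain(self, proof: str) -> str:
        raise NotImplementedError("render the proof back into plain English")
```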
But generally skills tend to plateau pretty sharply—there’s always new bottlenecks, like a clicker game.
This is an empirical question, but based on my own experience I would speculate the gain is quite significant. Again, merely giving me access to a calculator and a piece of paper makes me better at math than 99.99% of people who do not have access to such tools.
Like, if you could do calculations with 10x less effort, what calculations would you do to solve alignment, or get AGI banned, or make sure everyone gets food, or fix the housing crisis, or ….?
would I “solve alignment”?
Yes.
“get AGI banned”
No, because I solved alignment.
“make sure everyone gets food, or fix the housing crisis”
Both of these are political problems that have nothing to do with “intelligence”. If everyone was 10x smarter, maybe they would stop voting for retarded self-destructive policies. Idk, though.
An ideal system would store every piece of information its user has ever seen or heard, in addition to every book/article/program ever written or recorded, and would be able to translate problems given in “common English” into objective mathematical proofs, then give an explanation of the answer in English again.
That’s what I said. It excludes, for example, a fact that the human thinks of, unless ze speaks or writes it.
Again, merely giving me access to a calculator and a piece of paper makes me better at math than 99.99% of people who do not have access to such tools.
It makes you better at calculation, which is relevant for some kinds of math. It doesn’t make you better at math in general though, no. If you’re not familiar with higher math (the sort of things that grad students and professors do), you might not be aware: Most of the stuff that most of them do involves not very much that one would plug in to a calculator.
would I “solve alignment”? Yes.
What calculations would you plug into your fast-easy-calculator that result in you solving alignment?
Already wrote an essay about this.

I don’t think most of those proposals make sense, but anyway, the ones that do make sense only make sense with a pretty extreme math oracle—not something that leaves the human to fill in the “leaps of logic”. It’s just talking about AGI, basically. Which defeats the purpose.

A “math proof only” AGI avoids most alignment problems. There’s no need to worry about paperclip maximizing or instrumental convergence.

Not true. This isn’t the place for this debate, but if you want to know:
To get an AGI that can solve problems that require lots of genuinely novel thinking, you’re probably pulling an agent out of a hat, and then you have an agent with unknown values and general optimization channels.
Even if you only want to solve problems, you still need compute, and therefore wish to conquer the universe (for science!).
An agent that only thinks about math problems isn’t going to take over the real world (it doesn’t even have to know the real world exists, as this isn’t a thing you can deduce from first principles).
Even if you only want to solve problems, you still need compute
We’re going to get compute anyway. Mundane uses of deep learning already use a lot of compute.
I think whether or not the math-proof-AI accidentally became an agent is indeed a moot point if you can successfully create the agent in a censored environment (i.e. using censored data sets and simulations, with careful obfuscation of the true nature of the substrate on which it is being simulated). I agree with Logan that in such a scenario, the pure math the agent has been trained on doesn’t plausibly give even a superhumanly smart agent enough clues about our physical universe to figure out that it could hack its way out or manipulate some hypothetical hidden operator into letting it out. An agent specializing in human biology could figure this sort of thing out, but one specializing in math? No, I think you can keep the data clean enough to avoid tells.
Is there some weird theoretical super-powerful agent who could figure out how to escape its containment despite having no direct knowledge about its substrate or operators? Perhaps, but I don’t think you need to create an agent anywhere near that powerful in order to satisfy the specified use case of ‘slightly superhuman proof assistant’.
First of all, we don’t know how to keep a computer system secure against humans, let alone superhumans running on the fucking computer. The AI doesn’t need to know the color of your shoes or how to snowboard before it breaks its software context, pwns its compute rack, and infects the dev’s non-airgapped machine when ze logs in to debug the AI. (Or were you expecting AGI researchers to develop AIs by just… making up what code they thought would be cool, and then spending $10 mill on a run while not measuring anything about the AI except for some proof tasks...?)
Second, how do you get world-saving work out of a superhuman proof assistant?
Third of all, if you’re not doing sponge alignment, the humans have to communicate with the AI. I would guess that if there’s an answer to the second point, it involves not just getting yes/no answers to math questions, but also understanding proofs—in which case you’re asking for much higher bandwidth communication.
Yeah, I think we’re not so far apart here. I’m not arguing for a math-proof-AI as a solution because I don’t believe that it enables world-saving actions.
I’m just trying to say you could have a narrow math-assistant AI at a higher level of math-specific competence relatively safely compared to a general AI which knew facts about computers and humans and such.
No, I think you can keep the data clean enough to avoid tells.
What data? Why not just train it on literally 0 data (MuZero style)? You think it’s going to derive the existence of the physical world from the Peano Axioms?
Math data!

[Edit: to be clear, I’m not arguing with Logan here, I’m agreeing with Logan. I think it’s clear to most people who might read this comment thread that training a model on nothing but pure math data is unlikely to result in something which could hack its way out of computer systems while still anywhere near the ballpark of human genius level. There’s just too much missing info that isn’t implied by pure math.
A more challenging, but I think still feasible, training set would be math and programming. To do this in a safe way for this hypothetical extremely powerful future model architecture, you’d need to ‘dehumanize’ the code, get rid of all details like variable names that could give clues about the real physical universe.]
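As a minimal sketch of what that ‘dehumanizing’ pass could look like for Python source (the class name and the v0/v1 naming scheme are my own invention): every function, argument, and variable identifier is replaced with a generic token, so a name like boiling_point_celsius can no longer leak facts about the physical world. A real pipeline would also need to strip comments, docstrings, and string literals, and leave builtins and imports untouched so the code still runs.

```python
import ast


class Dehumanizer(ast.NodeTransformer):
    """Rename every identifier to a generic token (v0, v1, v2, ...)."""

    def __init__(self):
        self.mapping = {}

    def _anon(self, name):
        if name not in self.mapping:
            self.mapping[name] = f"v{len(self.mapping)}"
        return self.mapping[name]

    def visit_Name(self, node):
        node.id = self._anon(node.id)
        return node

    def visit_arg(self, node):
        node.arg = self._anon(node.arg)
        return node

    def visit_FunctionDef(self, node):
        node.name = self._anon(node.name)
        self.generic_visit(node)  # recurse into the arguments and body
        return node


def dehumanize(source: str) -> str:
    # ast.unparse requires Python 3.9+.
    return ast.unparse(Dehumanizer().visit(ast.parse(source)))


print(dehumanize(
    "def boiling_point_celsius(pressure_atm):\n"
    "    return 100 * pressure_atm"
))
# def v0(v1):
#     return 100 * v1
```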