instantly recall any fact and solve any math problem
You mean, recall any fact that’s been put into text-searchable form in the past and by you, and solve any calculation problem that’s in a reasonably common form.
I’m saying that the effect on philosophical problem-solving is just not very large. Yeah, if you’ve been spending 80% of your time on manually calculating things and 20% on “leaps of logic”, and you could just as well spend 90% on the leaps, then calculators help a lot. But it’s not by enabling you to make significantly better leaps. Maybe you can become better by getting more practice or something? But generally skills tend to plateau pretty sharply—there’s always a new bottleneck, like a clicker game. If an improvement only addresses some smallish subset of the difficulty involved in an overall challenge, the overall challenge isn’t addressed that much.
Like, if you could do calculations with 10x less effort, what calculations would you do to solve alignment, or get AGI banned, or make sure everyone gets food, or fix the housing crisis, or ….?
To put it a different way, I don’t think Gödel’s lack of a big fast calculator mattered too much?
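(The 80%/20% argument above is essentially Amdahl’s law: no matter how much you speed up the calculation portion, the overall gain is capped by the portion that stays manual. A quick sketch—the 80/20 split is the hypothetical from this discussion, not measured data:)

```python
def overall_speedup(calc_fraction: float, calc_speedup: float) -> float:
    """Amdahl's law: overall speedup when only the 'calculation'
    portion of the work gets faster, and the rest (the 'leaps of
    logic') stays at its original speed."""
    return 1.0 / ((1.0 - calc_fraction) + calc_fraction / calc_speedup)

# If 80% of your time is manual calculation and a calculator makes
# that part 1000x faster, you finish the whole task ~5x sooner:
print(overall_speedup(0.8, 1000))  # ~4.98

# But if calculation was only 20% of the work, even an infinitely
# fast calculator caps the overall gain at 1.25x:
print(overall_speedup(0.2, 1000))  # ~1.25
```

(Note the ceiling: with an 80% calculable fraction the limit is 5x total, and none of that gain improves the quality of the remaining 20%.)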
You mean, recall any fact that’s been put into text-searchable form in the past and by you, and solve any calculation problem that’s in a reasonably common form.
No, I do not mean that at all.
An ideal system would store every piece of information its user has ever seen or heard, in addition to every book, article, or program ever written or recorded, and would be able to translate problems posed in “common English” into objective mathematical proofs, then give an explanation of the answer in English again.
But generally skills tend to plateau pretty sharply—there’s always new bottlenecks, like a clicker game.
This is an empirical question, but based on my own experience I would speculate the gain is quite significant. Again, merely giving me access to a calculator and a piece of paper makes me better at math than 99.99% of people who do not have access to such tools.
Like, if you could do calculations with 10x less effort, what calculations would you do to solve alignment, or get AGI banned, or make sure everyone gets food, or fix the housing crisis, or ….?
would I “solve alignment”?
Yes.
“get AGI banned”
No, because I solved alignment.
“make sure everyone gets food, or fix the housing crisis”
Both of these are political problems that have nothing to do with “intelligence”. If everyone were 10x smarter, maybe they would stop voting for retarded self-destructive policies. Idk, though.
An ideal system would store every piece of information its user has ever seen or heard, in addition to every book, article, or program ever written or recorded, and would be able to translate problems posed in “common English” into objective mathematical proofs, then give an explanation of the answer in English again.
That’s what I said. It excludes, for example, a fact that the human thinks of, unless ze speaks or writes it.
Again, merely giving me access to a calculator and a piece of paper makes me better at math than 99.99% of people who do not have access to such tools.
It makes you better at calculation, which is relevant for some kinds of math. It doesn’t make you better at math in general, though. If you’re not familiar with higher math (the sort of thing that grad students and professors do), you might not be aware: most of what they do involves very little that one would plug into a calculator.
would I “solve alignment”? Yes.
What calculations would you plug into your fast-easy-calculator that result in you solving alignment?
Already wrote an essay about this.
I don’t think most of those proposals make sense, but anyway, the ones that do make sense only make sense with a pretty extreme math oracle—not something that leaves the human to fill in the “leaps of logic”. It’s just talking about AGI, basically. Which defeats the purpose.
A “math proof only” AGI avoids most alignment problems. There’s no need to worry about paperclip maximizing or instrumental convergence.
Not true. This isn’t the place for this debate, but if you want to know:
To get an AGI that can solve problems that require lots of genuinely novel thinking, you’re probably pulling an agent out of a hat, and then you have an agent with unknown values and general optimization channels.
Even if you only want to solve problems, you still need compute, and therefore wish to conquer the universe (for science!).
An agent that only thinks about math problems isn’t going to take over the real world (it doesn’t even have to know the real world exists, as this isn’t a thing you can deduce from first principles).
Even if you only want to solve problems, you still need compute
We’re going to get compute anyway. Mundane uses of deep learning already use a lot of compute.