In fact, I expect that given the right way of modelling, formal verification of learning systems up to epsilon-delta bounds (in the style of PAC-learning, for instance) should be quite doable. Why?
Dropping the ‘formal verification’ part and replacing it with approximate error-bound variance reduction, this is potentially interesting, although it also seems to be a general technique that would, if it worked well, be useful for practical training, safety aside.
Why? Because, as mentioned regarding PAC learning, it’s the existing foundation for machine learning.
Machine learning is an eclectic field with many mostly independent ‘foundations’: Bayesian statistics of course, optimization methods (Hessian-free, natural gradient, etc.), geometric methods and NLDR (nonlinear dimensionality reduction), statistical physics …
That being said—I’m not very familiar with the PAC learning literature yet—do you have a link to a good intro/summary/review?
Hell, if I could find the paper showing that deep networks form a “funnel” in the model’s free-energy landscape—where local minima are concentrated in that funnel and all yield more-or-less as-good test error, while the global minimum reliably overfits—I’d be posting the link myself.
That sounds kind of like the saddle point paper. It’s easy to show that in complex networks there are a large number of equivalent minima due to various symmetries and redundancies. Thus finding the actual technical ‘global optimum’ quickly becomes suboptimal when you discount for resource costs.
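To make the symmetry point concrete, here is a minimal numpy sketch (my own illustration, not taken from the paper): permuting the hidden units of a small MLP, together with the matching rows of the first layer's weights and columns of the second layer's, gives a different parameter vector that computes exactly the same function, so every minimum comes with at least n! equivalent copies per hidden layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 2-layer MLP: y = W2 @ tanh(W1 @ x + b1) + b2
n_in, n_hidden, n_out = 3, 5, 2
W1, b1 = rng.normal(size=(n_hidden, n_in)), rng.normal(size=n_hidden)
W2, b2 = rng.normal(size=(n_out, n_hidden)), rng.normal(size=n_out)

def mlp(x, W1, b1, W2, b2):
    return W2 @ np.tanh(W1 @ x + b1) + b2

# Permute the hidden units: reorder the rows of W1/b1 and the columns of W2.
perm = rng.permutation(n_hidden)
W1p, b1p, W2p = W1[perm], b1[perm], W2[:, perm]

x = rng.normal(size=n_in)
print(np.allclose(mlp(x, W1, b1, W2, b2), mlp(x, W1p, b1p, W2p, b2)))  # True
```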
If it seems really really really impossibly hard to solve a problem even with the ‘simplification’ of lots of computing power, perhaps the underlying assumptions are wrong. For example—perhaps using lots and lots of computing power makes the problem harder instead of easier.
You’re not really being fair to Nate here, but let’s be charitable to you: this is fundamentally a dispute between the heuristics-and-biases school of thought about cognition and the bounded/resource-rational school of thought.
Yes that is the source of disagreement, but how am I not being fair? I said ‘perhaps’ - as in have you considered this? Not ‘here is why you are certainly wrong’.
Computationally, this is saying, “When we have enough resources that only asymptotic complexity matters, we use the Old Computer Science way of just running the damn algorithm that implements optimal behavior and optimal asymptotic complexity.” Trying to extend this approach into statistical inference gets you basic Bayesianism and AIXI, which appear to have nice “optimality” guarantees, but are computationally intractable and are only optimal up to the training data you give them.
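For concreteness, here is AIXI’s action rule written schematically (following Hutter’s formulation; I’m glossing over the exact conditioning conventions). The nested max/sum out to the horizon m, together with the sum over every program q on a universal machine U that reproduces the interaction history, is what makes it incomputable as stated and intractable even when truncated:

$$
a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \bigl(r_k + \cdots + r_m\bigr) \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
$$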
Solomonoff induction/AIXI and, more generally, ‘full Bayesianism’ are useful as thought models, but are perhaps overvalued on this site relative to their standing in the machine learning field. Compare the number of references/hits to AIXI on this site (tons) to the number on r/MachineLearning (1!). Compare the citation counts of AIXI papers (~100) to those of other ML papers and you will see that the ML community views AIXI and related work as minor.
The important question is: what does the optimal practical approximation of Solomonoff induction/Bayesian inference look like? And how different is that from what the brain does? By optimal I of course mean optimal in terms of all that really matters, which is intelligence per unit of resources.
Human intelligence, including that of Turing or Einstein, requires only about 10 watts of energy and, more surprisingly, only around 10^14 switch events per second or less, which is basically miraculous. A modern GPU uses more than 10^18 switches/second. You’d have to go back to a Pentium or something to get down to 10^14 switches per second. Of course the difference is that switch events in an ANN are much more powerful because they are more like memory ops, but still.
It is really, really hard to make any sort of case that actual computer tech is going to become significantly more efficient than the brain anytime in the near future (at least in terms of switch events/second). There is a very strong case that all the H&B (heuristics and biases) stuff is just what actual practical intelligence looks like. There is no such thing as intelligence that is not resource-efficient; alternatively, we could say that any useful definition of intelligence must be resource-normalized (i.e. utility/cost).
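A quick back-of-envelope consolidation of those figures (using only the numbers quoted in this thread, nothing more precise):

```python
# Rough back-of-envelope using the figures quoted in this thread.
brain_power_w = 10.0        # ~10 W for the human brain
brain_ops_per_s = 1e14      # ~10^14 synaptic switch events per second
gpu_switches_per_s = 1e18   # >10^18 transistor switch events per second (modern GPU)

joules_per_brain_op = brain_power_w / brain_ops_per_s
print(f"energy per synaptic event ~ {joules_per_brain_op:.0e} J")                      # ~1e-13 J
print(f"GPU / brain switch-rate ratio ~ {gpu_switches_per_s / brain_ops_per_s:.0e}")   # ~1e+04
```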
I’m not sure what you’re looking for in terms of the PAC-learning summary, but for a quick intro, there’s this set of slides or these two lecture notes from Scott Aaronson. For a more detailed review of the literature across the whole field up until the mid-1990s, there’s this paper by David Haussler, though given its length you might as well read Kearns and Vazirani’s 1994 textbook on the subject. I haven’t been able to find a more recent review of the literature, though; if anyone has a link, that’d be great.
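For reference, the flavour of guarantee these sources centre on, in the simplest (finite hypothesis class, realizable) setting: with probability at least $1-\delta$, the learned hypothesis has true error at most $\varepsilon$, provided the sample size satisfies

$$
m \;\ge\; \frac{1}{\varepsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right).
$$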
Human intelligence, including that of Turing or Einstein, requires only about 10 watts of energy and, more surprisingly, only around 10^14 switch events per second or less, which is basically miraculous. A modern GPU uses more than 10^18 switches/second. You’d have to go back to a Pentium or something to get down to 10^14 switches per second. Of course the difference is that switch events in an ANN are much more powerful because they are more like memory ops, but still.
It’s not that amazing when you understand PAC-learning or Markov processes well. A natively probabilistic (analogously: “natively neuromorphic”) computer can afford to sacrifice precision “cheaply”, in the sense that sizeable sacrifices of hardware precision entail only fairly small injections of entropy into the distribution being modelled. Since what costs all that energy in modern computers is precision, that is, exactitude, a machine that simply expects to get things a little wrong all the time can still perform well, provided it is performing a fundamentally statistical task in the first place, which a mind is!
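As a toy illustration of the “small injection of entropy” claim (my own sketch, not tied to any particular neuromorphic design): quantize a Bernoulli parameter to b bits and measure the KL divergence the rounding introduces. Even quite coarse precision perturbs the modelled distribution only slightly.

```python
import math

def kl_bernoulli(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q), in bits."""
    return p * math.log2(p / q) + (1 - p) * math.log2((1 - p) / (1 - q))

p = 0.3716  # the "true" probability the hardware is meant to represent
for bits in (4, 6, 8):
    levels = 2 ** bits
    q = max(1, min(levels - 1, round(p * levels))) / levels  # b-bit quantization, clipped away from 0 and 1
    print(f"{bits}-bit representation: q = {q:.4f}, KL = {kl_bernoulli(p, q):.1e} bits")
```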
Eli, this doesn’t make sense: the fact that digital logic switches are higher-precision and more powerful, and thus have a higher minimum energy cost, makes the brain/mind more impressive, not less.
The energy cost per op in the brain is rather poor in one sense, perhaps 10^5 times larger than the minimum imposed by physics for a low-SNR analog op, but essentially all of this cost is wire energy.
The miraculous thing is how much intelligence the brain/mind achieves for such a tiny amount of computation in terms of low level equivalent bit ops/second. It suggests that brain-like ANNs will absolutely dominate the long term future of AI.
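Putting rough numbers on that ~10^5 claim (a hedged back-of-envelope: the 10 W, 10^14 ops/s, and 10^5 figures are the ones used in this thread; the Landauer bound is standard physics):

```python
import math

k_B = 1.380649e-23                  # Boltzmann constant, J/K
T = 310.0                           # roughly body temperature, K
landauer = k_B * T * math.log(2)    # ~3e-21 J: minimum energy to erase one bit

brain_j_per_op = 10.0 / 1e14        # ~1e-13 J per synaptic event (10 W / 10^14 ops/s)
analog_floor = brain_j_per_op / 1e5 # floor implied for a low-SNR analog op by the ~10^5 gap above

print(f"Landauer limit at 310 K : {landauer:.1e} J")
print(f"brain energy per op     : {brain_j_per_op:.1e} J")
print(f"implied analog-op floor : {analog_floor:.1e} J (~{analog_floor / landauer:.0f} x kT ln 2)")
```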
Eli, this doesn’t make sense: the fact that digital logic switches are higher-precision and more powerful, and thus have a higher minimum energy cost, makes the brain/mind more impressive, not less.
Nuh-uh :-p. The issue is that the brain’s calculations are probabilistic. When doing probabilistic calculations, you can either use very, very precise representations of computable real numbers to represent the probabilities, or you can use various lower-precision but natively stochastic representations, whose distribution over computation outcomes is the distribution being inferred.
Hence why the brain is, on the one hand, very impressive for extracting inferential power from energy and mass, but on the other hand, “not that amazing” in the sense that it, too, begins to add up to normality once you learn a little about how it works.
When doing probabilistic calculations, you can either use very, very precise representations of computable real numbers to represent the probabilities, or you can use various lower-precision but natively stochastic representations, whose distribution over computation outcomes is the distribution being inferred.
Of course—and using say a flop to implement a low precision synaptic op is inefficient by six orders of magnitude or so—but this just strengthens my point. Neuromorphic brain-like AGI thus has huge potential performance improvement to look forward to, even without Moore’s Law.
Neuromorphic brain-like AGI thus has huge potential performance improvement to look forward to, even without Moore’s Law.
Yes, if you could but dissolve your concept of “brain-like”/”neuromorphic” into actual principles about what calculations different neural nets embody.
Human intelligence, including that of Turing or Einstein, requires only about 10 watts of energy and, more surprisingly, only around 10^14 switch events per second or less, which is basically miraculous. A modern GPU uses more than 10^18 switches/second.
I don’t think that “switches” per second is a relevant metric here. The computation performed by a single neuron in a single firing cycle is much more complex than the computation performed by a logic gate in a single switching cycle.
The amount of computational power required to simulate a human brain in real time is estimated in the petaflops range. Only the largest supercomputers operate in that range, certainly not common GPUs.
You misunderstood me: the biological switch events I was referring to are synaptic ops, and they are comparable to transistor/gate switch ops in terms of minimum fundamental energy cost in a Landauer analysis.
The amount of computational power required to simulate a human brain in real time is estimated in the petaflops range.
That is a tad too high; the more accurate figure is 10^14 ops/second (10^14 synapses * an average 1 Hz spike rate). The minimal computation required to simulate a single GPU in real time is 10,000 times higher.
That is a tad too high; the more accurate figure is 10^14 ops/second (10^14 synapses * an average 1 Hz spike rate).
I’ve seen various people give estimates on the order of 10^16 flops by considering the maximum firing rate of a typical neuron (~10^2 Hz) rather than the average firing rate, as you do.
On one hand, a neuron must do some computation whether it fires or not, and a “naive” simulation would necessarily use a cycle frequency on the order of 10^2 Hz or more. On the other hand, if the result of a computation is almost always “do not fire”, then as a random variable the result has little information entropy, and this may perhaps be exploited to optimize the computation. I don’t have a strong intuition about this.
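The gap between the two estimates is just the choice of firing rate; a quick sketch using the figures quoted above:

```python
synapses = 1e14      # ~10^14 synapses

avg_rate_hz = 1.0    # average spike rate assumed upthread
max_rate_hz = 1e2    # typical maximum firing rate of a neuron

print(f"average-rate estimate: {synapses * avg_rate_hz:.0e} synaptic ops/s")   # ~1e14
print(f"maximum-rate estimate: {synapses * max_rate_hz:.0e} ops/s")            # ~1e16, in line with the petaflops-range estimates above
```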
The minimal computation required to simulate a single GPU in real time is 10,000 times higher.
On a traditional CPU, perhaps; on another GPU, I don’t think so.