The concern that ML has no solid theoretical foundations reflects the old computer science worldview, which is all based on finding bit-exact solutions to problems within vague asymptotic resource constraints.
It is an error to confuse the “exact / approximate” axis with the “theoretical / empirical” axis. There is plenty of theoretical work in complexity theory on approximation algorithms.
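To make that concrete, here is a minimal sketch in Python (my own toy example, nothing from the thread above): the textbook greedy 2-approximation for minimum vertex cover. The algorithm is “approximate”, yet the factor-2 bound is a proved theorem, not an empirical observation, which is exactly what theoretical work on approximation algorithms looks like.

```python
def approx_vertex_cover(edges):
    """Return a vertex cover of size at most twice the optimum."""
    cover = set()
    for u, v in edges:
        # Skip edges already covered by a previously chosen endpoint.
        if u in cover or v in cover:
            continue
        # Take *both* endpoints of an uncovered edge. Any optimal cover must
        # contain at least one of them, which yields the factor-2 guarantee.
        cover.add(u)
        cover.add(v)
    return cover

if __name__ == "__main__":
    # Toy graph: a path 1-2-3-4 plus the extra edge 2-4.
    edges = [(1, 2), (2, 3), (3, 4), (2, 4)]
    print(approx_vertex_cover(edges))  # {1, 2, 3, 4}; an optimal cover is {2, 3}
```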
A good ML researcher absolutely needs a good idea of what is going on under the hood—at least at a sufficient level of abstraction.
There is a difference between “having an idea” and “solid theoretical foundations”. Chemists before quantum mechanics had lots of ideas. But they didn’t have a solid theoretical foundation.
Why not test safety long before the system is superintelligent? Say, when it is a population of 100 child-like AGIs. As the population grows larger and more intelligent, the safest designs are propagated and made safer.
Because this process is not guaranteed to yield good results. Evolution did the exact same thing to create humans, optimizing for genetic fitness. And humans still went and invented condoms.
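A toy way to see the same point (everything below, the model and the numbers, is made up for illustration): an outer selection loop only ever “sees” behavior in the environment it runs in, so a trait that makes no difference there rides along for free, and can defeat the original objective the moment the environment changes.

```python
import random

random.seed(0)

def offspring(agent, condoms_exist):
    """The objective the outer process was selecting for."""
    if condoms_exist and agent["will_use_condoms"]:
        return 0.0
    return agent["sex_drive"]  # in the ancestral setting, drive ~ offspring

def internal_reward(agent):
    """The heuristic the agents actually carry around and act on."""
    return agent["sex_drive"]

# Phase 1: "ancestral environment", no condoms. Select the agents with the
# most offspring; the winners are simply those with the strongest drive.
population = [{"sex_drive": random.random(),
               "will_use_condoms": random.random() < 0.5}
              for _ in range(1000)]
population.sort(key=lambda a: offspring(a, condoms_exist=False), reverse=True)
winners = population[:100]

# Selection never touched "will_use_condoms": the trait made no difference
# in the ancestral environment, so roughly half the winners still carry it.

# Phase 2: condoms exist. The winners keep maximizing their internal reward
# (unchanged), but the objective they were selected for drops sharply.
avg_reward = sum(internal_reward(a) for a in winners) / len(winners)
avg_offspring = sum(offspring(a, condoms_exist=True) for a in winners) / len(winners)
print(f"average internal reward:          {avg_reward:.2f}")
print(f"average offspring (with condoms): {avg_offspring:.2f}")  # roughly half
```

The analogue of the condom here is the will_use_condoms flag: selection never saw it, so it never selected against it.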
So it may actually be easier to drop the traditional computer science approach completely.
When the entire future of mankind is at stake, you don’t drop approaches because it may be easier. You try every goddamn approach you have (unless “trying” is dangerous in itself of course).
There is a difference between “having an idea” and “solid theoretical foundations”. Chemists before quantum mechanics had lots of ideas. But they didn’t have a solid theoretical foundation.
That’s a bad example. You are essentially asking researchers to predict what they will discover 50 years down the road. A more appropriate example is a person thinking he has medical expertise after reading bodybuilding and nutrition blogs on the internet, vs a person who has gone through medical school and is an MD.
I’m not asking researchers to predict what they will discover. There are different mindsets of research. One mindset looks for heuristics that maximize short-term progress on problems of direct practical relevance. Another looks for a rigorously defined overarching theory. MIRI uses the latter mindset, while most other AI researchers are much closer to the former.
Evolution did the exact same thing to create humans, optimizing for genetic fitness. And humans still went and invented condoms.
Though humans are the most populous species of large animal on the planet.
Condoms were invented because evolution, being a blind watchmaker, forgot to make the sex drive tunable to child mortality, so humans found a loophole. But whatever function humans are collectively optimizing, it still closely resembles genetic fitness.
Looking at Japan, that’s not self-evident to me :-/
Google “waifu”. No wait, don’t. :D
I’m familiar with the term :-)
So, you were talking about what humans are optimizing for..? X-)