If you started going to college and actually worked at it a bit, you could have skipped straight to Ph.D. work if you wanted to; I did. I skipped all of the B.S. and M.S. work and went straight to the Ph.D. But if the math you’ve posted is any sign of the state of your knowledge, I don’t hold much hope of that happening, since you can’t seem to do basic derivatives correctly. When I started skipping classes, for example skipping all of calculus and linear algebra and going straight to differential equations, I had a partially finished manuscript on solving differential equations that I had been working on for a while. Now the question that logically pops up is: do I have a Ph.D. now? No, I am taking a break from that to start a company or three if I can.
GenericThinker
“GenericThinker is simply extremely confused—as the comments about the halting problem make abundantly clear. I would comment on the idea of singletons being ruled out by the speed of light—but I can’t think of anything polite to say.”
Well, I take this as a compliment coming from people who post here. If an ignorant person thinks you’re wrong, chances are you’re on the right track. If you want to correct my idea, feel free, but if the best you’ve got is parroting Eliezer’s comment, then I have nothing to fear. Eliezer’s comments about my post have already been proved false once, since my comments on the importance of computational power affecting future chip designs were right on the money. But of course they were, since I have actually designed computer chips before; big surprise, someone who posts here actually posting based on knowledge. I would encourage you, Tim, to look into that concept and try it out.
“GenericThinker, please stop posing as an authority on things you know very little about (e.g. the halting problem). If you don’t actually work at Intel or another chip fab, I’m not particularly interested in your overestimates of how much you know about the field.”
As to the point of the halting problem, my point is correct: the question of whether a given AI program halts may not be particularly interesting, but my response was directed at the post above mine, which I took to be implying that since an AGI is not an arbitrary program, the halting problem does not apply. If I misunderstood that person’s post, fine, I retract my comment. If that was what was meant, then I am correct. All the halting problem does is ask: given some program and some input, does the program halt or run forever? That is computability 101, and it is also related to Gödel’s incompleteness theorem.
“GenericThinker, please stop posing as an authority on things you know very little about (e.g. the halting problem). If you don’t actually work at Intel or another chip fab, I’m not particularly interested in your overestimates of how much you know about the field.”
How precisely do you know I have never worked at Intel? You have admitted you don’t know this issue, so how would you have a clue whether I am right (cite your sources to prove me wrong)? In fact I am correct: the ability to simulate in real time is directly related to the amount of computational power available, which greatly affects the level of complexity you can design into your chip. Look at the performance achieved in 1998, which was around 1 TFLOP if I recall correctly. The tera-scale research chip I spoke of achieves roughly that same performance on a single chip; with 1998 hardware it would be extremely difficult to simulate the tera-scale design, because emulation always requires more computational power than the design being emulated. The inability to simulate accurately without paying a huge premium would make the design of future chips extremely difficult; it would limit the design space. Take a different example, the SR-71: part of the reason the SR-71 ended up looking the way it did was that, at the time, engineers could only simulate simple shapes for supersonic flight. The same applies in processor design. If you cannot simulate the design without a Blue Gene supercomputer, it is very hard to make improvements that are economical. Further, it is extremely hard to prove your design correct, which is a huge part of processor design. Obviously this is not the only issue, but it is the point you made, which is still false; whatever you may believe is totally irrelevant to that point.
Actually, Intel uses things like FPGAs to emulate and prototype future processor designs, since FPGAs have programmable logic.
On a final note, you are a grade-school dropout; don’t talk to me about knowledge or the pretense of knowledge. The only fraud here is you pretending to be an AI researcher (what a joke). You may feel free to critique me when you have published a technical paper proving mastery of mathematics beyond basic statistics and calculus, and when you have patents to your name. Until that point you should be careful making such claims, since you haven’t a leg to stand on.
“If anyone from Intel reads this, and wishes to explain to me how it would be unbelievably difficult to do their jobs using computers from ten years earlier, so that Moore’s Law would slow to a crawl—then I stand ready to be corrected. But relative to my present state of partial knowledge, I would say that this does not look like a strong feedback loop, compared to what happens to a compound interest investor when we bound their coupon income at 1998 levels for a while.”
This is simple to disprove whether one is part of Intel or not. The issue is that current processors, with multiple cores and millions, now billions, of transistors, are getting so complex that the actual design has to be done on a computer. What is more, before fabrication the design needs to be simulated to check for logic errors and to ensure good performance. It would be impossible to simulate a tera-scale research chip on 1998 hardware; simulating a computer’s design requires a lot of computational power. The advances made in going from 65nm to 45nm, and now moving to 32nm, were enabled by computers that could better simulate the designs. Without today’s computers it would be hard to design or run the fabrication systems for future processors. Since you admit partial knowledge I won’t bore you with the details; suffice it to say that your claim as stated is incorrect.
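To make the simulation-cost point concrete, here is a rough back-of-the-envelope calculation in Python. The gate count, host-operations-per-gate figure, and machine speeds are assumed illustrative numbers, not Intel data; the only point is that software simulation costs many host operations per simulated gate per simulated cycle, so older hardware pays an enormous time penalty.

# Back-of-the-envelope cost of software logic simulation (assumed numbers).
gates = 100e6             # hypothetical gate count of a modern design
host_ops_per_gate = 10    # assumed host operations to evaluate one gate once
sim_cycles = 1e6          # cycles of chip behavior we want to verify

total_host_ops = gates * host_ops_per_gate * sim_cycles

host_1998 = 1e9           # ~1 GFLOP-class machine, rough late-1990s figure
host_modern = 1e12        # ~1 TFLOP-class machine, rough modern figure

print(f"1998-class host:   {total_host_ops / host_1998 / 3600:,.0f} hours")
print(f"modern-class host: {total_host_ops / host_modern / 3600:,.1f} hours")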
I would, however, like to point out a misconception about Moore’s law: the law never says speed increases, merely that the number of transistors doubles roughly every 18 months. There are a lot of factors apart from the number of transistors that play into computer speed. While more transistors are useful, one has to match them with an architecture that takes advantage of them; otherwise you will not necessarily get the speed increase.
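For what the transistor-count version of the law implies numerically, here is a one-line calculation; the 18-month doubling period is the figure used above, and actual industry cadence has varied.

# Transistor growth implied by doubling every 18 months (pure arithmetic).
years = 10
doublings = years * 12 / 18            # doubling periods in `years`
print(f"~{2 ** doublings:.0f}x more transistors after {years} years")  # ~102x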
That is not completely true. The halting problem deals with any program: given the program and some input, it asks whether the program eventually halts or runs forever. The question can be asked of any program and is formally undecidable. So an AGI would not be exempt from the halting problem; it just may not be an interesting question to ask of an AGI program.
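For reference, here is a minimal Python sketch of the standard diagonalization argument for why no total halting decider can exist. The halts() function is a hypothetical placeholder, not a real API.

def halts(program_source: str, data: str) -> bool:
    """Hypothetical oracle: True iff the program halts on the given input."""
    raise NotImplementedError("no such total decider can exist")

def diagonal(program_source: str) -> None:
    # Do the opposite of whatever the oracle predicts about the program
    # run on its own source. Feeding diagonal() its own source then yields
    # a contradiction either way, so halts() cannot be both total and correct.
    if halts(program_source, program_source):
        while True:   # loop forever if the oracle says "halts"
            pass
    # otherwise return immediately, contradicting the oracle's "loops" answer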
“But the much-vaunted “massive parallelism” of the human brain, is, I suspect, mostly cache lookups to make up for the sheer awkwardness of the brain’s serial slowness—if your computer ran at 200Hz, you’d have to resort to all sorts of absurdly massive parallelism to get anything done in realtime. I suspect that, if correctly designed, a midsize computer cluster would be able to get high-grade thinking done at a serial speed much faster than human, even if the total parallel computing power was less.”
That is just patently false. The brain is massively parallel, and the parallelism is not cache lookups; it would be more like current GPUs. The computational estimate does not account for why the brain has as much computational power as it does, roughly 10^15 operations per second or more. When you talk about relative speed, what you have to remember is that we are tied to our perception of time, which runs at roughly 30-60 frames per second. Speeds beyond 200 Hz aren’t necessary, since the brain doesn’t have RAM or caches like a traditional computer in which to store precomputed solutions. By running at around 200 Hz the brain is fast enough to give us real-time perception while still having time for multi-step operations. A nice thing would be if we could think about multiple things in parallel, the way a computer with multiple processors can focus on more than one application at the same time.
I think all these discussions of the brain’s speed are fundamentally misguided and show a lack of understanding of current neuroscience, computational or otherwise. To say “run the brain at 2 GHz”, what would that even mean? How would that work with our sensory systems? If you have only one processing element with 6-12 functional units, then 2 GHz is nice; if you have billions of little processors and your senses all run at around 30-60 frames per second, then 200 Hz is just fine without being overkill, unless your algorithms require more than about 100 serial steps. My guess would be that the brain uses parallel algorithms to process information precisely to limit that possibility.
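The arithmetic behind that serial-step budget is straightforward; the firing rate, frame rate, and recognition time below are the rough figures assumed above, not measurements.

# Back-of-the-envelope "serial budget" at neural speeds (assumed figures).
firing_rate_hz = 200          # rough ceiling on sustained neural firing
frame_rate_fps = 30           # low end of the perceptual update rate above
recognition_time_s = 0.5      # assumed duration of one recognition episode

per_frame = firing_rate_hz / frame_rate_fps
per_episode = firing_rate_hz * recognition_time_s
print(f"~{per_frame:.0f} serial steps per perceptual frame")       # ~7
print(f"~{per_episode:.0f} serial steps per recognition episode")  # ~100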
On the issue of mental processing power, look at savants: some of them can count in primes all day long or can recite a million digits of pi. For some reason the dysfunction in their brains allows them to tap into all sorts of computational power. The big issue with the brain is that we cannot focus on multiple things, and the way in which we perform, for example, mental arithmetic is not nearly as streamlined as a computer. For my own part, I am at my limit multiplying a three-digit number by a three-digit number in my head. This is of course a function of many things, but it is in part a function of the limitations of short-term memory and of the way in which our brains allow us to do math.
“The explosion in computing capability is a historical phenomenon that has been going on for decades. For “specific numbers”, for example, look at the well-documented growth of the computer industry since the 1950s. Yes, there are probably limits, but they seem far away—so far away, we are not even sure where they are, or even whether they exist.”
The growth you are referring to has a hard upper limit, which is reached when transistors are measured in angstroms, the point at which they start playing by the rules of quantum mechanics. That is the hard upper limit of the computing growth you are referring to. As for quantum computing, which may or may not take us further, there has been a lot of recent work that casts doubt on its ability to solve many of our computing problems. There are a lot of other possible computing technologies; it is just not clear yet which one will come out on top.
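For a sense of how far away that limit is, here is a rough scaling calculation; the 0.7x shrink per generation is the classic rule of thumb, and the numbers are illustrative rather than any kind of roadmap.

# How many ~0.7x process shrinks fit between 45 nm and the ~1 nm (10 angstrom)
# scale where quantum effects dominate? (Illustrative rule-of-thumb numbers.)
feature_nm = 45.0
shrink_per_generation = 0.7
generations = 0
while feature_nm > 1.0:
    feature_nm *= shrink_per_generation
    generations += 1
print(f"{generations} generations, ending near {feature_nm:.2f} nm")  # 11, ~0.89 nm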
I really fail to understand this entire issue of anti-theism. If we think about the question logically, I think we can all say humans are defective and that we are not terribly good moral agents. Whether God exists or not doesn’t seem very relevant, in the sense that whether one is an atheist, a theist, or whatever, the idea of becoming a better person morally is still important. I would argue that if your belief in God, founded or not, drives you to behave in a more moral way, then so be it. I think it is a fundamental waste of time to debate the unanswerable question of whether God exists, since it is not provable beyond circumstantial evidence that is open to interpretation. If God does exist, it makes issues of evolution easier to explain and less surprising that it managed to produce human intelligence; and if not, then if the idea of God drives people to be better, great. Sitting here bashing God seems like a bit of an illogical thing to do in the grand scheme of things.
Andrew,
Whether you believe in God or not, you still have an underlying assumption that you take for granted. Christians take the evidence of the world around us to say God exists; an atheist looks at the same evidence and says God does not exist. The issue here is ultimately that the statement “God exists” is formally unprovable and totally unscientific. This means that ultimately you have faith that God does not exist and others have faith that He does; neither side can formally prove itself correct.
Will
“Also do you have some FLOPS per cubic centimeter estimations for nanocomputers? I looked at this briefly, and I couldn’t find anything. It references a previous page that I can’t find.”
FLOPS are not a good measure of computing performance, since floating-point calculations are only one small aspect of what computers have to do. Further, the term “nanocomputers” as used here is misleading, since all of today’s processors could be classified as nanocomputers, the current ones using the 45nm process and moving to the 32nm process.
Eliezer
“Just to make it clear why we might worry about this for nanotech, rather than say car manufacturing—if you can build things from atoms, then the environment contains an unlimited supply of perfectly machined spare parts. If your moleculary factory can build solar cells, it can acquire energy as well.”
Ignoring the other obvious issues in your post, this is of course not true. One cannot just bond any atom to any atom and get something useful; this is well known. I would also like to point out that everyone tosses around the term “nano”, including the Foresight Institute, but the label has been so abused by projects that don’t deserve it that it seems a bit meaningless.
The other issue is the concept, which you seem to imply, that we will build everything from atoms in the future. This is silly, since building a 747 from the atoms up is much harder than just doing it the way we do it now. Nano-engineering has to be applied to the right problems to be useful.
“I don’t think they’ve improved our own thinking processes even so much as the Scientific Revolution—yet. But some of the ways that computers are used to improve computers, verge on being repeatable (cyclic).”
This is not true either: current computers are designed using the previous generation. If we look at how things are done on current processors compared with how they used to be done, we see large improvements. The computing industry has made huge leaps forward since the early days.
Finally, I have trouble with the assumption that once we have advanced nanotech, whatever that means, we will all of a sudden have access to tremendously more computing power. Nanotech as such will not do this. Regardless of whether we ever have molecular manufacturing, we will have 16nm processors in a few years, and computing power should continue to follow Moore’s law until processor components are measured in angstroms. That being the case, the computing power to run the average estimates of the human brain’s computational power already exists; the IBM Roadrunner system is one example. The current issue is the software: there is no end of possible hardware improvement, but unless the software keeps up, who cares.
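To put that Roadrunner comparison in numbers (the brain figure is the rough 10^15 estimate quoted earlier; both values are order-of-magnitude only):

# Order-of-magnitude comparison behind the Roadrunner remark (rough figures).
roadrunner_flops = 1.0e15      # ~1 petaflop sustained, 2008-era Roadrunner
brain_ops_per_second = 1.0e15  # the ~10^15 ops/s estimate mentioned above
print(f"ratio: {roadrunner_flops / brain_ops_per_second:.1f}x")   # ~1.0x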
“For the record, I’ve been a coder and judged myself a reasonable hacker—set out to design my own programming language at one point, which I say not as a mark of virtue but just to demonstrate that I was in the game. (Gave it up when I realized AI wasn’t about programming languages.)”
AI is about programming languages insofar as AI is about computers, and current “AI” languages really aren’t that great. I would say it would be of huge value if someone could design an AI-specific language better than Lisp. A programming language that better handles massive parallelism would also be of great value to AI. Devoting yourself to that goal would further AI, since the problem is one of theory and one of enabling technology.
Just as an aside, “good hacker in your own view” isn’t a good metric, since people always think of themselves as better at something than they really are.
PK, you are absolutely right. We can even take things a step further and say positive AI will happen regardless of Eliezer’s involvement, and even go so far as to say that his involvement, lacking the needed experience in both math and programming, will be as a cheerleader and not as someone who makes it happen.
Mike
“Can’t do basic derivatives? Seriously?!? I’m for kicking the troll out. His bragging about mediocre mathematical accomplishments isn’t informative or entertaining to us readers.”
Did you look at his derivatives? He writes “dy/dt = F(y) = Ay whose solution is y = e^(At)”. How does y = e^(At) give dy/dt = Ay? Basic derivatives 101: d/dx e^x = e^x.
“Solving
dy/dt = e^y
yields
y = -ln(C - t)”
Again, the solution of dy/dt = e^y is not y = -ln(C - t), unless e is not the irrational constant it normally is; and even if that were the case the solution would still be wrong. Again, refer to a basic derivative table...
So I am a troll because I point out errors? OK, fine, then I am a troll and will never come back. That’s interesting; so you must be a saint for taking these errors as the truth.
I apologize that I am not amusing you, but I am not a court jester like yourself.
Mediocre accomplishments, hmm... well, did you skip all of your bachelor’s work and go straight to grad school in mathematics? I would bet not. Don’t talk of mediocrity unless you can prove yourself above that standard. I believe your credentials would be needed to prove that, or some superior accomplishments of your own, if you have any. I await eagerly.