@Aron, wow, from your initial post I thought I was giving advice to an aspiring undergraduate, glad to realize I’m talking to an expert :-)
Personally, I continually bump up against performance limitations. This is often due to bad coding on my part and the overuse of MATLAB for-loops, but I still have the strong feeling that we need faster machines. In particular, I think full intelligence will require processing VAST amounts of raw unlabeled data (video, audio, etc.), and that will require fast machines. The application of statistical learning techniques to vast unlabeled data streams is about to open new doors. My take on this idea is spelled out better here.
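(As an aside, for anyone hitting the same wall: the for-loop problem I mean is the classic interpreted-loop overhead, and the usual fix is to vectorize. Here is a minimal sketch in Python/NumPy rather than MATLAB, since the idea carries over directly; the function names and the sum-of-squares task are just illustrative, not anything from my actual code.)

```python
import numpy as np

def sum_of_squares_loop(x):
    # Element-by-element loop: the pattern that is slow in
    # interpreted languages like MATLAB or plain Python.
    total = 0.0
    for v in x:
        total += v * v
    return total

def sum_of_squares_vectorized(x):
    # One vectorized call: the per-element work moves into
    # optimized native code, typically orders of magnitude faster.
    return float(np.dot(x, x))

x = np.arange(1000, dtype=np.float64)
assert sum_of_squares_loop(x) == sum_of_squares_vectorized(x)
```

On large arrays the vectorized version is dramatically faster, which is exactly why sloppy loop-heavy code makes the machine feel slower than it is.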
I am suspicious of attempts to define intelligence for the following reason. Too often, they lead the definer down a narrow and ultimately fruitless path. If you define intelligence as the ability to perform some function XYZ, then you can sit down and start trying to hack together a system that does XYZ. Almost invariably this will result in a system that achieves some superficial imitation of XYZ and very little else.
Rather than attempting to define intelligence and move in a determined path toward that goal, we should look around for novel insights and explore their implications.
Imagine if Newton had followed the approach of “define physics and then move toward it”. He might have decided that physics is the ability to build large structures (an understanding of physics is certainly helpful, even necessary, for this). He might then have spent all his time investigating the material properties of various kinds of stone—useful, perhaps, but missing the big picture. Instead, he looked around in the most unlikely places to find something interesting that had very little immediate practical application. That should be our mindset in pursuing AI: the scientist’s approach, rather than the engineer’s.