The problem is that you conflate into ‘general intelligence’ both the problem-solving aspect and the ‘wanting to make something real’ aspect. The latter requires some extra secret sauce that won’t appear out of thin air, and which nobody (save for a few lunatics) wants to add.
Consider software that can analyze a large class of computable functions and find inputs that maximize them. You can use it to make better microchips, a cure for cancer, a self-driving car. You can even use it to design a paperclip factory. What it does not do, and cannot do without a lot of secret sauce that it won’t design into itself, is run amok paperclip-maximizer style. (The closest thing to running amok is AIXI with a huge number of steps specified, and it’s dubious that even this would self-preserve, or trade some reward for better sensory input, or trade some reward for not going blind. As self-propelled artificial idiocies go, it is incredibly benign given how much computing power it needs and how much could be done with that much computing power.)
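To make the distinction concrete, here is a minimal sketch (my own illustration, not anything from the comment above) of such a tool-style optimizer: given a caller-supplied computable objective and a candidate set, it searches for a high-scoring input, reports it, and stops. Notice what is absent: no world model, no goal-pursuit loop, no self-preservation. Each of those would be extra machinery someone would have to deliberately add.

```python
# Hypothetical sketch of a "tool" optimizer: answers the query
# "which input maximizes this function?" and then halts.
import random

def maximize(objective, candidates, budget=10_000, seed=0):
    """Return the best (input, value) pair found by sampled search.

    The optimizer has no goals of its own: it only scores inputs
    with the objective the caller handed it, and returns an answer.
    """
    rng = random.Random(seed)
    best_x, best_v = None, float("-inf")
    for _ in range(budget):
        x = rng.choice(candidates)
        v = objective(x)
        if v > best_v:
            best_x, best_v = x, v
    return best_x, best_v

# Usage: "design the best paperclip factory" reduces to scoring
# candidate designs; the tool just reports the highest-scoring one.
designs = list(range(100))
best, score = maximize(lambda d: -(d - 42) ** 2, designs)
```

The point of the sketch: “wanting” to build the factory, acquire resources, or resist shutdown is nowhere in this loop, and would not emerge from scaling it up.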
What I am expecting is good progress on a rather general problem solver (but not a general mind), which will not work even remotely like the narrow speculations of science fiction say it works. The situation is like imagining that some technology will change life in one very particular way (a very privileged hypothesis), being unable to see any other way (the ‘I see no alternative, therefore no alternative exists’ fallacy), and then having reality turn out very different.