The problem is that it really is utter and complete bullshit. I really do think so. On the likelihood of convincing anyone: there's this data point: someone called it bullshit. That's probably all the impact that could possibly be made (unless speaking from a position of power).
By technobabble, I do mean the kind used in science fiction when something has to be explained, done with great dedication (more along the lines of the wiki article I linked).
edit: e.g. you have an animal-desire-based intuition of what an AI will want to do—obviously the AI will want to make its prediction come true in the real world (it well might, if it is a mind upload). That doesn't sound very technical. You replace "want" with "utility", replace a few other things with technical-looking equivalents, and suddenly it sounds technical to such a point that experts don't understand what you are talking about, but don't risk assuming you are talking nonsense rather than badly communicating some sense.
Ohkay… but… if you're using a utility-function-maximizing system architecture, that is a great simplification to the system that really gives a clear meaning to 'wanting' things, in a way that it doesn't have for neural nets or whatnot.
The mere fact that the utility function to be specified has to be far, far more complex for a general intelligence than for a driving robot doesn't change that. The vagueness is a marker for difficult work to be done, not something they're implying they've already done.
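To make the point concrete: here is a minimal, purely illustrative sketch of what 'wanting' means in a utility-maximizing architecture (all names here are hypothetical, not from anyone's actual system). The agent 'wants' whatever outcome its explicit utility function scores highest—there is no anthropomorphic reading needed:

```python
# Illustrative sketch only: in a utility-maximizing architecture,
# "wanting" has a precise meaning -- the agent picks the action whose
# predicted outcome scores highest under an explicit utility function,
# rather than having preferences smeared across learned weights.

def choose_action(state, actions, predict, utility):
    """Pick the action that maximizes utility of the predicted next state."""
    return max(actions, key=lambda a: utility(predict(state, a)))

# Toy "driving robot": utility penalizes distance from lane center (state 0).
predict = lambda state, a: state + a   # trivial dynamics model (hypothetical)
utility = lambda s: -abs(s)            # prefer states near lane center

print(choose_action(3, [-1, 0, 1], predict, utility))  # prints -1 (steer back)
```

This is exactly the simplification the comment points at: the hard, unsolved part is specifying `utility` for a general intelligence, not the maximization loop itself.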