Watson’s Jeopardy win shows that, given enough time, a team of AI engineers has an excellent chance of creating a specialized system which can outpace the best human expert in a much wider variety of tasks than we might have thought before.
One could read that comment with varying degrees of charity. I will speak for myself, at the risk of ruffling some feathers; we are all here to bounce ideas around, not to toe any party lines, right? To me, Watson’s win means very little, almost nothing. Expert systems have been around for years, even decades. I experimented with coding one myself, many years ago.
It shows what we already knew: given a large budget and an effectively unlimited pool of hardware resources, a large team of mission-focused programmers can hand-craft a mission-specific expert system to achieve a goal like winning a souped-up game of trivia, laced with puns as well as literal questions.
It was a billion dollar stunt, IMO, by IBM and related project leaders.
Has it achieved consciousness, self-awareness, evidence of compassion, a fear of death, moral intuition?
That would have impressed me; it would have shown that we were entering a new era. (And I will try to argue rigorously, over time, that this is exactly what we really need in order to have a fighting chance of producing fAGI.) I think those not blinded by a paradigm that should have died out with logical positivism and behaviorism would admit (some fraction of them, at least) that penetrating, intellectually honest analysis builds a conviction that no mechanical decision procedure we design, no matter how spiffy our mathematics (and I was a math major with straight As in my day), can guarantee that an emotionless, compassionless, amoral, non-conscious, mechanically goal-seeking apparatus will not, inadvertently or advertently, steamroller right over us.
I will speak more about that as time goes on. But in keeping with my claim yesterday that “intelligence” and “consciousness” are not coextensive in any simple way, “intelligence” and “sentience” are likewise disjoint. I think that the autonomous “restraint” we need, to make AGIs into friendly AGIs, requires giving them sentience, and creating conditions favorable to their discovering a morality compatible with our own.
Creativity, free will (or autonomy, in language with less philosophical baggage), emotion, a theory of ethics and meta-ethics, and a theory of motivation… we need to make progress on these, the likely basic building blocks of moral, benign, enlightened, beneficent forms of sentience, as well as progress on the fancy tech needed to implement all this, once we have some idea what we are actually trying to implement.
And that thing we should implement is not, in my opinion, ever more sophisticated Watsons, or groups of hundreds or thousands of them, each hand-crafted to perform a specific function (machine vision, unloading a dishwasher, …). Oh, sure, that would work, just like Watson worked. But if we want moral intuition to develop, and a respect for life to develop, we need a more ambitious goal.
And I actually think we can do it. Now is the time. The choice that really confronts us is not uAGI vs. fAGI, but dumb GOFAI vs. sentient AI.
Watson: just another expert system. Had someone given me the budget and offered to let me lead a project team to build Watson, I would have declined, because it was clear in advance that it was just a (more nuanced) brute-force, custom-crafted and tuned expert system. Its success was assured, given a deep wallet.
What did we learn? Maybe some new algorithmic optimizations or N-space data-structure topologies were discovered along the way, but nothing fundamental.
I’d have declined to lead the project (not that I would have been asked) because it was uninteresting. There was nothing to learn, and nothing much was learned, except the nuances of technique that are always acquired in any big distributed-supercomputing, custom-programming project.
We’ll learn as much building the next-gen weather simulator.
It may have been a judgment call by the writer (Bostrom) and his editor: he is trying to get the word out as widely as possible that this is a brewing existential crisis. In this society, how do you get the attention of most people (policymakers, decision makers, basically “the Suits” who run the world)?
Talk about the money. Most of even educated humanity sees the world in one color (I can’t say green anymore, but the point is made).
Try to motivate people about global warming? (“…um… but, but… well, it might cost JOBS next month, if we try to save all future high-level earthly life from extinction… nope, the price [lost jobs] of saving the planet is obviously too high…”)
Want to get non-thinkers even to pick up the book and read the first chapter or two? Talk about money.
If your message is important to get in front of maximum eyeballs, sometimes you have to package it a little bit, just to hook their interest. Then morph the emphasis into what you really want them to hear, for the bulk of the presentation.
Of course, strictly speaking, what I just said was tangential to the original point, which was whether the summary reflected the predominant emphasis of the pages of the book it ostensibly covered.
But my point about PR considerations was worth making. Also, Katja (or someone) did, I think, mention formulating a reading guide for Bostrom’s book; any author of such a guide might already be thinking about this “hook ’em by beginning with economics” tactic, to make the book itself more likely to be read by a wider audience.