Again, this is a famous one, but Watson seems really impressive to me. It’s one thing to understand basic queries and run a database lookup in response, but its ability to handle indirect questions that would confuse many a person (guilty) was surprising.
On the other hand, its implementation (as described in The Second Machine Age) seems to be just as algorithmic, brittle, and narrow as Deep Blue’s: basically, Watson was only as good as its programmers...
Along with self-driving cars, Watson’s Jeopardy win shows that, given enough time, a team of AI engineers has an excellent chance of creating a specialized system which can outpace the best human expert in a much wider variety of tasks than we might have thought before.
The capabilities of such a team have risen dramatically since I first studied AI. Charting and forecasting those capabilities is worthwhile.
An estimate of what such a team will be able to accomplish in ten years bears directly on knowing when they will be able to do things we consider dangerous.
After those two demonstrations, what narrow projects could we give a really solid AI team that would stump them? The answer is no longer at all clear. For example, the SAT or an IQ test seems fairly similar to Jeopardy, although the NLP tasks differ.
The Jeopardy system also did not incorporate a wide variety of existing methods and solvers, because they were not needed to answer Jeopardy questions.
In short order, an IBM team could incorporate further systems into a Watson application: systems that extract information from pictures and video, for example.
“Watson’s Jeopardy win shows that, given enough time, a team of AI engineers has an excellent chance of creating a specialized system which can outpace the best human expert in a much wider variety of tasks than we might have thought before.”
One could read that comment with varying degrees of charity.
I will speak for myself, at the risk of ruffling some feathers, but we are all here to bounce ideas around, not toe any party lines, right?
To me, Watson’s win means very little, almost nothing. Expert systems have been around for years, even decades. I experimented with coding one myself, many years ago.
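For anyone who has never tinkered with one, the core of a classic expert system is tiny: a hand-written rule base plus an inference loop that fires rules until nothing new can be derived. Here is a minimal forward-chaining sketch in Python; the rules and facts are invented toy examples, not anything from Watson.

```python
# Minimal forward-chaining expert system: a hand-written rule base plus
# an inference loop that keeps firing rules until no new facts appear.
# The rules and facts below are invented toy examples.

RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def infer(initial_facts):
    """Apply every rule whose premises hold, until a fixed point."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"has_fever", "has_cough", "short_of_breath"}))
# Derives 'possible_flu', then 'see_doctor'.
```

Everything such a system “knows” sits in the rules its programmers wrote, which is exactly the point being made here.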
It shows what we already knew: given a large budget, a large team of mission-targeted programmers can hand-craft a mission-specific expert system out of an effectively unlimited pool of hardware resources, to achieve a goal like winning a souped-up game of trivia, laced with puns as well as literal questions.
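To caricature that approach: IBM’s published DeepQA overview describes generating candidate answers and scoring each one with many independent evidence scorers, whose weighted combination drives the final ranking. A toy Python sketch of that shape, with the candidates, scorers, and weights all invented purely for illustration, might look like this:

```python
# Toy generate-and-score QA pipeline, loosely the shape IBM described
# for DeepQA: propose candidate answers, score each with independent
# evidence scorers, and rank by a weighted sum. Every candidate,
# scorer, and weight below is invented for illustration.

def generate_candidates(clue):
    # A real system would pull candidates from search hits, document
    # titles, and so on; here they are hard-coded.
    return ["Toronto", "Chicago", "New York"]

def keyword_overlap(clue, candidate):
    # Crude lexical evidence: words shared between clue and candidate.
    return len(set(clue.lower().split()) & set(candidate.lower().split()))

def popularity_prior(clue, candidate):
    # Stand-in for a learned prior over answers (values made up).
    return {"Chicago": 0.6, "New York": 0.8, "Toronto": 0.4}.get(candidate, 0.1)

SCORERS = [(keyword_overlap, 0.7), (popularity_prior, 0.3)]  # hand-tuned weights

def answer(clue):
    scored = [
        (sum(weight * scorer(clue, cand) for scorer, weight in SCORERS), cand)
        for cand in generate_candidates(clue)
    ]
    return max(scored)  # (confidence, best candidate)

print(answer("This city's largest airport is named for a World War II hero"))
```

The engineering achievement is real, but structurally it is still hand-chosen scorers combined with hand-tuned weights, which is the sense in which Watson was only as good as its programmers.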
It was a billion-dollar stunt, IMO, by IBM and the related project leaders.
Has it achieved consciousness, self-awareness, evidence of compassion, a fear of death, moral intuition?
That would have impressed me; that would have signaled that we were entering a new era. (And I will try to argue rigorously, over time, that this is exactly what we really need in order to have a fighting chance of producing fAGI.) I think those not blinded by a paradigm that should have died out with logical positivism and behaviorism would admit, some fraction of them, that penetrating, intellectually honest analysis accumulates into a conviction that no mechanical decision procedure we design, no matter how spiffy our mathematics (and I was a math major with straight As in my day), can guarantee that an emotionless, compassionless, amoral, non-conscious, mechanically goal-seeking apparatus will not, inadvertently or advertently, steamroll right over us.
I will speak more about that as time goes on. But in keeping with my claim yesterday that “intelligence” and “consciousness” are not coextensive in any simple way, “intelligence” and “sentience” are likewise distinct. I think that the autonomous “restraint” we need to make AGIs into friendly AGIs requires giving them sentience, and creating conditions favorable to their discovering a morality compatible with our own.
Creativity, free will (or autonomy, in language with less philosophical baggage), emotion, a theory of ethics and meta-ethics, and a theory of motivation… we need to make progress on these, the likely basic building blocks of moral, benign, enlightened, beneficent forms of sentience, as well as on the fancy tech needed to implement this, once we have some idea what we are actually trying to implement.
And that thing we should implement is not, in my opinion, ever more sophisticated Watsons, or groups of hundreds or thousands of them, each hand-crafted to achieve a specific function (machine vision, unloading a dishwasher, …).
Oh, sure, that would work, just like Watson worked. But if we want moral intuition to develop, a respect for life to develop, we need to have a more ambitious goal.
And I actually think we can do it. Now is the time. The choice that confronts us, really, is not uAGI vs. fAGI, but dumb GOFAI vs. sentient AI.
Watson: just another expert system. Had someone given me the budget and offered to let me lead a project team to build Watson, I would have declined, because it was clear in advance that it was just a (more nuanced) brute-force, custom-crafted and tuned expert system. Its success was assured, given a deep wallet.
What did we learn? Maybe some new algorithmic optimizations or N-space data-structure topologies were discovered along the way, but nothing fundamental.
I’d have declined to lead the project (not that I would have been asked), because it was uninteresting. There was nothing to learn, and nothing much was learned, except the nuances of tech that are always acquired when you do any big distributed-supercomputing, custom-programming project.
We’ll learn as much building the next-gen weather simulator.