Since her launch in 2014, XiaoIce has [...] succeeded in establishing long-term relationships with many [users].
Wouldn’t have expected to read this in the abstract of an AI paper yet.
Figure 1: A sample of conversation sessions between a user and XiaoIce in Chinese (right) and English translation (left), showing how an emotional connection between the user and XiaoIce has been established over a 2-month period. When the user encountered the chatbot for the first time (Session 1), he explored the features and functions of XiaoIce in conversation. Then, in 2 weeks (Session 6), the user began to talk with XiaoIce about his hobbies and interests (a Japanese manga). By 4 weeks (Session 20), he began to treat XiaoIce as a friend and asked her questions related to his real life. After 7 weeks (Session 42), the user started to treat XiaoIce as a companion and talked to her almost every day. After 2 more weeks (Session 71), XiaoIce became his preferred choice whenever he needed someone to talk to.
There is a paper describing the architecture: https://arxiv.org/abs/1812.08989
It looks like the system is composed of many independent skills plus an algorithm that picks which skill to use at each state of the conversation. Some of the skills use neural nets, like a CNN for parsing images and an RNN for completing sentences, but the models look relatively small.
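To make that concrete, here's a rough Python sketch of what a skill-dispatch loop like that could look like. To be clear, the skill names, the score/respond interface, and the greedy arg-max selection are just my guesses for illustration; the paper describes the actual selection as a learned, hierarchical dialogue policy, not keyword matching.

    from dataclasses import dataclass, field
    from typing import Callable, List

    # Hypothetical dialogue state: the latest user message plus running history.
    @dataclass
    class DialogueState:
        user_message: str
        history: List[str] = field(default_factory=list)

    # A "skill" here is just a relevance scorer paired with a response generator.
    # Real skills would wrap models, e.g. an image-commenting CNN or an
    # RNN-based chit-chat generator.
    @dataclass
    class Skill:
        name: str
        score: Callable[[DialogueState], float]
        respond: Callable[[DialogueState], str]

    def select_and_respond(state: DialogueState, skills: List[Skill]) -> str:
        """Greedy policy: pick the highest-scoring skill and let it answer."""
        best = max(skills, key=lambda s: s.score(state))
        state.history.append(state.user_message)
        reply = best.respond(state)
        state.history.append(reply)
        return reply

    # Toy skills with keyword-based scoring standing in for learned classifiers.
    skills = [
        Skill(name="image_comment",
              score=lambda s: 1.0 if "[image]" in s.user_message else 0.0,
              respond=lambda s: "Nice picture!"),
        Skill(name="chit_chat",
              score=lambda s: 0.5,  # always a reasonable fallback
              respond=lambda s: f"Tell me more about '{s.user_message}'."),
    ]

    print(select_and_respond(DialogueState("I just read a great manga"), skills))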
Also this feels kinda creepy as a caption.