Congratulations to Denis: Emory System He Developed Ranks First in Live Question Answering Challenge

A question answering system developed at Emory by Ph.D. student Denis Savenkov was ranked first in the LiveQA challenge, as announced at the 2016 Text Retrieval Conference (TREC 2016) in November 2016. The goal of the LiveQA challenge was to quickly answer questions posted by real users on a popular website, Yahoo Answers. This task pushes the limits of automatic question answering and information retrieval, as these questions express complex information needs that people chose to ask a community rather than turning to existing search engines. On top of that, the systems had to respond within a minute, bringing additional realism to the challenge.

26 systems from North America, Europe, Asia, and Australia participated in the LiveQA challenge. The system responses were judged by assessors from the National Institute of Standards and Technology (NIST), which co-organized the challenge together with Yahoo Labs and Emory University. The Emory system, developed by Denis Savenkov of the Emory IR Lab, was ranked first among all participants. Denis is advised by Prof. Eugene Agichtein and is a Ph.D. student in the Emory Computer Science & Informatics program.

The top-scoring Emory system, CRQA, uses many sources of information to automatically generate a set of candidate responses, then scores them using state-of-the-art machine learning models. CRQA operates as a “cyborg,” combining the best of human intuition with automatic responses by using a novel real-time crowdsourcing module to obtain additional feedback from human workers while still returning answers quickly. According to the quality judgments provided by professional NIST assessors, CRQA was able to provide acceptable answers to more than 60% of the user questions and perfectly answered over 22% of them.
More details about the Emory system are published in a paper titled “CRQA: Crowd-powered Real-time Automated Question Answering System,” by Denis Savenkov and Eugene Agichtein, in the proceedings of the AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2016). Despite significant advances in automated question answering, the LiveQA challenge demonstrated that a large gap still remains between automatic question answering systems and good human answers: approximately 50% of the human answers were judged to be excellent, compared to only 22% for the top-ranked Emory system. The LiveQA challenge will run again in 2017. More details about the challenge are available at
