
Artificial intelligence & robotics

Danqi Chen

Teaching machines how to read

Year Honored
2019

Region
China

Danqi Chen, one of the most noteworthy rising stars in NLP, has devoted the past seven years to building machines that can better understand human language.

In the fall of 2012, Danqi entered Stanford University as a graduate student and began working with Christopher Manning, the director of the Stanford AI Lab and a leading expert in NLP. The following year, when deep learning approaches began to show promise on a range of language problems, Danqi was immediately intrigued. Building on a solid computer science foundation from Tsinghua University and her training at the Stanford AI Lab, she quickly grew into one of the pioneers of applying deep learning to NLP, all while pursuing her doctoral studies.

NLP research generally falls into two categories of problems: analyzing and understanding the structure of language, and directly tackling applications such as machine translation, question answering, and dialogue systems. During her Ph.D., Danqi did important research contributing to both, including work on syntactic parsing, knowledge base construction, question answering, and dialogue systems.

For instance, Danqi’s 2014 paper “A Fast and Accurate Dependency Parser Using Neural Networks” presented the first successful neural-network model for dependency parsing: an accurate and fast parser that analyzes the grammatical structure of sentences. It also laid the foundation for the subsequent parsers developed by Google’s NLP team.

In addition, Danqi’s 2017 paper “Reading Wikipedia to Answer Open-Domain Questions” opened up the direction of combining information retrieval with neural reading comprehension for open-domain question answering.

In 2018, Danqi completed her 156-page doctoral thesis, “Neural Reading Comprehension and Beyond.” The dissertation focuses on reading comprehension and question answering, and shortly after the university officially released it, it became one of Stanford’s most popular doctoral dissertations of the past decade. It was later translated into Chinese by Chinese NLP researchers and is considered a must-read by many in the field.

Danqi has now embarked on a new journey. She is building her own research group at Princeton University, hoping to solve more fundamental problems in NLP, such as building intelligent agents that can effectively access, organize, and reason over knowledge. This remains one of the key challenges in artificial intelligence.

"I deeply care about building practical systems and I hope that my research results are not just a demonstration of nice ideas but can be useful and viable in real applications," she said.

At 29, Danqi is an assistant professor at Princeton University, working to bring the realization of her dream ever closer.