Photo of Akari Asai

Artificial intelligence & robotics

Akari Asai

A trailblazer in retrieval-augmented generation research is taking on the challenge of preventing ‘hallucinations’ and elevating LLM reliability.

Year Honored
2024

Organization
University of Washington

Region
Japan

The AI chatbot ChatGPT was released to the public by OpenAI in November 2022, garnering significant attention for its ability to provide natural-sounding answers to a wide range of questions. Microsoft and Google soon launched similar services, and AI rapidly became a familiar presence in our daily lives.

However, the large language models (LLMs) that serve as the backbone of these AI chatbots come with their own set of challenges. One is ‘hallucination’: answers that contain false information or are outright fabrications. In addition, LLMs cannot cite the basis for their answers or reflect new information that emerged after their initial training. As a result, they are difficult to use in situations where high reliability and up-to-date information are required.

Retrieval-Augmented Generation, or “RAG” for short, has drawn attention as a way to address these issues. LLMs are traditionally trained in advance, mainly on vast amounts of data collected from across the Internet. RAG, in contrast, searches large-scale databases in real time as questions come in from users, and the retrieved results are then used to generate answers. Thanks to this approach, an LLM can produce answers using the latest information available without any additional training, and it can also show the evidence behind its answers.
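To make the pattern concrete, the Python sketch below shows the two core steps of RAG: retrieve passages relevant to the question, then build a prompt that grounds the answer in them. Everything here is a simplified, hypothetical stand-in for illustration (a keyword-overlap retriever and a toy three-passage corpus), not the retrieval systems used in production or in Asai’s research.

```python
# Minimal RAG sketch: retrieve relevant passages, then condition the
# prompt on them. The retriever and prompt format are illustrative
# stand-ins; real systems use learned retrievers over large indexes.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by naive word overlap with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda p: len(q_words & set(p.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Prepend retrieved passages so the model can ground and cite its answer."""
    sources = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (f"Answer the question using only the sources below, "
            f"and cite them by number.\n{sources}\n\n"
            f"Question: {query}\nAnswer:")

corpus = [
    "RAG retrieves documents at query time and conditions generation on them.",
    "Retrieval lets a model use information added after its training cutoff.",
    "Hallucination refers to fluent but false model output.",
]
query = "How does RAG use up-to-date information?"
print(build_prompt(query, retrieve(query, corpus)))
```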

Akari Asai, a graduate researcher at the University of Washington, is one of the trailblazers who has worked on RAG research since 2019. Even before ChatGPT drew public attention, she had published more than 20 peer-reviewed papers focused on pairing knowledge retrieval with large language models. In particular, a paper she authored in 2023 was the first in the world to show that RAG can curtail hallucinations, and it made major waves. The study used an extensive set of 14,000 questions to compare the accuracy of answers from 10 different models and four augmentation methods, making it clear that augmentation approaches like RAG are a better option than simply enlarging models.

In 2024, she also proposed the next step in RAG’s evolution, called “Self-RAG.” Self-RAG decides, based on the nature of each question, whether a search is necessary, and it critiques its own output to keep inappropriate answers from being generated. The work received a top-0.9% ranking at ICLR 2024, one of the most prestigious international conferences in machine learning, and has been adopted by major LLM libraries, including LlamaIndex and LangChain, drawing attention from numerous companies and researchers.
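The sketch below illustrates that decide-retrieve-critique control flow. It is schematic, assuming hypothetical helper functions supplied by the caller; the actual Self-RAG trains a single model to make these decisions itself through special “reflection tokens,” rather than the hand-written heuristics used here.

```python
# Schematic sketch of a Self-RAG-style loop. All helpers are
# hypothetical stand-ins for decisions that Self-RAG learns end to end.

def needs_retrieval(query: str) -> bool:
    """Decide whether the question requires external evidence.
    (Crude heuristic; Self-RAG makes this decision with a learned model.)"""
    chit_chat = ("hello", "hi", "thanks", "how are you")
    return not query.lower().startswith(chit_chat)

def is_supported(answer: str, passages: list[str]) -> bool:
    """Crude support check: every answer word appears in some passage.
    (Stand-in for Self-RAG's learned critique of factual support.)"""
    text = " ".join(passages).lower()
    return all(word in text for word in answer.lower().split())

def self_rag(query: str, retrieve, generate) -> str:
    """Retrieve only when needed, then keep the answer only if supported."""
    passages = retrieve(query) if needs_retrieval(query) else []
    candidate = generate(query, passages)
    if passages and not is_supported(candidate, passages):
        return "Insufficient evidence to answer reliably."
    return candidate

# Toy usage with stub retriever and generator:
reply = self_rag(
    "What does RAG do?",
    retrieve=lambda q: ["rag retrieves documents at query time"],
    generate=lambda q, ps: "rag retrieves documents at query time",
)
print(reply)
```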

Asai aims to develop “an LLM that specialists can trust to automate critical tasks.” She envisions applying LLMs in fields such as medical care and science, where a high level of reliability is essential, and continues to work on both fundamental research and real-world implementation.