Towards End-to-End Trustworthy Large Language Models

Date of Award

4-30-2024

Document Type

Thesis

Degree Name

Master of Science in Computer Vision

Department

Computer Vision

First Advisor

Dr. Qirong Ho

Second Advisor

Dr. Shijian Lu

Abstract

The field of Natural Language Generation (NLG), particularly through the development of Large Language Models (LLMs) such as ChatGPT, has witnessed significant advances, demonstrating the ability to generate coherent and contextually relevant text. This progress represents a revolutionary step in artificial intelligence, making LLMs increasingly integral to a wide range of applications. However, deploying LLMs raises significant challenges that must be addressed to ensure their trustworthiness and practical applicability. This thesis focuses on two pivotal components of end-to-end trustworthy LLMs, privacy protection and hallucination mitigation, and introduces two novel works, FedNAR and DoGe, that address these respective challenges.

Privacy protection in LLM training is imperative because training relies on vast datasets that may include sensitive or private information, and misuse of such data could have severe consequences. This thesis presents federated learning as a viable means of enhancing privacy protection, particularly during the pre-training phase of LLMs. Federated Optimization with Normalized Annealing Regularization (FedNAR) is introduced as a novel approach in this domain: it improves the convergence and precision of federated learning algorithms by regulating update magnitudes through co-clipping of the gradient and weight-decay components. In extensive experiments, FedNAR delivers robust performance improvements across a variety of datasets and federated optimization algorithms, showcasing its potential to significantly enhance privacy in LLM training.

A second critical challenge is hallucination, where LLMs generate inaccurate or nonsensical text. This issue is particularly concerning in high-stakes settings, such as medical or legal applications, where reliability is paramount. To mitigate hallucination, this thesis introduces Dynamic Source-Grounded Decoding (DoGe), a novel approach that balances factuality and diversity in dialogue generation. DoGe dynamically alternates between leveraging the model's internal knowledge and external sources, depending on the model's confidence in its own factual accuracy. Experimental results across multiple datasets indicate that DoGe significantly outperforms existing decoding strategies, improving both the diversity and the factuality of responses in dialogue systems.

This research contributes to NLG by addressing two of the most pressing challenges in deploying trustworthy LLMs. By protecting privacy and mitigating hallucination, the proposed solutions, FedNAR and DoGe, pave the way for more reliable, safe, and effective use of LLMs across applications. These contributions not only enhance the operational trustworthiness of LLMs but also offer insights into improving their development and deployment, benefiting both the research community and practical applications.
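To make the co-clipping idea concrete, the following is a minimal sketch of a FedNAR-style local update, not the thesis's implementation. It assumes plain local SGD on a single parameter tensor; the names fednar_local_step and annealed_max_norm and the geometric annealing schedule are illustrative choices. Only the core mechanism, bounding the joint norm of the gradient and weight-decay terms under a threshold that anneals across rounds, follows the description above.

import torch

def fednar_local_step(w, grad, lr=0.1, weight_decay=1e-4, max_norm=1.0):
    # Couple the gradient and weight-decay terms into one update vector,
    # then co-clip their *joint* norm (rather than clipping the gradient alone).
    update = grad + weight_decay * w
    norm = update.norm()
    if norm > max_norm:
        update = update * (max_norm / norm)
    return w - lr * update

def annealed_max_norm(round_idx, total_rounds, init_norm=10.0, final_norm=0.1):
    # Illustrative schedule (an assumption, not the thesis's formula):
    # geometrically shrink the clipping threshold over communication rounds
    # so that late-round updates stay small.
    frac = round_idx / max(total_rounds - 1, 1)
    return init_norm * (final_norm / init_norm) ** frac

In this sketch, a client would run fednar_local_step on each local batch with max_norm set to annealed_max_norm(t, T) for round t of T before sending its model delta to the server.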
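Similarly, the following is a minimal, hypothetical sketch of the confidence-gated decoding described for DoGe, not the thesis's actual algorithm. It assumes a Hugging Face-style causal LM whose forward pass returns .logits, uses the top token probability of an ungrounded pass as the confidence proxy (an assumption; the thesis may use a different estimator), and greedily decodes one token at a time with batch size 1.

import torch
import torch.nn.functional as F

@torch.no_grad()
def doge_next_token(model, context_ids, source_ids, threshold=0.5):
    # Ungrounded pass: condition only on the dialogue context and take the
    # top token probability as a (hypothetical) confidence estimate.
    logits = model(context_ids).logits[:, -1, :]
    conf, token = F.softmax(logits, dim=-1).max(dim=-1)
    if conf.item() >= threshold:
        # Confident: trust the model's internal knowledge (favors diversity).
        return token
    # Not confident: re-decode conditioned on the retrieved source text,
    # grounding the next token in external evidence (favors factuality).
    grounded_ids = torch.cat([source_ids, context_ids], dim=1)
    grounded_logits = model(grounded_ids).logits[:, -1, :]
    return grounded_logits.argmax(dim=-1)

Repeated per step, with each chosen token appended to context_ids, this gating is what lets a decoder alternate between parametric and source-grounded generation token by token.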

Comments

Thesis submitted to the Deanship of Graduate and Postdoctoral Studies

In partial fulfilment of the requirements for the M.Sc. degree in Computer Vision

Advisors: Qirong Ho, Shijian Lu

With a 2-year embargo period

