Towards Efficient Knowledge Graph Generation From Textbooks: A Dual Framework Approach
Thesis, May 2025
Advisor: Baraniuk, Richard G.
https://hdl.handle.net/1911/118353
Keywords: knowledge graphs; large language models

Abstract: The recent proliferation of LLMs necessitates a strategy for addressing these models' deleterious shortcomings: hallucination and lack of explainability. Knowledge graphs (KGs) have gained attention as a potential solution to these problems, as they can serve as a traceable, factual database for LLMs; however, constructing high-quality KGs efficiently remains a challenge. To address this challenge, this thesis proposes Words2Wisdom, a logic-informed, LLM-based framework for generating quality KGs from textbooks. Words2Wisdom creates expressive KGs by leveraging the structure of propositional logic and ensures accurate fact representation, demonstrating knowledge validity (precision) greater than 95% when using the GPT-4o model in a few-shot setting. Our results suggest that targeted fine-tuning and model specialization can further enhance KG quality. Furthermore, this thesis examines whether LLMs are able to assess the quality of KGs. We introduce Libra, a framework establishing a novel KG evaluation protocol for validating KGs against textbook sources. Preliminary results show high observed agreement between Libra and human experts, suggesting that KG construction and evaluation can indeed be effectively automated, paving the way for future research on the role of LLMs in hallucination mitigation.
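
The abstract gives no implementation details, but the pipeline it describes, few-shot LLM extraction of logic-structured triples followed by validation against the source text, can be sketched in outline. The snippet below is a minimal illustration, not the thesis's actual code: the prompt wording, the call_llm stub, and the sample data are all assumptions introduced here. Knowledge validity is computed as ordinary precision, the fraction of extracted triples judged faithful to the source (whether by human experts or by an automated judge in the spirit of Libra).

```python
"""Illustrative sketch only: the prompt, call_llm stub, and sample data
below are assumptions, not the Words2Wisdom or Libra implementations."""
from dataclasses import dataclass


@dataclass(frozen=True)
class Triple:
    subject: str
    predicate: str
    obj: str


# Hypothetical few-shot prompt in the spirit of the abstract: ask the model
# for factual propositions expressed as (subject; predicate; object) triples.
FEW_SHOT_PROMPT = """\
Extract factual propositions from the passage as triples.

Example passage: "Water boils at 100 degrees Celsius at sea level."
Example triples: (water; boils at; 100 degrees Celsius at sea level)

Passage: "{passage}"
Triples:"""


def call_llm(prompt: str) -> str:
    # Stand-in for a real model call (e.g., GPT-4o). Returns a canned
    # response here so the sketch runs without an API key.
    return "(mitochondria; are; organelles)\n(mitochondria; produce; ATP)"


def extract_triples(passage: str) -> list[Triple]:
    """Prompt the model and parse one '(s; p; o)' triple per line."""
    raw = call_llm(FEW_SHOT_PROMPT.format(passage=passage))
    triples = []
    for line in raw.splitlines():
        parts = [p.strip(" ()") for p in line.split(";")]
        if len(parts) == 3:
            triples.append(Triple(*parts))
    return triples


def knowledge_validity(extracted: list[Triple], valid: set[Triple]) -> float:
    # Precision: fraction of extracted triples judged faithful to the source.
    return sum(1 for t in extracted if t in valid) / len(extracted)


if __name__ == "__main__":
    extracted = extract_triples("Mitochondria are organelles that produce ATP.")
    judged_valid = set(extracted)  # toy judgment: every triple checks out
    print(f"knowledge validity: {knowledge_validity(extracted, judged_valid):.0%}")
```

Under this framing, a judge model replacing the toy judgment step is what turns the precision computation into an automated evaluation protocol, which is the role the abstract assigns to Libra.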