Generative Language Models for Program Synthesis and Evaluation

Date
2024-12-06
Abstract

Recent advances in Large Language Models (LLMs), such as GPT and Claude, have significantly advanced the field of program synthesis. However, the traditional benchmarks used to evaluate these models, such as APPS, MBPP, and HumanEval, suffer from potential data leakage and fail to mirror the complexity of real-world programming: they typically feature concise, stand-alone code samples that cannot adequately assess the nuanced capabilities comprehensive coding tasks require. To address these limitations, this dissertation introduces a novel, private benchmark dataset, SimCoPilot, specifically crafted to measure the ability of an LLM to perform as a “copilot”-style, interactive coding assistant. In SimCoPilot, an AI is asked to provide small amounts of code within an existing project ranging in size from hundreds to thousands of lines. The benchmark tests an AI’s ability to write code in both completion scenarios (providing code to finish a method or block) and infill scenarios (providing code to fill a blank in a method), covering domains such as classic algorithms, databases, computer vision, and neural networks. Despite their varied architectures, most LLMs treat source code as mere string objects and require large-scale models and extensive training datasets. Unlike natural language, however, source code is a formal language with rich syntactic and semantic structure. Addressing this disparity, this dissertation explores an approach that explicitly extracts and integrates these syntactic and semantic elements into an encoder-decoder transformer model. Our detailed evaluation analyzes how LLMs manage different code dependencies and logic complexities, providing insight into their effectiveness in realistic programming environments.
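The distinction between the two task types can be pictured with a minimal sketch. This is an illustrative reconstruction, not the actual SimCoPilot data format; the function names and dictionary keys are hypothetical.

```python
def make_completion_task(source_lines, cut_at):
    """Completion scenario: the model sees everything up to `cut_at`
    and must finish the method or block."""
    prompt = "\n".join(source_lines[:cut_at])
    target = "\n".join(source_lines[cut_at:])
    return {"prompt": prompt, "target": target}

def make_infill_task(source_lines, blank_start, blank_end):
    """Infill scenario: the model sees the code before AND after a
    blank span, and must fill the gap."""
    prefix = "\n".join(source_lines[:blank_start])
    suffix = "\n".join(source_lines[blank_end:])
    target = "\n".join(source_lines[blank_start:blank_end])
    return {"prefix": prefix, "suffix": suffix, "target": target}

# Tiny demonstration on a four-line snippet; real SimCoPilot tasks are
# embedded in projects of hundreds to thousands of lines.
snippet = [
    "def mean(xs):",
    "    total = sum(xs)",
    "    n = len(xs)",
    "    return total / n",
]
completion = make_completion_task(snippet, cut_at=2)
infill = make_infill_task(snippet, blank_start=1, blank_end=3)
```

The key difference is that infill gives the model a suffix constraint as well as a prefix, which tests whether generated code can be made consistent with what already follows it.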
Together, these analyses clarify how well modern language models navigate realistic programming challenges, contributing to an understanding of their practical applicability in software development environments.
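To make concrete what "explicitly extracting syntactic structure" can mean, the sketch below uses Python's standard `ast` module to turn source code into explicit (node type, name) features. This is only an illustration of the general idea; it does not reproduce the dissertation's actual feature extraction or how those features are fed into the encoder-decoder model.

```python
import ast

def syntax_features(source: str):
    """Parse source into an AST and emit (node_type, name) pairs — a
    crude stand-in for the structural features a syntax-aware encoder
    might consume alongside the raw token string."""
    tree = ast.parse(source)
    feats = []
    for node in ast.walk(tree):
        # FunctionDef/ClassDef carry `.name`; Name nodes carry `.id`.
        name = getattr(node, "name", None) or getattr(node, "id", None)
        feats.append((type(node).__name__, name))
    return feats

feats = syntax_features("def add(a, b):\n    return a + b")
# Node types such as FunctionDef, Return, and BinOp now appear as
# explicit features, rather than being left implicit in a flat string.
```

Treating code this way exposes structure (scopes, definitions, expressions) that a model consuming raw character strings must otherwise rediscover from data alone.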

Degree
Doctor of Philosophy
Type
Thesis
Keywords
Program Synthesis, LLM, GenAI, Program Evaluation