
Browsing by Author "Lu, Yanxin"

Now showing 1 - 2 of 2
  • Corpus-Driven Systems for Program Synthesis and Refactoring
    (2019-04-18) Lu, Yanxin; Chaudhuri, Swarat
    Programming is a difficult task. Programmers must deal with small details inside overly complex programs, and it is sometimes inevitable that they make small mistakes. To address this problem, software engineering techniques and formal-methods-based techniques have been proposed to facilitate programming, including various software engineering methodologies, design patterns, sophisticated testing methods, program repair algorithms, model checking algorithms, and program synthesis methods. In this thesis, we propose two additional corpus-driven systems for program synthesis and refactoring (hedged illustrative sketches of both ideas follow this list). We first introduce program splicing, a programming methodology that aims to automate the workflow of copying, pasting, and modifying code available online. Here, the programmer starts by writing a "draft" that mixes unfinished code, natural language comments, and correctness requirements. A program synthesizer that interacts with a large, searchable database of program snippets then automatically completes the draft into a program that meets the requirements. Our evaluation applies the system to a suite of everyday programming tasks and includes a comparison with a state-of-the-art competing approach as well as a user study. The results point to the broad scope and scalability of program splicing and indicate that the approach can significantly boost programmer productivity. Next, we propose an algorithm that automates API refactoring, where the goal is to rewrite an API call sequence into another sequence that uses only the API calls defined in a target library, without modifying the functionality. We solve the problem by combining the techniques of API translation and API sequence synthesis. We evaluated the algorithm on a diverse set of benchmark problems and found that it refactors API sequences with high accuracy. In addition, we conducted a user study indicating that the algorithm can help human developers with API refactoring.
  • Improving Peer Evaluation Quality in Massive Open Online Courses
    (2015-05-26) Lu, Yanxin; Chaudhuri, Swarat; Warren, Joe; Jermaine, Chris
    As several online course providers such as Coursera, Udacity, and edX emerged in 2012, Massive Open Online Courses (MOOCs) gained much attention across the globe. While MOOCs provide learning opportunities for many people, several challenges exist in the MOOC context, one of which is how to ensure the quality of peer grading. The Interactive Programming in Python (IPP) course, which Rice has offered on Coursera for a number of years, has suffered from the problem of low-quality peer evaluations. In this thesis, we propose a solution that improves the quality of peer evaluations by motivating peer graders. Specifically, we want to answer the question: when students know that their own peer grading efforts are being examined, and when they are able to grade other students' peer evaluations, do these factors motivate them to do a better job when grading assignments? We implemented a web application in which students can grade peer evaluations, and we conducted a series of controlled experiments. We find a strong effect on peer evaluation quality arising simply from students knowing that they are being studied using software intended to help with peer grading. In addition, we find strong evidence that students who grade peer evaluations tend to give better peer evaluations themselves. However, the strongest effect seems to come from the act of grading others' evaluations, and not from the knowledge that one's own peer evaluations will be examined.
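To make the "draft" idea in the first abstract concrete, here is a minimal sketch in Python. The hole marker (...), the function names, and the shown completion are hypothetical illustrations of the workflow, not the thesis's actual input syntax or synthesizer output.

# Draft: unfinished code plus a natural-language comment stating intent.
def read_clean_lines(path):
    # requirement: open the file at `path` and return its lines with
    # trailing whitespace removed
    ...  # hole a corpus-driven synthesizer would fill

# One plausible completion, adapted from a snippet the synthesizer
# might retrieve from its searchable database of programs:
def read_clean_lines_completed(path):
    with open(path) as f:
        return [line.rstrip() for line in f]

The point of splicing is that the programmer supplies only the draft above; the synthesizer searches the corpus for code matching the comment and surrounding context, then adapts it into the hole.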
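Similarly, a hedged sketch of the API refactoring task described in the same abstract: rewrite a call sequence so that it uses only a target library's API while preserving functionality. The os.path-to-pathlib pairing below is our own illustrative example, not a benchmark from the thesis.

import os.path
from pathlib import Path

# Call sequence written against the source library (os.path):
def backup_name_old(p):
    stem, _ext = os.path.splitext(p)
    return stem + ".bak"

# Refactored sequence that uses only the target library (pathlib)
# and preserves the functionality:
def backup_name_new(p):
    return str(Path(p).with_suffix(".bak"))

# Both sequences compute the same result:
assert backup_name_old("data.txt") == backup_name_new("data.txt") == "data.bak"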