
Browsing by Author "Cox, Alan Lee"

Now showing 1 - 1 of 1
Performance Analysis and Optimization of Apache Pig
(2014-08-08) Liu, Ruoyu; Cox, Alan Lee; Ng, Eugene Tze Sing; Mellor-Crummey, John

Apache Pig is a language, compiler, and run-time library for simplifying the development of data-analytics applications on Apache Hadoop. Specifically, it enables developers to write data-analytics applications in a high-level, SQL-like language called Pig Latin that is automatically translated into a series of MapReduce computations. For most developers, this is both easier and faster than writing applications that use Hadoop directly. This thesis first presents a detailed performance analysis of Apache Pig running a collection of simple Pig Latin programs. In addition, it compares the performance of these programs to equivalent hand-coded Java programs that use the Hadoop MapReduce framework directly. In all cases, the hand-coded Java programs outperformed the Pig Latin programs. Depending on the program and problem size, the hand-coded Java was 1.15 to 3.07 times faster. The Pig Latin programs were slower for three reasons: (1) the overhead of translating Pig Latin into Java MapReduce jobs, (2) the overhead of converting data between the text format used in the HDFS files and Pig's own internal representation, and (3) the overhead of the additional MapReduce jobs performed by Pig. Finally, this thesis explores a new approach to optimizing the fragment-replicated join operation in Apache Pig. In Pig's original implementation of this operation, an identical in-memory hash table is constructed and used by every Map task. In contrast, under the optimized implementation, this duplication of data is eliminated through the use of a new interprocess shared-memory hash table library. Benchmarks show that, as the problem size grows, the optimized implementation outperforms the original by a factor of two. Moreover, it is possible to run larger problems under the optimized implementation than under the original.
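
The baseline behavior the abstract describes, where every Map task builds its own copy of the small relation's hash table, corresponds to a map-side (fragment-replicated) join. The sketch below is a minimal illustration of that baseline idea, not code from the thesis; the Hadoop Mapper API is real, but the file name small_relation.txt, the tab-separated record layout, and the assumption that the small relation is shipped to every task (e.g. via the distributed cache) are choices made only for this example.

// A minimal sketch (not from the thesis) of the baseline idea behind a
// fragment-replicated (map-side) join in hand-coded Hadoop MapReduce:
// every Mapper builds its own copy of an in-memory hash table of the
// small relation in setup(), then probes it for each record of the
// large relation in map(). File name and record layout are illustrative
// assumptions.
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class ReplicatedJoinMapper
        extends Mapper<LongWritable, Text, Text, Text> {

    // Per-task hash table of the small relation; the thesis's optimization
    // replaces this per-process copy with a single table in interprocess
    // shared memory that all Map tasks on a node can probe.
    private final Map<String, String> smallTable = new HashMap<>();

    @Override
    protected void setup(Context context) throws IOException {
        // Load the replicated small relation shipped to every task.
        try (BufferedReader in = new BufferedReader(
                new FileReader("small_relation.txt"))) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] fields = line.split("\t", 2);
                if (fields.length == 2) {
                    smallTable.put(fields[0], fields[1]);
                }
            }
        }
    }

    @Override
    protected void map(LongWritable offset, Text record, Context context)
            throws IOException, InterruptedException {
        // Probe the hash table with the join key of each large-relation
        // record and emit the joined tuple; no reduce phase is needed.
        String[] fields = record.toString().split("\t", 2);
        if (fields.length == 2 && smallTable.containsKey(fields[0])) {
            context.write(new Text(fields[0]),
                          new Text(fields[1] + "\t" + smallTable.get(fields[0])));
        }
    }
}

The optimization explored in the thesis targets exactly the duplication visible in smallTable above: rather than each Map task holding a private copy, all tasks on a node would share one hash table, which both removes redundant memory use and lets larger small-relation inputs fit in memory.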