Advisor: Ng, T. S. Eugene
Date accessioned: 2014-08-07
Date available: 2014-08-07
Date issued: December 2013
Date submitted: 2013-10-30
Citation: Dinu, Florin. "Understanding and Improving the Efficiency of Failure Resilience for Big Data Frameworks." Diss., Rice University, 2013. https://hdl.handle.net/1911/76486
URI: https://hdl.handle.net/1911/76486

Abstract: Big data processing frameworks (MapReduce, Hadoop, Dryad) are hugely popular today because they greatly simplify the management and deployment of big data analysis jobs that require many machines working in parallel. A strong selling point is their built-in failure resilience: big data frameworks can run computations to completion despite occasional failures in the system. However, the efficiency of that failure resilience has been an important but overlooked concern. The vision of this thesis is that big data frameworks should not only be failure resilient but should provide that resilience efficiently, with minimal impact on computations both under failures and during failure-free periods. To this end, the first part of the thesis presents the first in-depth analysis of the efficiency of the failure resilience provided by the popular Hadoop framework under failures. The results show that even single-machine failures can lead to large, variable, and unpredictable job running times. The thesis determines the causes of this inefficient behavior and points out the responsible Hadoop mechanisms and their limitations.

The second part of the thesis focuses on providing efficient failure resilience for computations comprised of multiple jobs. We present the design, implementation, and evaluation of RCMP, a MapReduce system based on the fundamental insight that using data replication to enable failure resilience often leads to significant and unnecessary increases in computation running time. In contrast, RCMP is designed to use job re-computation as a first-order failure resilience strategy. Re-computations under RCMP are efficient: RCMP re-computes the minimum amount of work and, uniquely, it ensures that this minimum re-computation work is performed efficiently. In particular, RCMP mitigates hot-spots that affect data transfers during re-computations and ensures that the available compute-node parallelism is well leveraged.

Format: application/pdf
Language: eng
Rights: Copyright is held by the author, unless otherwise indicated. Permission to reuse, publish, or reproduce the work beyond the bounds of fair use or other exemptions to copyright law must be obtained from the copyright holder.
Keywords: Failure; Failure resilience; Big data; MapReduce; Hadoop; Systems; Networking
Title: Understanding and Improving the Efficiency of Failure Resilience for Big Data Frameworks
Type: Thesis