Authors: Dinu, Florin; Ng, T. S. Eugene
Date issued: 2011-08-11
Date added to repository: 2017-08-02
Citation: Dinu, Florin and Ng, T. S. Eugene. "Analysis of Hadoop's Performance under Failures." (2011). https://hdl.handle.net/1911/96398
URI: https://hdl.handle.net/1911/96398
Abstract: Failures are common in today's data center environment and can significantly impact the performance of important jobs running on top of large-scale computing frameworks. In this paper we analyze Hadoop's behavior under compute node and process failures. Surprisingly, we find that even a single failure can have a large detrimental effect on job running times. We uncover several important design decisions underlying this distressing behavior: the inefficiency of Hadoop's statistical speculative execution algorithm, the lack of failure-information sharing, and the overloading of TCP failure semantics. We hope that our study will add new dimensions to the pursuit of robust large-scale computing framework designs.
Extent: 10 pp.
Language: English
Rights: You are granted permission for the noncommercial reproduction, distribution, display, and performance of this technical report in any format, but this permission is only for a period of forty-five (45) days from the most recent time that you verified that this technical report is still available from the Computer Science Department of Rice University under terms that include this permission. All other rights are reserved by the author(s).
Title: Analysis of Hadoop's Performance under Failures
Type: Technical report
Report number: TR11-05