Browsing by Author "Druschel, Peter"

Now showing 1 - 20 of 31
  • A New Approach to Routing With Dynamic Metrics
    (1998-11-18) Chen, Johnny; Druschel, Peter; Subramanian, Devika
    We present a new routing algorithm to compute paths within a network using dynamic link metrics. Dynamic link metrics are cost metrics that depend on a link's dynamic characteristics, e.g., the congestion on the link. Our algorithm is destination-initiated: the destination initiates a global path computation to itself using dynamic link metrics. All other destinations that do not initiate this dynamic metric computation use paths that are calculated and maintained by a traditional routing algorithm using static link metrics. Analysis of Internet packet traces shows that a high percentage of network traffic is destined for a small number of networks. Because our algorithm is destination-initiated, it achieves maximum performance at minimum cost when it only recomputes dynamic metric paths to these selected "hot" destination networks. This selective approach to route recomputation reduces many of the problems (principally route oscillations) associated with calculating all routes simultaneously. We compare the routing efficiency and end-to-end performance of our algorithm against those of traditional algorithms using dynamic link metrics. The results of our experiments show that our algorithm can provide higher network performance at a significantly lower routing cost under conditions that arise in real networks. The effectiveness of the algorithm stems from the independent, time-staggered recomputation of important paths using dynamic metrics, allowing for splits in congested traffic that cannot be made by traditional routing algorithms.
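    As a hedged illustration of the destination-initiated idea (the function names, the top-k cutoff, and the metric callback are assumptions, not the paper's interface): only the busiest "hot" destinations trigger a dynamic-metric shortest-path computation, while every other destination keeps its static route.

        import heapq

        def paths_to(graph, dest, metric):
            # Dijkstra rooted at `dest`: returns {node: next hop toward dest}.
            dist, next_hop = {dest: 0.0}, {}
            heap = [(0.0, dest, dest)]
            while heap:
                d, node, via = heapq.heappop(heap)
                if d > dist.get(node, float("inf")):
                    continue                      # stale queue entry
                if node != dest:
                    next_hop[node] = via          # `via` is one hop closer to dest
                for nbr in graph[node]:
                    nd = d + metric(nbr, node)    # cost of link nbr -> node
                    if nd < dist.get(nbr, float("inf")):
                        dist[nbr] = nd
                        heapq.heappush(heap, (nd, nbr, node))
            return next_hop

        def recompute_hot_routes(graph, traffic, static_routes, dyn_metric, k=3):
            # Only the k most popular destinations initiate recomputation;
            # staggering these independent computations damps oscillations.
            hot = sorted(traffic, key=traffic.get, reverse=True)[:k]
            routes = dict(static_routes)          # {dest: {node: next_hop}}
            for dest in hot:
                routes[dest] = paths_to(graph, dest, dyn_metric)
            return routes
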
  • A Resource Management Framework for Predictable Quality of Service in Web Servers
    (2003-07-07) Aron, Mohit; Druschel, Peter; Iyer, Sitaram
    This paper presents a resource management framework for providing predictable quality of service (QoS) in Web servers. The framework allows Web server and proxy operators to ensure a probabilistic minimal QoS level, expressed as an average request rate, for a certain class of requests (called a service), irrespective of the load imposed by other requests. A measurement-based admission control framework determines whether a service can be hosted on a given server or proxy, based on the measured statistics of the resource consumptions and the desired QoS levels of all the co-located services. In addition, we present a feedback-based resource scheduling framework that ensures that QoS levels are maintained among admitted, co-located services. Experimental results obtained with a prototype implementation of our framework on trace-based workloads show its effectiveness in providing desired QoS levels with high confidence, while achieving high average utilization of the hardware.
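    A minimal sketch of measurement-based admission control in this spirit (the statistics and the two-sigma headroom are illustrative choices, not the paper's test): admit a new service only if the measured consumption of the co-located services, padded for variability, leaves room for the newcomer's demand.

        from statistics import mean, stdev

        def admit(new_demand, measured, capacity, headroom=2.0):
            # measured: {service: [resource-usage samples]} for services already
            # hosted. Padding by `headroom` standard deviations is what makes
            # the guarantee probabilistic rather than worst-case.
            expected = new_demand
            for samples in measured.values():
                expected += mean(samples)
                if len(samples) > 1:
                    expected += headroom * stdev(samples)
            return expected <= capacity
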
  • A Simple, Practical Distributed Multi-Path Routing Algorithm
    (1998-07-16) Chen, Johnny; Druschel, Peter; Subramanian, Devika
    We present a simple and practical distributed routing algorithm based on backward learning. The algorithm periodically floods Scout packets that explore paths to a destination in reverse. Scout packets are small and of fixed size; therefore, they lend themselves to hop-by-hop piggy-backing on data packets, largely defraying their cost to the network. The correctness of the proposed algorithm is analytically verified. Our algorithm also has loop-free multi-path routing capabilities, providing increased network utilization and route stability. The Scout algorithm requires very little state and computation in the routers, and can efficiently and gracefully handle high rates of change in the network's topology and link costs. An extensive simulation study shows that the proposed algorithm is competitive with link-state and distance vector algorithms, particularly in highly dynamic networks.
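    A sketch of the backward-learning step (the scout fields and re-flooding rule are assumptions for illustration): a scout carries the cost accumulated from the destination so far; a router keeps the cheapest scout seen per destination and routes data out the link the scout arrived on.

        def handle_scout(table, scout, in_link, link_cost):
            # table: {dest: (cost, next_hop)}. The scout explored the path in
            # reverse, so data for scout["dest"] should leave via `in_link`.
            cost = scout["cost"] + link_cost
            best = table.get(scout["dest"])
            if best is None or cost < best[0]:
                table[scout["dest"]] = (cost, in_link)
                return dict(scout, cost=cost)   # propagate the improved scout
            return None                         # worse than what we know: drop
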
  • Accountability for distributed systems
    (2009) Haeberlen, Andreas; Druschel, Peter
    Nodes in a distributed system can fail for many reasons, such as bugs, misconfigurations, hardware failures, intrusions, or insider attacks. Once a node has become faulty, its behavior can change arbitrarily. In benign cases, the node might simply stop; in less benign cases, it might actively try to subvert the rest of the system. A reliable distributed system must have a way to handle such faults. In this thesis, we explore a novel approach to this problem, which is based on accountability. In an accountable system, each node records its past actions in a tamper-evident log, and nodes inspect each other's log for signs of misbehavior. When nodes become faulty, the other nodes can eventually detect this, and they can obtain evidence that irrefutably links the fault to a faulty node. At the same time, correct nodes can always defend themselves against any false accusations. We characterize the class of faults that can be detected with our approach, and we show that it includes any fault that causally affects at least one correct node. We also present a set of techniques for enforcing accountability, including an algorithm for tamper-evident logs, and two techniques for detecting faults in the log: One relies on state machine replay to check a node's behavior against a reference implementation, while the other checks the logs against a declarative specification of the expected behavior. Each of these techniques can be applied to a wide range of distributed systems. To demonstrate that accountability is widely applicable, we have added it to several different types of systems, including a decentralized email system, a server-based file system, a peer-to-peer content distribution system, the Internet's interdomain routing system, and two multi-player games. In each case, accountability was able to detect a variety of problems that were previously reported in the literature. This shows that accountability is very general and can supersede a number of existing defenses. Our evaluation shows that accountability is practical, that its overhead is reasonable, and that it can scale to large numbers of nodes.
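    The tamper-evident log can be pictured as a hash chain; the sketch below is the generic construction, not the thesis's exact algorithm: every entry commits to its predecessor, so truncating or rewriting history changes the head hash that other nodes have already seen.

        import hashlib

        class TamperEvidentLog:
            # Append-only log; each entry's hash covers the previous hash.
            def __init__(self):
                self.entries = []           # list of (action, entry_hash)
                self.head = b"\x00" * 32    # authenticator for the empty log

            def append(self, action: bytes) -> bytes:
                self.head = hashlib.sha256(self.head + action).digest()
                self.entries.append((action, self.head))
                return self.head            # other nodes remember this value

            def verify(self) -> bool:
                h = b"\x00" * 32
                for action, stored in self.entries:
                    h = hashlib.sha256(h + action).digest()
                    if h != stored:
                        return False        # evidence of tampering
                return h == self.head
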
  • Advanced memory management and disk scheduling techniques for general-purpose operating systems
    (2006) Iyer, Sitaram; Druschel, Peter
    Operating systems have evolved into sophisticated, high-performance virtualizing platforms that must support, and be fair towards, concurrently running applications. However, since applications usually run oblivious of each other and prefer narrow system interfaces, they inadvertently contend for resources, resulting in inappropriate allocations and significant performance degradation. This dissertation identifies and eliminates two such problems: one we call rigidity in physical memory management, which we solve using adaptive memory management, and a second we call deceptive idleness in disk schedulers, which we solve through anticipatory disk scheduling. Many applications, their libraries, and runtimes can trade memory consumption for performance by maintaining caches, triggering garbage collection, etc. However, being ignorant of memory pressure in the system, they are forced to be conservative about memory usage. Adaptive memory management is a technique that informs applications of the severity of memory pressure via a metric that quantifies the cost of using memory. This enables applications to allocate memory liberally when it is available (with performance benefits of 20% to 300%) and to release it under contention. The system thus reaches an equilibrium that balances the impact of memory pressure on each application; adapts to avoid paging during load bursts, improving stability and responsiveness; and reduces the need for manual configuration of memory footprints. It also provides finer control over memory usage by adapting in proportion to application priorities. Disk schedulers generally schedule a request as soon as the previous request has finished. Unfortunately, many applications perform synchronous I/O by issuing a request only after the previous request has been served. This causes the scheduler to suffer from deceptive idleness, a condition in which it incorrectly assumes that the process has no further requests, and seeks to a request from another process. Anticipatory disk scheduling transparently solves this problem by sometimes injecting a small, controlled delay into the disk scheduler before it makes a scheduling decision, whenever it expects the current request to be quickly followed by another nearby request. This improves performance by up to 70% and enables proportional schedulers to achieve their contracts. Anticipatory scheduling has been ported to Linux, where it is now the default disk scheduler.
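    The anticipation decision itself is small (the timeout and heuristic below are illustrative): after a synchronous request completes, briefly keep the disk idle if the same process is expected to issue another nearby request, instead of immediately seeking to another process's request.

        def next_action(pending, last_proc, expects_more, wait_ms=6):
            # expects_more(p): heuristic prediction, e.g. from p's recent
            # pattern of issuing each request right after the previous one
            # completes, and from the locality of its past requests.
            if expects_more(last_proc):
                return ("wait", wait_ms)     # avoid deceptive idleness
            if pending:
                return ("dispatch", pending.pop(0))
            return ("idle", None)
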
  • Analysis of TCP performance over ATM networks
    (1998) Aron, Mohit; Druschel, Peter
    ATM technology is expected to gain widespread use both in wide-area networks and in high-speed local area networks. The performance of the TCP/IP protocol suite over such networks is of great importance, as it is widely used in the Internet and in private internetworks. However, numerous studies have shown that the effective throughput of conventional TCP implementations suffers over plain ATM networks. This thesis presents a detailed study of the interactions between TCP's congestion control mechanisms and ATM networks. Based on the results of this study, we propose an enhanced version of TCP Vegas (called new-Vegas) and show that it achieves an increase in throughput of 40-70% over TCP Lite on plain ATM networks and up to 20% on EPD-enhanced ATM networks and packet networks. Moreover, our results indicate that the enhanced TCP Vegas is largely insensitive to EPD/PPD. In fact, even on plain ATM networks, it performs within 7% of its best throughput.
  • Ants and Reinforcement Learning: A Case Study in Routing in Dynamic Networks
    (1997-02-17) Chen, Johnny; Druschel, Peter; Subramanian, Devika
    We investigate two new distributed routing algorithms for data networks based on simple biological "ants" that explore the network and rapidly learn good routes, using a novel variation of reinforcement learning. These two algorithms are fully adaptive to topology changes and changes in link costs in the network, and have space and computational overheads that are competitive with traditional packet routing algorithms: although they can generate more routing traffic when the rate of failures in a network is low, they perform much better under higher failure rates. Both algorithms are more resilient than traditional algorithms, in the sense that random corruption of routing state has limited impact on the computation of paths. We present convergence theorems for both of our algorithms drawing on the theory of non-stationary and stationary discrete-time Markov chains over the reals. We present an extensive empirical evaluation of our algorithms on a simulator that is widely used in the computer networks community for validating and testing protocols. We present comparative results on data delivery performance, aggregate routing traffic (algorithm overhead), as well as the degree of resilience for our new algorithms and two traditional routing algorithms in current use. We also show that the performance of our algorithms scales well with increasing network size, using a realistic topology.
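    The reinforcement step can be sketched as a probabilistic routing-table update (delta is the learning signal derived from an ant's trip quality; the normalization shown is the standard rule for this family of algorithms, not necessarily the paper's exact variant): the link an ant arrived on is strengthened and all entries are renormalized, which is also why a corrupted entry decays instead of persisting.

        def reinforce(probs, chosen_link, delta):
            # probs: {link: probability} for one destination, summing to 1.
            for link in probs:
                if link == chosen_link:
                    probs[link] = (probs[link] + delta) / (1 + delta)
                else:
                    probs[link] = probs[link] / (1 + delta)
            return probs   # still sums to 1: (sum + delta) / (1 + delta)
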
  • Autonomous storage management for low-end computing environments
    (2011) Post, Ansley; Druschel, Peter
    To make storage management transparent to users, enterprises rely on expensive storage infrastructure, such as high-end storage appliances, tape robots, and offsite storage facilities, maintained by full-time professional system administrators. From the user's perspective, access to data is seamless regardless of location, backup requires no periodic, manual action by the user, and help is available to recover from storage problems. The equipment and administrators protect users from the loss of data due to failures, such as device crashes, user errors, or viruses, as well as from the inconvenience of critical files being unavailable. Home users and small businesses must manage increasing amounts of important data distributed among an increasing number of storage devices. At the same time, expert system administration and specialized backup hardware are rarely available in these environments, due to their high cost. Users must make do with error-prone, manual, and time-consuming ad hoc solutions, such as periodically copying data to an external hard drive. Non-technical users are likely to make mistakes, which could result in the loss of a critical piece of data, such as a tax return, a customer database, or an irreplaceable digital photograph. In this thesis, we show how to provide transparent storage management for home and small business users. We introduce two new systems: The first, PodBase, transparently ensures availability and durability for mobile, personal devices that are mostly disconnected. The second, SLStore, provides enterprise-level data safety (e.g., protection from user error, software faults, or virus infection) without requiring expert administration or expensive hardware. Experimental results show that both systems are feasible, perform well, require minimal user attention, and do not depend on expert administration during disaster-free operation. PodBase relieves home users of many of the burdens of managing data on their personal devices. In the home environment, users typically have a large number of personal devices, many of them mobile, each of which contains storage and which connect to each other intermittently. Each of these devices contains data that must be made durable and available on other storage devices. Ensuring durability and availability is difficult and tiresome for non-expert users, as they must keep track of what data is stored on which devices. PodBase transparently ensures the durability of data despite the loss or failure of a subset of devices; at the same time, PodBase aims to make data available on all the devices appropriate for a given data type. PodBase takes advantage of storage resources and network bandwidth between devices that typically go unused. The system uses an adaptive replication algorithm, which makes replication transparent to the user, even when complex replication strategies are necessary. Results from a prototype deployment in a small community of users show that PodBase can ensure the durability and availability of data stored on personal devices under a wide range of conditions with minimal user attention. Our second system, SLStore, brings enterprise-level data protection to home office and small business computing. It ensures that data can be recovered despite incidents like accidental data deletion, data corruption resulting from software errors or security breaches, or even catastrophic storage failure. However, unlike enterprise solutions, SLStore does not require professional system administrators, expensive backup hardware, or routine, manual actions on the part of the user. The system relies on storage leases, which ensure that data cannot be overwritten for a pre-determined period, and an adaptive storage management layer which automatically adapts the level of backup to the storage available. We show that this system is practical, reliable, and easy to manage, even in the presence of hardware and software faults.
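    A speculative sketch of the adaptive-replication idea (the pairwise contact model, the replica counter, and the target of two copies are all assumptions): whenever two devices are connected, each pushes the other any items that are still under-replicated, using storage and bandwidth that would otherwise sit idle.

        def replicate_on_contact(dev_a, dev_b, copies, target=2):
            # dev_*: {"items": {item_id: data}, "free": bytes}
            # copies: {item_id: replicas known across the user's devices}
            def push(src, dst):
                for item_id, data in list(src["items"].items()):
                    if item_id in dst["items"] or copies.get(item_id, 1) >= target:
                        continue                    # present, or durable enough
                    if dst["free"] >= len(data):    # respect device capacity
                        dst["items"][item_id] = data
                        dst["free"] -= len(data)
                        copies[item_id] = copies.get(item_id, 1) + 1
            push(dev_a, dev_b)
            push(dev_b, dev_a)
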
  • Consistent Key Mapping in Structured Overlays
    (2005-08-16) Druschel, Peter; Haeberlen, Andreas; Hoye, Jeff; Mislove, Alan
    Most structured peer-to-peer overlays rely on consistent hashing to determine the node that is responsible for a given key. For consistent hashing to work properly, it is necessary that the nodes have a consistent view of their neighborhood in the identifier space. However, if routing anomalies occur in the underlying network, this view can become inconsistent, causing unstable overlay behavior and, worse, allowing more than one node to assume responsibility for ranges of keys. We present a set of techniques for preventing inconsistencies under routing anomalies, and we propose to adopt strategies from mobile ad-hoc networking for maintaining connectivity in the presence of path failures. We evaluate our design in the context of Pastry and present results from a deployment in the PlanetLab testbed.
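    The consistent-hashing contract at issue can be stated in a few lines (a generic successor rule, not Pastry's exact numerically-closest mapping): a key belongs to the first node clockwise from it on the identifier ring, so two nodes disagree about responsibility exactly when their membership views diverge.

        import bisect, hashlib

        def ring_id(name: str) -> int:
            # Place a node or key on a 160-bit identifier ring.
            return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

        def responsible_node(sorted_node_ids, key: str) -> int:
            # First node at or after the key's ring position, wrapping around.
            i = bisect.bisect_right(sorted_node_ids, ring_id(key))
            return sorted_node_ids[i % len(sorted_node_ids)]
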
  • Differentiated and predictable quality of service in Web server systems
    (2001) Aron, Mohit; Druschel, Peter
    As the World Wide Web experiences increasing commercial and mission-critical use, server systems are expected to deliver high and predictable performance. The phenomenal improvement in microprocessor speeds, coupled with the deployment of clusters of commodity workstations, has enabled server systems to meet the continually increasing performance demands in a cost-effective and scalable manner. However, as the volume, variety, and sophistication of services offered by server systems increase, effective support for providing differentiated and predictable quality of service has also become important. For example, it is often desirable to differentiate between the resources allocated to virtual web sites hosted on a server system so as to provide predictable performance to individual sites, regardless of the load imposed upon others. Server systems lack adequate support for providing predictable performance to hosted services in terms of metrics that are meaningful to server applications, such as average throughput and response time. This is because conventional systems multiplex resources in terms of system-level metrics such as CPU and disk bandwidth or memory pages. Maintaining predictable levels of performance in application-level metrics therefore requires a corresponding mapping to system-level metrics that is not supported in conventional systems. High-performance server systems based upon cluster-based architectures also lack adequate support for providing differentiated quality of service. This is because providing differentiated quality of service in a cluster-based server system requires global resource management not found in current clusters. This dissertation is concerned with support for both differentiated and predictable quality of service in server systems. The specific requirements of server applications and their interactions with the resource management facilities in the operating system software are studied. This leads to a concerted design of a resource management framework for providing effective quality of service in server systems.
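    One plausible shape for the application-to-system mapping is a feedback loop (the gain, floor, and metric names below are invented for illustration): periodically compare each service's observed application-level throughput with its target and nudge its system-level share accordingly.

        def adjust_shares(services, gain=0.1):
            # services: {name: {"target": req/s, "observed": req/s, "share": frac}}
            for s in services.values():
                error = (s["target"] - s["observed"]) / s["target"]
                s["share"] = max(0.01, s["share"] * (1 + gain * error))
            total = sum(s["share"] for s in services.values())
            for s in services.values():
                s["share"] /= total   # shares stay a valid partition of the resource
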
  • End-to-End TCP Congestion Control over ATM Networks
    (1997-02-12) Aron, Mohit; Druschel, Peter
    It is well documented that the effective throughput of TCP can suffer on plain ATM networks. Several research efforts have aimed at developing additions to ATM networks, like Early Packet Discard, that avoid TCP throughput degradation. This paper instead investigates improvements to TCP that allow it to perform well on ATM networks without switch-level enhancements, thus avoiding additional complexity and loss of layer transparency in ATM switches. We present an enhanced version of TCP Vegas and show that it performs nearly as well on plain ATM networks as on packet-oriented networks. Moreover, like the original TCP Vegas, our version achieves a significant increase in throughput over the BSD 4.4-Lite TCP. We also present an analysis of TCP dynamics over ATM networks, and a simulation-based study of the various enhancements in our TCP implementation and their impact on TCP performance over ATM and packet-oriented networks.
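    For context, the Vegas mechanism that such enhancements start from adjusts the window using measured rates rather than losses (a textbook rendering, not this paper's modified variant): when the actual rate falls too far below what the base RTT would allow, queues are building and the window shrinks before cell loss occurs.

        def vegas_update(cwnd, base_rtt, rtt, alpha=1, beta=3):
            # diff estimates the number of segments queued in the network:
            # (expected rate - actual rate) scaled by the base RTT.
            diff = (cwnd / base_rtt - cwnd / rtt) * base_rtt
            if diff < alpha:
                return cwnd + 1    # spare capacity: grow
            if diff > beta:
                return cwnd - 1    # queues building: back off before loss
            return cwnd            # inside the target band
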
  • Exploring the design space of cooperative streaming multicast
    (2009) Nandi, Animesh; Druschel, Peter
    Video streaming over the Internet is rapidly rising in popularity, but the availability and quality of video content is currently limited by the high bandwidth costs and infrastructure needs of server-based solutions. Recently, however, cooperative end-system multicast (CEM) has emerged as a promising paradigm for content distribution in the Internet, because the bandwidth overhead of disseminating content is shared among the participants of the CEM overlay network. In this thesis, we identify the dimensions in the design space of CEMs, explore the design space, and seek to understand the inherent tradeoffs of different design choices. In the first part of the thesis, we study the control mechanisms for CEM overlay maintenance. We demonstrate that the control task of neighbor acquisition in CEMs can be factored out into a separate control overlay that provides a single primitive: a configurable anycast for peer selection. The separation of control from data overlay avoids the efficiency tradeoffs that afflict some of the current systems. The anycast primitive can be used to build and maintain different data overlay organizations like single-tree, multi-tree, mesh-based, and hybrids, by expressing appropriate policies. We built SAAR, a reusable, shared control overlay that efficiently implements this anycast primitive and thereby serves the control needs of CEMs. In the second part of the thesis, we focus on techniques for data dissemination. We built a common framework in which different CEM data delivery techniques can be faithfully compared. A systematic empirical comparison of CEM design choices demonstrates that there is no single approach that is best in all scenarios. In fact, our results suggest that every CEM protocol is inherently limited in certain aspects of its performance. We distill our observations into a novel model that explains the inherent tradeoffs of CEM design choices and provides bounds on the practical performance limits of any future CEM protocol. In particular, the model asserts that no CEM design can simultaneously achieve all three of low overhead, low lag, and high streaming quality.
  • FeedTree: Scalable and prompt delivery for Web feeds
    (2007) Sandler, Daniel R.; Druschel, Peter
    An increasing number of Internet users now use Web feeds (or RSS feeds) to get their news, hear music and audio programs, and keep in touch. The result for website owners, however, is known as the "RSS bandwidth problem": because each feed reader polls every subscribed feed repeatedly for updates, the bandwidth demands of hosting a popular feed can be extreme. Our FeedTree system replaces this polling architecture with efficient and scalable application-level multicast based on the Pastry peer-to-peer overlay network. Instead of independently polling feed resources, FeedTree users cooperate to distribute feed updates; they substantially reduce the bandwidth burden placed on feed publishers, while updates arrive faster than with polling. In this thesis we explore the existing problems with Web feeds and describe the design and implementation of the FeedTree solution. We also share our experiences deploying FeedTree as a public Internet service.
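    A back-of-the-envelope comparison (all figures hypothetical) shows why polling dominates publisher bandwidth: polling cost scales with subscribers times polling rate, while with overlay multicast the publisher injects each update once and the peers share the fan-out.

        subscribers, polls_per_hour, response_kb = 10_000, 12, 10
        updates_per_hour = 2

        polling = subscribers * polls_per_hour * response_kb   # 1,200,000 KB/h
        multicast = updates_per_hour * response_kb             # 20 KB/h at the publisher
        print(f"publisher load: {polling} KB/h polled vs {multicast} KB/h multicast")
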
  • IO-Lite: A unified I/O buffering and caching system
    (1997-10-27) Druschel, Peter; Pai, Vivek; Zwaenepoel, Willy
    This paper presents the design, implementation, and evaluation of IO-Lite, a unified I/O buffering and caching system. IO-Lite unifies all buffering and caching in the system, to the extent permitted by the hardware. In particular, it allows applications, interprocess communication, the file system, the file cache, and the network subsystem to share a single physical copy of the data safely and concurrently. Protection and security are maintained through a combination of access control and read-only sharing. The various subsystems use (mutable) buffer aggregates to access the data according to their needs. IO-Lite eliminates all copying and multiple buffering of I/O data, and enables various cross-subsystem optimizations. Performance measurements show significant performance improvements on Web servers and other I/O-intensive applications.
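    The "(mutable) buffer aggregates" over a single physical copy can be pictured with a small data structure (a conceptual sketch, not IO-Lite's kernel API): immutable buffers hold the bytes once; each subsystem holds a mutable list of (buffer, offset, length) slices, so editing means re-pointing rather than copying.

        from dataclasses import dataclass, field

        @dataclass(frozen=True)
        class Buffer:
            data: bytes                 # the single, read-only physical copy

        @dataclass
        class Aggregate:
            slices: list = field(default_factory=list)   # (Buffer, offset, length)

            def append(self, buf, off=0, length=None):
                self.slices.append((buf, off, len(buf.data) if length is None else length))

            def prepend(self, header: bytes):
                # e.g. the network subsystem adds a header; payload is untouched
                self.slices.insert(0, (Buffer(header), 0, len(header)))

            def tobytes(self) -> bytes:
                # materialize only at a true consumption point
                return b"".join(b.data[o:o + n] for b, o, n in self.slices)
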
  • LALA: Location aware load aware overlay anycast
    (2004) Nandi, Animesh; Druschel, Peter
    Anycast is a powerful paradigm for managing and locating resources in decentralized distributed systems. Ideally, an anycast system must be scalable, location-aware, and load-aware. Location-awareness means that the anycast system should be able to locate a resource that is near the client in the network. Load-awareness means that it must be able to disperse load to avoid overloading group members in the case of high demand in a certain region of the network. Existing anycast systems are either location-aware or load-aware, but not both. We motivate LALA, a generic architecture for scalable, location-aware, load-aware anycast that realizes the following functionality: given a client request, our goal is to select the closest anycast server that has enough resources to satisfy the client's request. We show how LALA can be built on top of existing overlay anycast architectures and close with an evaluation that demonstrates its effectiveness.
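    The stated selection rule fits in one function (the load and distance representations are assumptions): load-awareness filters out servers without spare capacity, and location-awareness picks the nearest of those that remain.

        def lala_select(client, servers, distance):
            # servers: [{"name": ..., "load": 0.0-1.0}, ...];
            # distance(client, server) estimates network proximity.
            eligible = [s for s in servers if s["load"] < 1.0]
            if not eligible:
                return None   # regional demand spike: no server has room
            return min(eligible, key=lambda s: distance(client, s))
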
  • Measurement-Based Analysis, Modeling, and Synthesis of the Internet Delay Space for Large Scale Simulation
    (2006-10-04) Zhang, Bo; Ng, T. S. Eugene; Nandi, Animesh; Riedi, Rudolf H.; Druschel, Peter; Wang, Guohui
    The characteristics of packet delays among edge networks in the Internet can have a significant impact on the performance and scalability of global-scale distributed systems. Designers rely on simulation to study design alternatives for such systems at scale, which requires an appropriate model of the Internet delay space. The model must preserve the geometry and density distribution of the delay space, which are known, for instance, to influence the effectiveness of self-organization algorithms used in overlay networks. In this paper, we characterize measured delays between Internet edge networks with respect to a set of relevant metrics. We show that existing Internet models differ dramatically from measured delays relative to these metrics. Then, based on measured data, we derive a model of the Internet delay space. The model preserves the relevant metrics, allows for a compact representation, and can be used to synthesize delay data for large-scale simulations. Moreover, specific metrics of the delay space can be adjusted in a principled manner, thus allowing systems designers to study the robustness of their designs to such variations.
  • New approaches to routing for large-scale data networks
    (2000) Chen, Johnny; Druschel, Peter
    This thesis develops new routing methods for large-scale, packet-switched data networks such as the Internet. The methods developed increase network performance by considering routing approaches that take advantage of more available network resources than do current methods. Two approaches are explored: dynamic metric and multipath routing. Dynamic metric routing provides paths that change dynamically in response to network traffic and congestion, thereby increasing network performance because data travel less congested paths. The second approach, multipath routing, provides multiple paths between nodes and allows nodes to use these paths to best increase their network performance. Nodes in this environment achieve increased performance through aggregating the resources of multiple paths. This thesis implements and analyzes algorithms for these two routing approaches. The first approach develops hybrid-Scout, a dynamic metric routing algorithm that calculates independent and selective dynamic metric paths. These two calculation properties are key to reducing routing costs and avoiding routing instabilities, two difficulties commonly experienced in traditional dynamic metric routing. For the second approach, multipath routing, this thesis develops a complete multipath network that includes the following components: routing algorithms that compute multiple paths, a multipath forwarding method to ensure that data travel their specified paths, and an end-host protocol that effectively uses multiple paths. Simulations of these two routing approaches and their components demonstrate significant improvement over traditional routing strategies. The hybrid-Scout algorithm requires 3-4 times to 1-2 orders of magnitude less routing cost compared to traditional dynamic metric routing algorithms while delivering comparable network performance. For multipath routing, nodes using the multipath protocol fully exploit the offered paths and increase performance linearly in the additional resources provided by the multipath network. The performance improvements validate the multipath routing algorithms and the effectiveness of the proposed end-host protocol. Furthermore, this new multipath forwarding method allows multipath networks to be supported at low routing costs. This thesis demonstrates that the proposed methods to implement dynamic metric and multipath routing are efficient and deliver significant performance improvements.
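    On the end-host side, multipath use can be sketched as weighted striping (a minimal illustration; the thesis's protocol is more elaborate): each packet is sent down one of the available loop-free paths with probability proportional to that path's capacity, which is what lets hosts aggregate the paths' resources.

        import random

        def pick_path(paths):
            # paths: [{"id": ..., "capacity": Mbps}, ...]
            weights = [p["capacity"] for p in paths]
            return random.choices(paths, weights=weights, k=1)[0]["id"]
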
  • Online social networks: Measurement, analysis, and applications to distributed information systems
    (2009) Mislove, Alan E.; Druschel, Peter
    Recently, online social networking sites have exploded in popularity. Numerous sites are dedicated to finding and maintaining contacts and to locating and sharing different types of content. Online social networks represent a new kind of information network that differs significantly from existing networks like the Web. For example, in the Web, hyperlinks between content form a graph that is used to organize, navigate, and rank information. The properties of the Web graph have been studied extensively, and have led to useful algorithms such as PageRank. In contrast, few links exist between content in online social networks; instead, the links exist between content and users, and between users themselves. However, little is known in the research community about the properties of online social network graphs at scale, the factors that shape their structure, or the ways they can be leveraged in information systems. In this thesis, we use novel measurement techniques to study online social networks at scale, and use the resulting insights to design innovative new information systems. First, we examine the structure and growth patterns of online social networks, focusing on how users are connecting to one another. We conduct the first large-scale measurement study of multiple online social networks at scale, capturing information about over 50 million users and 400 million links. Our analysis identifies a common structure across multiple networks, characterizes the underlying processes that are shaping the network structure, and exposes the rich community structure. Second, we leverage our understanding of the properties of online social networks to design new information systems. Specifically, we build two distinct applications that leverage different properties of online social networks. We present and evaluate Ostra, a novel system for preventing unwanted communication that leverages the difficulty in establishing and maintaining relationships in social networks. We also present, deploy, and evaluate PeerSpective, a system for enhancing Web search using the natural community structure in social networks. Each of these systems has been evaluated on data from real online social networks or in a deployment with real users.
  • Operating system support for server applications
    (1999) Banga, Gaurav; Druschel, Peter
    General-purpose operating systems provide inadequate support for large-scale servers. Server applications lack sufficient control over scheduling and management of machine resources, which makes it difficult to enforce priority policies, and to provide robust and controlled service. For example, server applications cannot provide differentiated quality of service to requests from different clients. The root cause of these problems is a fundamental mismatch between the original design assumptions underlying the resource management mechanisms of current general-purpose operating systems, and the behavior of modern server applications. In particular, the notions of protection domain and resource principal coincide in the process abstraction of current operating systems. Moreover, these operating systems provide insufficient control to an application over the resources that are consumed inside the kernel on behalf of the application. These aspects of current operating systems prevent a server process that manages large numbers of network connections, for example, from properly allocating system resources among those connections. This dissertation addresses the lack of operating system support for fine-grained resource management in large-scale server systems. It starts by characterizing the nature of the mismatch between the design assumptions of current general-purpose operating systems, and the behavior of server applications. The traditional design of core operating system abstractions and APIs is reevaluated in the light of the requirements of server applications. This reevaluation leads to a set of novel operating system abstractions and APIs that serve to provide effective support for server applications.
  • Performance analysis of TLS Web servers
    (2003) Coarfa, Cristian; Druschel, Peter
    TLS is the protocol of choice for securing today's e-commerce and online transactions, but adding TLS to a web server imposes a significant overhead relative to an insecure web server on the same platform. We perform a comprehensive study of the performance costs of TLS. Our methodology is to profile TLS web servers with trace-driven workloads, replace individual components inside TLS with no-ops and measure the observed increase in server throughput. We estimate the relative costs of each TLS processing stage, identifying the areas for which future optimizations would be worthwhile. Our results show that while the RSA operations represent the largest performance cost in TLS web servers, they do not solely account for TLS overhead. RSA accelerators are effective for e-commerce site workloads since they experience low TLS session reuse. Accelerators appear to be less effective for sites at which all requests are handled by a TLS server, because they have a higher session reuse rate. In this case investing in a faster CPU might provide a greater boost in performance. Our experiments show that having a second CPU is at least as useful as an RSA accelerator. As CPUs become more powerful, the relative cost of the cryptographic components of TLS is decreasing; faster CPUs will eventually bridge the performance gap between TLS-secured and insecure web servers. Our results suggest that long-term research efforts should consequently focus on designing more efficient web servers.
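    The no-op methodology implies a simple cost model (the numbers below are placeholders, not the paper's measurements): stub out one TLS stage, remeasure throughput, and attribute the change to that stage.

        def stage_cost(baseline_rps, noop_rps):
            # Per-request time drops from 1/baseline to 1/noop when the stage
            # is stubbed; the ratio below is that stage's share of the time.
            return 1.0 - baseline_rps / noop_rps

        # Hypothetical run: 300 req/s with real RSA, 800 req/s with RSA stubbed.
        print(f"RSA accounts for ~{stage_cost(300, 800):.0%} of request time")
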